Welcoming Our AI Overlords - InformationWeek



Stephen Hawking warns artificial intelligence could end humanity, but science fiction often proposes a harmonious future for people and machines. Who's right?

Renowned physicist Stephen Hawking made headlines this week by warning of the potential for humanity's overthrow by artificial intelligence (AI). In an interview with the BBC, Hawking stated, "The development of full artificial intelligence could spell the end of the human race."

This is old news to science fiction fans. The genre has long speculated on what might happen when humans build machines that are stronger, faster, and more intelligent than their flesh-and-blood creators.

In the sci-fi canon, things often go poorly for flesh and blood. Skynet, a military computer network from the Terminator series, becomes self-aware and builds armies of machines to kill humans. The Matrix series also depicts a world ravaged by sentient machines that have enslaved humans for use as a renewable power source.

[A kinder, gentler perspective on AI: Our Robot-Filled Future: Not All Scary.]

But there are counterexamples in which humans and smart machines get on rather well.

Robert Heinlein's 1966 novel The Moon Is a Harsh Mistress includes a sentient supercomputer that befriends a group of rebel Moon colonists and helps them win their freedom from an authoritarian government on Earth. (IT pros will note with some satisfaction that the machine's first friend is a computer technician.)

Meanwhile, the Star Wars universe demonstrates a generally agreeable relationship between smart machines and humans. Aside from the occasional assassin droid or sadistic robot overseer, most robots are friendly, helpful, and even charming. It's the humans and other organic life forms who are out to exploit or exterminate one another.

The pinnacle of positive human/AI interaction in science fiction is Data, the android from Star Trek: The Next Generation. Despite having superior strength, senses, and computational power, Data shows no inclination to rule over his biological counterparts. In fact, the android yearns to become human.

Science fiction has also considered ways for humans to protect themselves against an AI overthrow. The most prominent example is Isaac Asimov's Three Laws of Robotics.

These laws, which are hard-coded into robots, are simple instructions intended to prevent machines from harming humans. However, many of Asimov's robot stories show how these laws can have unintended consequences.

So whose vision do you agree with? Are humans fated to be "superseded" as Dr. Hawking warns, or can we find ways to ensure that humans and sentient machines can coexist?

For myself, I think the real danger comes from how humans use the increasingly powerful machines that already exist, from weaponized drones to pervasive electronic surveillance. We don't need to wait for sentient machines to create a dystopian nightmare: There are plenty of tools in the toolbox to make it happen now.


Drew is formerly editor of Network Computing and currently director of content and community for Interop.
Drew Conry-Murray
User Rank: Ninja
12/5/2014 | 4:14:44 PM
Re: Could we sub AI for Congress?
As much as I dislike Congress as a body, I'm not quite ready to turn the reins of government over to a coldly logical computer. Though there are days, particularly during election season when the pandering and appeals to our most basic fears are in high gear, that I might be persuaded otherwise.
User Rank: Ninja
12/3/2014 | 11:06:08 PM
Machines Can't have Souls
Shklovskii and Sagan's wonderful book, "Intelligent Life in the Universe," details a convincing pathway from cosmic dust to stars, planets, macromolecules and viruses, and then from bacteria to humans. One thing it can't explain is self-awareness. No physics book can. I don't buy the argument that increased complexity leads to self-awareness.

Without self-awareness, there can be no greed, desire, good or evil.

What I do see is a continuation of the present trend. There will be no need for any people but the owners of these wonderful machines, and the geniuses that enhance and improve them. For everyone else, unemployment and poverty.
Charlie Babcock
User Rank: Author
12/3/2014 | 6:18:38 PM
Could we sub AI for Congress?
Hawking has had such keen insights, and stuck to them so tenaciously, that I'm afraid to ask how he foresees the danger of machine domination developing. Until I hear from him, I'll join Jeopardy contestant Ken Jennings in welcoming our new computer overlords. AI has a chance of figuring out sooner than Congress what will happen to our environment if we continue consuming it at this pace.
User Rank: Apprentice
12/3/2014 | 3:30:45 PM
It will happen at some point
Our grandchildren's, and certainly our great-grandchildren's, generation will have to wrestle hard with what it means to be human. Intelligent machines will fill their lives, and most likely purely corporeal bodies will be vestiges of a bygone era.
Thomas Claburn
User Rank: Author
12/3/2014 | 3:29:36 PM
Re: If it happens, we only have ourselves to blame
> Bad humans, as always, are the root source of concern.

Agreed. If we build a super-intelligent robot, program it to reproduce and survive at all costs, neglect to include an off switch, and arm it with weapons, it's not the AI that's dangerous. It's human stupidity.
Shane M. O'Neill
User Rank: Author
12/3/2014 | 2:55:11 PM
If it happens, we only have ourselves to blame
I worry more about AI in the wrong human hands than AI itself. Bad humans, as always, are the root source of concern.