Stephen Hawking warns artificial intelligence could end humanity, but science fiction often proposes a harmonious future for people and machines. Who’s right?

Andrew Conry Murray, Director of Content & Community, Interop

December 3, 2014


Renowned physicist Stephen Hawking made headlines this week by warning of the potential for humanity's overthrow by artificial intelligence (AI). In an interview with the BBC, Hawking stated, "The development of full artificial intelligence could spell the end of the human race."

This is old news to science fiction fans. The genre has long speculated on what might happen when humans build machines that are stronger, faster, and more intelligent than their flesh-and-blood creators.

In the sci-fi canon, things often go poorly for flesh and blood. Skynet, a military computer network from the Terminator series, becomes self-aware and builds armies of machines to kill humans. The Matrix series also depicts a world ravaged by sentient machines that have enslaved humans for use as a renewable power source.

[A kinder, gentler perspective on AI: Our Robot-Filled Future: Not All Scary.]

But there are counterexamples in which humans and smart machines get on rather well.

Robert Heinlein's 1966 novel The Moon Is a Harsh Mistress includes a sentient supercomputer that befriends a group of rebel Moon colonists and helps them win their freedom from an authoritarian government on Earth. (IT pros will note with some satisfaction that the machine's first friend is a computer technician.)

Meanwhile, the Star Wars universe demonstrates a generally agreeable relationship between smart machines and humans. Aside from the occasional assassin droid or sadistic robot overseer, most robots are friendly, helpful, and even charming. It's the humans and other organic life forms who are out to exploit or exterminate one another.

The pinnacle of positive human/AI interaction in science fiction is Data, the android from Star Trek: The Next Generation. Despite having superior strength, senses, and computational power, Data shows no inclination to rule over his biological counterparts. In fact, the android yearns to become human.

Science fiction has also considered ways for humans to protect themselves against an AI overthrow. The most prominent example is Isaac Asimov's Three Laws of Robotics.

These laws, which are hard-coded into robots, are simple instructions intended to prevent machines from harming humans. However, many of Asimov's robot stories show how these laws can have unintended consequences.

So whose vision do you agree with? Are humans fated to be "superseded" as Dr. Hawking warns, or can we find ways to ensure that humans and sentient machines can coexist?

For myself, I think the real danger comes from how humans use the increasingly powerful machines that already exist, from weaponized drones to pervasive electronic surveillance. We don't need to wait for sentient machines to create a dystopian nightmare: There are plenty of tools in the toolbox to make it happen now.


About the Author

Andrew Conry Murray

Director of Content & Community, Interop

Drew is a former editor of Network Computing and currently director of content and community for Interop.
