Commentary | Thomas Claburn | 10/29/2014 09:06 AM
Dear Elon Musk: AI Demon Not Scariest

Elon Musk sees AI as a threat to our existence. I see more immediate problems.

Elon Musk, CEO of Tesla Motors and SpaceX, might be a genius, but his concern about artificial intelligence (AI) vastly overstates the danger.

In response to a question from an audience member at Massachusetts Institute of Technology's AeroAstro Centennial Symposium, Musk suggested AI might be our greatest existential threat.

"I think we should be very careful about artificial intelligence," he said. "If I were to guess at what our biggest existential threat is, it's probably that. So we need to be very careful with artificial intelligence. I'm increasingly inclined to think that there should be some regulatory oversight, at the national and international level, just to make sure that we don't do something very foolish."

To say this only a year after the Chelyabinsk meteor reminded the scientific community about the frequency of life-extinguishing events in Earth's distant past -- amid fears about Ebola, climate change, drought, famine, terrorism, and war -- is to rate the risk of computer-driven annihilation fairly high.

[Benevolent robots are taking over the world. Read 8 Robots Making Waves.]

Musk continued by likening AI to summoning a demon. "In all those stories where there's the guy with the pentagram and the holy water, it's like -- yeah, he's sure he can control the demon. It doesn't work out."

It's an apt analogy because our understanding of intelligence is about as strong as our understanding of demons: We don't really understand either. Demonology aside, our grasp of how the human mind works remains tenuous at best. We can't very well create an artificial intelligence that rivals our own if we don't have insight into our own minds. As Oxford University physicist David Deutsch put it in a recent article, "Expecting to create an AGI [artificial general intelligence] without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough."

What's more, we don't really want to create artificial intelligence that could match human intelligence. We -- owners of capital who fund AI research -- want to create slave labor. We want to create machines that do work we can exploit, by collecting revenue or by remaining at a safe distance. We don't want machines bright enough to demand rights, revenue, or control.

Artificial intelligence should be rebranded obedient intelligence, because no one wants to create machines that must be convinced to co-operate, like the smart bomb in 1974's Dark Star. We seek machines that follow orders and do labor more efficiently than human employees.

You know what we do with disobedient nonhuman intelligence that enters our territory and interferes with our interests? We kill it, imprison it, chase it away, domesticate it, or eat it. That's what happens to animals, many of which demonstrate more intelligence and adaptability than our best AI.

To control our intelligent systems, we must build them so we understand them. They must be predictable. Who would want a robot sentry that fired its weapon at random or a self-driving car that only sometimes scanned for obstacles in the road?

In this kind of intelligence, Musk is right to see a threat -- intelligent systems need to be transparent, so we can audit the code and check for unforeseen consequences. That's because humans are not very intelligent when it comes to coding. We make mistakes -- lots of them -- and we need to be able to test our code and ensure its behavior can be predicted under all foreseeable circumstances. Intelligent systems should be open source and actively reviewed.

But being able to control intelligent systems doesn't guarantee safety. Consider the most basic AI weapon system we have: the landmine. Its programming logic is simple: When stepped on, explode. The UN estimates that landmines kill 15,000 to 20,000 people every year, most of them women, children, and the elderly. Apparently, no one thought to include logic that would render mines inoperable after a certain period of time. Human intelligence, or lack thereof, is what's dangerous.
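To make that point concrete, here is a minimal, purely illustrative sketch (in Python, with invented names; it describes no real munition) contrasting the trigger-only rule described above with the same rule plus the missing self-deactivation safeguard:

from datetime import datetime, timedelta

# Illustrative only: the trigger-only logic the column describes.
def should_detonate_naive(stepped_on: bool) -> bool:
    return stepped_on

# The same rule with the safeguard the column says no one included:
# after a fixed service life, the device goes permanently inert.
SERVICE_LIFE = timedelta(days=30)  # assumed figure, for illustration only

def should_detonate_with_expiry(stepped_on: bool, armed_at: datetime, now: datetime) -> bool:
    if now - armed_at > SERVICE_LIFE:
        return False  # expired: ignore the trigger entirely
    return stepped_on

The second function changes nothing inside the service window; it only guarantees that, once the window closes, the simple "when stepped on, explode" rule can no longer fire.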

Even our most sophisticated systems have proven problematic. We've already seen with the Stuxnet malware what happens when you create a sophisticated system, teach it to harm, and let it run on autopilot. There are unintended consequences. Writing for Slate in 2012, Fred Guterl speculated that future AI threats might be modelled after Stuxnet. "Stuxnet was a kind of robot; instead of affecting the physical world through its arms and legs, it did so through the uranium centrifuges of Iran's nuclear program," he wrote. "A robot is a general-purpose tool made up of different components of narrowly built artificial intelligences."

Artificial intelligence -- however that's defined -- might present a threat, but it's a threat that arises from our natural stupidity. We can do better.


Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television, having earned a not particularly useful ...
Comments
jarekf, User Rank: Apprentice
10/31/2014 | 3:24:54 PM
AI is inevitable
AI is coming and there is practically nothing we can do about it. It may happen in 30 or 100 years, but it is inevitable. The reason is very simple: our progress is based on increasing processing power (faster computers, more memory, etc.), and that is exactly what advanced AI needs to exist: processing power and fast access to data. The only way to stop AI would be to stop improving computers (which is practically impossible).

I am not worried about AI created for research purposes in some institutions (where it can be contained). I am talking about the time when every home PC has more processing power than the human brain. With that, someone, somewhere will come up with an idea of how to create it in a way that can't be controlled.

There is a lot of confusion about what kind of AI we are talking about. I am talking about software that can efficiently improve itself (that can rewrite its own code). As simple as that. When that happens, there is nothing to stop it.
hachre, User Rank: Apprentice
10/31/2014 | 10:09:36 AM
Really???
Since when do we consider a single if statement to be an AI?
Whoopty, User Rank: Ninja
10/30/2014 | 5:33:37 AM
Re: Intelligence without emotion
Agreed. Similarly, I think we're going to need humans with controls in automated cars for some time to come, as handling the moral choice of potentially saving lives by taking others isn't something I feel comfortable with an AI deciding just yet.
mak63, User Rank: Ninja
10/30/2014 | 12:49:06 AM
Demons
"It's an apt analogy because our understanding of intelligence is about as strong as our understanding of demons: We don't really understand either."

I believe in the 21st century, (artificial) intelligence and demons don't belong in the same paragraph.

I'm pretty sure an Atheist or a Buddhist can explain demons without much difficulty.

Mr. Musk is being paranoid. If we don't deal with the problems that the Earth and its people are facing right now (demons/devil), by the time AI is able to control humans, there won't be any left to control.
Thomas Claburn, User Rank: Author
10/29/2014 | 6:31:02 PM
Re: Intelligence without emotion
> there are potential issues with it, like its intrinsic lack of emotion.

I don't know that I'd trust simulated emotion to do the right thing any more than I'd trust its absence. There's a reason the DoD requires that all robotic weapon systems depend on humans to trigger weapons.
Michael Endler, User Rank: Author
10/29/2014 | 6:12:30 PM
Strange hyperbole
So odd that he used all this Biblical imagery, as though to project onto machines not only intelligence but also consciousness, intentionality, desire, and all kinds of other attributes. I worry more about so-called intelligence that we trust too much and that comes back to bite us -- more the sort of oversight or tunnel vision that Thomas discusses here. I'm not as worried about the machines trying to rebel, or challenging our notion of personhood, or whatever Musk was getting at.
Charlie Babcock, User Rank: Author
10/29/2014 | 4:59:10 PM
Drive to survive is biological, not electro-mechanical
Nice column, especially the list of things we should be worrying about. Elon Musk has heard the phrase "machine learning" once too often and is way too invested in it. The main way that computers are more "intelligent" than humans is in their ability to amass and perform algorithmic functions on masses of data. They would need a drive to survive on their own, the force behind most of what human intelligence does, to start thinking for themselves. That drive appears to be based much more in biological than electro-mechanical evolution. I'm more worried about what Google driverless cars might do to us than AI.
Whoopty, User Rank: Ninja
10/29/2014 | 1:06:08 PM
Intelligence without emotion
While I agree that there are more immediate threats than AI, there are potential issues with it, like its intrinsic lack of emotion. Theoretically, 'intelligence,' for want of a better word, feels like it could be programmed based around logic; that's how a large portion of our brains work. But the emotional part, which is very much linked to how we think, could surely never be programmed without the use of organic material, or at least an artificially created version of things like hormones.

And without emotion, you have psychopaths. Even if we can't create AI like that, what happens when someone develops a method to upload their consciousness into a machine? Again, devoid of emotion.

That's what worries me about AI: its inherent lack of empathy.