No, AI Won't Kill Us All - InformationWeek




When famous technologists and scientists fear the menace of thinking machines, it's time to worry, right? Not really, because computers lack the imagination to wreak havoc, says one AI expert.


The sentient, self-aware, and genuinely bad-tempered computer is a staple of science fiction -- the murderous HAL 9000 of 2001: A Space Odyssey being a prime example from the genre. Recently, though, more than a few of the world's top technological and scientific minds -- most notably Bill Gates, Stephen Hawking, and Elon Musk -- have warned humanity of the threat posed by artificial intelligence. In fact, AI has even been named one of the "12 risks that threaten human civilization," according to a new report from the Global Challenges Foundation and Oxford University’s Future of Humanity Institute.

Whoa. So perhaps it's time to step back from the precipice of Skynet-like apocalypse? Maybe focus on making computers a little less smart -- or at least less autonomous?

No, actually it's a good time to take a deep breath and relax, says Dr. Akli Adjaoute, founder and CEO of Brighterion, a San Francisco-based provider of AI and machine-learning software for healthcare, identity fraud, homeland security, financial services, mobile payments, and other industries. Adjaoute has a PhD in artificial intelligence and mathematics from Pierre and Marie Curie University in Paris, and has spent the past 15 years developing AI technologies for commercial applications.

In short, Adjaoute knows his stuff, and he says AI's ominous potential is vastly overblown.

(Image: No comparison: Kasparov versus Deep Blue via Stanford University)


In a phone interview with InformationWeek, Adjaoute provided a very simple reason the fear of malevolent, thinking machines is unfounded: Computers, unlike people, have no imagination.

"Suppose I'm on the 10th floor, and I'm talking to you from my office," said Adjaoute. "I say, 'Hey, could you please take this bucketful of water, and run to the reception [area] on the first floor?' What happens? You'll say, 'Oh, I will get wet, because the water will splash on me if I run.'"

The human mind, he noted, can imagine that carrying an open, sloshing bucket of water across office floors (and possibly down several flights of stairs) will likely cause water to spill out of the bucket and onto the carrier's clothing. That's imagination at work.

A computer lacks similar cognitive capabilities, however. Rather, it's very, very fast at carrying out instructions.

Even powerful AI systems such as IBM's Jeopardy!-winning Watson don't mimic the human brain. (The same can be said for IBM's Deep Blue computer, which in 1997 defeated world chess champion Garry Kasparov in a six-game match.)

"We don't claim that Watson is thinking in the way people think. It is working on a computational problem at its core," IBM research scientist Murray Campbell, one of the developers of Deep Blue, told the New York Times in 2011.

"The computer doesn't even know it's playing chess," said Adjaoute of Deep Blue. "It's just another level of stupid calculation."


As Allen Institute CEO Oren Etzioni recently told CNBC, AI's critics may be blurring the distinction between machines capable of performing instructions very efficiently, and truly autonomous systems that think and act independently.

"How are you going to have self-awareness if all the program does is look to the data, and analyze it with zeros and ones?" said Adjaoute. "How will it be aware of what it's doing? It's impossible."

He added: "I am tired of seeing artificial intelligence become the boogeyman of technology. There is something irrational about the fear of AI."


Jeff Bertolucci is a technology journalist in Los Angeles who writes mostly for Kiplinger's Personal Finance, The Saturday Evening Post, and InformationWeek.

D. Henschen,
User Rank: Author
2/24/2015 | 5:21:17 PM
Re: From where will computers get their paranoia?
Good point, Charlie. I think the people who imagine that AI might be a threat tend to be those who can imagine all the sorts of nefarious things that humans are capable of. Suspicious minds, as Elvis would have it.
Charlie Babcock,
User Rank: Author
2/24/2015 | 4:05:31 PM
From where will computers get their paranoia?
TerryB, for computers to conclude that something would work better with humans out of the way, they'd need to be both competitive and paranoid. That is, they'd need to understand that they were in a competition for resources with humans and conclude that they were likely to lose out if they don't get rid of humans. Such thinking, to me, is the exclusive domain of the human race and can't be projected onto computers, even AI machines.
User Rank: Ninja
2/24/2015 | 1:39:34 PM
Wishful Thinking
The scientist sounds exactly like the guy who invented SkyNet. It wasn't created to do bad things either. I'm struggling to understand the difference between imagination and faulty logic.

His only point that is absolutely true is that we are light-years from creating AI we have to be scared of yet. But in the longer run, I don't think it is out of the realm of "analyzing data" that a computer might conclude something would work better with humans out of the loop. And that something might be existence. :-)
User Rank: Ninja
2/24/2015 | 12:54:27 PM
I am human hear me roar
I always tell the masses here at the firm when they complain their system is doing this or that and how it "should know" - be glad it doesn't. I tell them the computer is only as smart as it is programmed to be and therefore YOU are smarter than the computer - otherwise it wouldn't need anyone there to provide any input to tell it what to do. It would just know. And YOU would be out of a job if that were the case.