SAS CTO Oliver Schabenberger counters Tesla CEO Elon Musk's description of artificial intelligence as "the scariest problem."

Guest Commentary

September 6, 2017


Tesla CEO Elon Musk has received a lot of criticism recently for saying at the National Governors Association meeting, “AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that.” Musk also referred to artificial intelligence (AI) technology as “the scariest problem” and called for government regulation. This is not new rhetoric; we have heard alarming language about AI as an existential threat to humanity for years now.

I prefer to be killed by my own stupidity rather than the codified morals of a software engineer or the learned morals of an evolving algorithm. But am I scared? No. Do I feel threatened? No.

It is certainly true that we have seen machines, devices, appliances, automobiles and software become increasingly capable over time. They have become increasingly intelligent only to the extent that we apply a machine-motivated definition of intelligence. The truth: Machines have become increasingly capable of performing human tasks.

Since mankind first emerged, humans have transformed objects into tools. By smashing two rocks together, they created spearheads. With the Industrial Revolution, tools became increasingly automated. Today, robots do most of the work in factories. Thermostats respond to temperature changes without our intervention. Still, no one would argue that these automated tools think for themselves.

But is this long-standing paradigm about to change? With the rise of AI, will our tools and machines start to truly think for themselves? Do we risk becoming slaves to machine masters?

The frenzied alarms are rooted in the belief that once artificial intelligence takes hold, it will develop so quickly that we cannot control its negative effects, superintelligence in machines will develop, and the rest is – or will be – history.

I do not believe that this will happen, at least not in this form, and not any time soon. Let us look at where artificial intelligence stands today.

Where does AI stand?

AI has become a catchphrase for doing things smarter, in a more automated, autonomous way. While people tend to call anything clever or unexpected "AI," most smart tools do not actually qualify. After all, even a simple calculator is better at arithmetic than we will ever be, but it is not artificial intelligence.

"The artificial intelligence systems of today learn from data – they learn only from data. These systems cannot grow beyond the limits of the data by creating, innovating or reasoning."

Still, there is a growing trend toward increasingly intelligent automation. 

Advanced analytics is progressing from an approach based on static models crafted by statisticians and data scientists to analytic systems based on machine learning. Rather than placing the model at the center of analytic processing, we use a data-driven approach where we automate the generation of features, the determination of the best algorithm, and the model deployment. The systems can update themselves as new data become available. They are learning systems in that sense.
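That shift from hand-crafted models to data-driven model selection can be sketched in a few lines of plain Python. Everything here is illustrative – the synthetic data, the two candidate models, and the scoring rule – but it shows the pattern: fit several candidates, score each on held-out data, and let the data pick the winner.

```python
import random

# Synthetic data: y is roughly linear in x, with some noise.
random.seed(0)
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.5)) for x in range(40)]
train, holdout = data[:30], data[30:]

def fit_mean(pts):
    """Baseline model: always predict the mean of y."""
    mean_y = sum(y for _, y in pts) / len(pts)
    return lambda x: mean_y

def fit_linear(pts):
    """Least-squares line y = a*x + b."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def holdout_error(model, pts):
    """Mean squared error on held-out points."""
    return sum((model(x) - y) ** 2 for x, y in pts) / len(pts)

# "Determine the best algorithm" automatically from the data;
# rerunning this as new data arrives lets the system update itself.
candidates = {"mean": fit_mean, "linear": fit_linear}
scores = {name: holdout_error(fit(train), holdout)
          for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # the linear model should win on this data
```

A production system would loop over far richer model families and automate feature generation as well, but the control flow – candidates, scores, argmin – is the same.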

Still, no intelligence.

With the advent of deep learning, machines are beginning to solve problems in a novel way: by writing the algorithms themselves. The software developer who codifies a solution through programming logic is replaced by a data scientist who defines and trains a deep neural network. The expert who studied and learned a domain is replaced by a reinforcement learning algorithm that discovers the rules of play from historical data.
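A toy illustration of that replacement, in plain Python: instead of a developer hard-coding the rule y = 2x − 1, a single trainable "neuron" recovers it from examples by gradient descent. Real deep networks stack many such units, but the training loop is the same in spirit.

```python
# Training data for the rule a developer would once have hard-coded: y = 2x - 1.
samples = [(x, 2.0 * x - 1.0) for x in [-2, -1, 0, 1, 2, 3]]

# A one-neuron "network": prediction = w * x + b, trained by gradient descent.
w, b = 0.0, 0.0
lr = 0.05  # learning rate
for _ in range(2000):
    for x, y in samples:
        err = (w * x + b) - y
        # Gradient steps for the squared error with respect to w and b.
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))  # converges toward 2.0 and -1.0
```

No one told the program the rule; the parameters were learned from the data – which is exactly the sense in which the algorithm "writes itself."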

We are learning incredible lessons in this process:

  • Games we thought could never be mastered by a machine, such as Go, have been learned by one. And the algorithm did not just figure out how to play, it beat the best player in the world – repeatedly.

  • Problems once solved in the classical way – by writing code – are now handled with greater accuracy by learned algorithms. Examples include voice-to-text, image classification and document summarization.

  • Deep domain knowledge is replaced by sufficiently large and accurate data from which an algorithm can learn – and acquire skills.

Whether it’s autonomous vehicles, image classification, emotion detection, chat bots, speech-to-text or language translation, deep learning has already revolutionized our lives.


Significant blind spots

But does the rise of such highly sophisticated deep learning mean that machines will soon surpass their makers? They are surpassing us in reliability, accuracy and throughput. But they are not surpassing us in thinking or learning. Not with today’s technology.

The artificial intelligence systems of today learn from data – they learn only from data. These systems cannot grow beyond the limits of the data by creating, innovating or reasoning. Even a reinforcement learning system that discovers rules of play from past data cannot develop completely new rules or new games. It can apply the rules in a novel and more efficient way, but it does not invent a new game. The machine that learned to play Go better than any human being does not know how to play Poker.

An object classification system cannot recognize an object it has not been told about during training. The machines trained with today’s technology will not figure out on their own that ice is frozen water.
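A minimal sketch of that blind spot, using an illustrative nearest-neighbor classifier in plain Python (the animals and measurements are made up): whatever input it receives, it can only answer with a label it saw during training.

```python
# Toy 1-nearest-neighbor classifier over (weight_kg, height_cm) features.
training = [
    ((4.0, 25.0), "cat"),
    ((5.0, 28.0), "cat"),
    ((20.0, 55.0), "dog"),
    ((30.0, 60.0), "dog"),
]

def classify(features):
    """Return the label of the closest training example."""
    def sq_dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(training, key=lambda item: sq_dist(item[0], features))[1]

# A horse (500 kg, 160 cm) is still forced into a known category:
print(classify((500.0, 160.0)))  # "dog" -- the model cannot say "horse"
```

However sophisticated the model, its universe of answers is bounded by its training data; anything outside it gets shoehorned into the nearest known category.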

None of this means that AI is not powerful. It has the potential to transform virtually every industry. It delivers efficiencies, capabilities, and performance at a previously unknown level. And we will need all of that to keep pace with the demand for automated analytic insight in a connected world drowning in data.

Where to from here?

True intelligence requires creativity, innovation, intuition, independent problem solving, self-awareness and sentience. Systems built on deep learning do not – and cannot – have these characteristics. They are trained by top-down, supervised methods: we first tell the machine the ground truth so that it can discover the regularities in it. They do not grow beyond that.

But they certainly can – and do – enhance our own intelligence. Indeed, since data scientists currently spend 80% of their time on data management, one way AI can help is by handling those data wrangling tasks. By giving data scientists more time for actual analytics, AI can make the data scientist more valuable and their jobs more enjoyable.
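One such wrangling chore, sketched in plain Python with made-up records: automatically filling in missing values with a column mean. Automating even this simple step returns time to the data scientist for actual analytics.

```python
# Records with missing values (None), as a data scientist might receive them.
rows = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 61000},
    {"age": 45, "income": None},
]

def impute_means(records):
    """Fill each missing numeric field with that field's mean."""
    filled = [dict(r) for r in records]  # work on copies
    for key in filled[0]:
        vals = [r[key] for r in filled if r[key] is not None]
        mean = sum(vals) / len(vals)
        for r in filled:
            if r[key] is None:
                r[key] = mean
    return filled

clean = impute_means(rows)
print(clean[1]["age"], clean[2]["income"])  # 39.5 56500.0
```

Real pipelines use smarter imputation strategies, but the point stands: this is rote work a machine can absorb.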

What about true, creative, human-like artificial intelligence – what is sometimes referred to as artificial general intelligence?

We will not get there without a disruptive new technology – which we have not found yet. Until then, despite our persistent qualms about machines’ ability to rule over humans, you can be sure we will keep looking.


Oliver Schabenberger is Executive Vice President and Chief Technology Officer, SAS.

About the Author(s)

Guest Commentary

The InformationWeek community brings together IT practitioners and industry experts with IT advice, education, and opinions. We strive to highlight technology executives and subject matter experts and use their knowledge and experiences to help our audience of IT professionals in a meaningful way. We publish Guest Commentaries from IT practitioners, industry analysts, technology evangelists, and researchers in the field. We are focusing on four main topics: cloud computing; DevOps; data and analytics; and IT leadership and career development. We aim to offer objective, practical advice to our audience on those topics from people who have deep experience in these topics and know the ropes. Guest Commentaries must be vendor neutral. We don't publish articles that promote the writer's company or product.
