What Intelligent Machines Can Do, And What They Can't
From AlphaGo to cancer detection to data center efficiency, we talk to analytics practitioner and SAS CTO Oliver Schabenberger about what AI can and can't do. Here's what he said.
Are killer machines coming to annihilate mankind? Are we headed for a dystopian future where robots are our overlords? Are the Cylons already among us? Are concerns voiced by industry icons such as Elon Musk, who has donated millions to The Future of Life Institute, warranted?
Oliver Schabenberger recently added a more measured voice to this debate in this commentary piece that he wrote for InformationWeek, pointing out that machines "are not surpassing us in thinking or learning." Schabenberger is the CTO of analytics software company SAS, and InformationWeek recently sat down with him to find out more about his thoughts on the opportunities presented by AI and more. Here's what he said.
Oliver Schabenberger is Executive Vice President and Chief Technology Officer, SAS.
First, in terms of definitions, Schabenberger notes that AI is really about using a computerized system to perform a human task: something we could do ourselves but choose to delegate to a machine. That definition is very different from creating a "thinking machine."
Rather, "the revolution is that we are now able to do those tasks with an accuracy that was previously not possible," Schabenberger told me in an interview. "We are interacting with our devices now with voice. Some years ago the accuracy was just not there for that to be a satisfying interaction, and now it is. And that changes our perception."
But these new functionalities are still very limited, according to Schabenberger. For instance, when it comes to understanding language, machines don't have context to understand conversation.
"The systems we are building with deep learning right now don't understand context," he said. "If you interact with a chat bot, you will learn very quickly that they don't understand the humanness of conversations. They can't understand sarcasm."
(At least one project is underway to help bots learn the art of sarcasm.)
But while machines may be limited in the art of conversation, satire, and wit, they excel at pattern recognition and processing huge volumes of data. Right now, that's where organizations are finding great opportunity to deploy AI. One application, for instance, is in looking at human moles to help detect those that exhibit the telltale signs of being melanomas.
"When a machine looks at a melanoma it doesn't see 'area, border, and color,'" Schabenberger said, referring to what doctors look for when examining human moles for signs of cancer. "It just sees patterns and pixels. When we fed it images over and over again, it recognized the difference between a mole and a melanoma because we call it that. That is top-down, supervised training." We define melanoma for the machine and show it images that fit that description.
A machine that can review thousands of images far faster than a radiologist can will help doctors diagnose cancer correctly and quickly, and that's a benefit to humanity. By examining large numbers of images, the machine learns to detect patterns that we say fit a particular definition.
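The top-down, supervised training Schabenberger describes can be sketched in a few lines. The example below is a minimal illustration, not a real melanoma detector: the three-pixel "images," the labels, and the single-neuron (perceptron) classifier are all invented for the sketch. The point is that we define the label, and the model only ever maps pixel patterns to that label.

```python
# Minimal sketch of top-down, supervised training (hypothetical data).
# Labels (0 = mole, 1 = melanoma) are defined by us; the model only
# ever sees pixel values and the label we attach to them.

def train(examples, epochs=200, lr=0.1):
    """Perceptron-style training on (pixels, label) pairs."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for pixels, label in examples:
            pred = 1 if sum(wi * x for wi, x in zip(w, pixels)) + b > 0 else 0
            err = label - pred
            w = [wi + lr * err * x for wi, x in zip(w, pixels)]
            b += lr * err
    return w, b

def predict(model, pixels):
    w, b = model
    return 1 if sum(wi * x for wi, x in zip(w, pixels)) + b > 0 else 0

# Assumption for the toy data: "melanomas" are simply darker patches.
labeled = [
    ([0.1, 0.2, 0.1], 0),  # mole
    ([0.2, 0.1, 0.2], 0),  # mole
    ([0.9, 0.8, 0.9], 1),  # melanoma
    ([0.8, 0.9, 0.8], 1),  # melanoma
]
model = train(labeled)
print(predict(model, [0.15, 0.15, 0.1]))   # resembles the moles -> 0
print(predict(model, [0.85, 0.9, 0.85]))   # resembles the melanomas -> 1
```

Showing the system "images over and over again," as Schabenberger puts it, is exactly the loop inside `train`: each pass nudges the weights toward the labels we supplied.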
Machines can do more, faster. Schabenberger gives another common example, talking about AlphaGo, the system that beat the best human player in the world at the game of Go.
The system used a standard neural network with 133 million parameters.
"I'm a statistician. I taught statistics at a university. I joined SAS. I've never built a model with that many parameters. I've hardly built a model with 1,000 parameters," Schabenberger told me.
These systems can just do more than humans can. But a side effect of that is that we don't really understand why they work or why they don't work. You can't use a debugger to figure it out, Schabenberger said.
When these systems don't work, the fix is to feed them more information. So when an image recognition system says an apple is a pineapple, the only thing we can do is to show it more images of pineapples so that it recognizes pineapples, Schabenberger told me.
"But that doesn't guarantee that doing that won't make it forget something else it already knew. That's called catastrophic forgetfulness."
Catastrophic forgetfulness, more commonly known as catastrophic forgetting, is described in this Bloomberg article as leaving the network in "a kind of perpetual present: every time the network is given new data, it overwrites what it has previously learned."
Schabenberger says that when this happens, "the algorithm changes, and you don't quite know how… This is where it turns into half art and half science to figure out how to feed it what it needs."
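The effect is easy to reproduce in miniature. The sketch below uses a deliberately tiny model, a single weight fit by gradient descent, and two invented "tasks": after the model is trained on task A and then retrained on task B alone, nothing of task A survives. Real networks forget more gradually, but the mechanism is the same overwriting of shared parameters.

```python
# Toy illustration of catastrophic forgetting: sequential training
# on new data overwrites what was learned before. The model (one
# weight, y ~ w*x) and both tasks are hypothetical.

def sgd_fit(w, data, lr=0.5, steps=100):
    """Fit y ~ w*x by plain gradient descent, from an existing weight."""
    for _ in range(steps):
        for x, y in data:
            w -= lr * (w * x - y) * x  # gradient of squared error
    return w

task_a = [(1.0, 2.0)]   # teaches the model w ~= 2
task_b = [(1.0, -3.0)]  # teaches the model w ~= -3

w_after_a = sgd_fit(0.0, task_a)
print(round(w_after_a, 2))           # 2.0: the model "knows" task A

w_after_b = sgd_fit(w_after_a, task_b)
print(round(w_after_b, 2))           # -3.0: task A has been overwritten
```

Nothing in the second training run penalizes drifting away from the task-A solution, so the weight simply moves to wherever the new data points; that is the "perpetual present" the Bloomberg description refers to.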
Schabenberger believes it will be a big step forward for "creative smartness" when we can move away from this kind of supervised training, where we define the truth and let the machine map its way to finding it, toward an unsupervised approach, where the algorithm explores on its own.
"That kind of system is closer to a system that behaves more like us," he said. It still doesn't have a brain, according to Schabenberger. But it can recognize patterns and clusters and decide on the next best action -- much like the Alpha Go optimization system did.
"We've already proven these systems can be trained," he said, pointing to efforts to apply AI to improve data center efficiency.
Such optimization systems can be applied to games and data centers. The next step may be applying them to marketing. Schabenberger expects systems like this to be created in the next few years.
In marketing, such systems could be applied to help optimize the customer journey.
"It's about maximizing the lifetime value of the relationship with the customer," he said. "Given your history and where you are now, what would be your next move?"