Artificial intelligence and machine learning have distinct limitations. Businesses looking to implement AI need to understand where these boundaries are drawn.

Mark Runyon, Director of Consulting, Improving

September 17, 2020


Although we are still in the infancy of the AI revolution, there’s not much artificial intelligence can’t do. From business dilemmas to societal issues, it is being asked to solve thorny problems that lack traditional solutions. With all this promise, are there any limits to what AI can do?

Yes, artificial intelligence and machine learning (ML) do have some distinct limitations. Any organization looking to implement AI needs to understand where these boundaries are drawn so it doesn’t get into trouble by expecting artificial intelligence to be something it’s not. Let’s take a look at three key areas where AI gets tripped up.

1. The problem with data

AI is powered by machine learning algorithms. These algorithms, or models, eat through massive amounts of data to recognize patterns and draw conclusions. The models are trained with labeled data that mirrors the countless scenarios the AI will encounter in the wild. For example, doctors must tag each x-ray to denote whether a tumor is present and, if so, what type. Only after reviewing thousands of x-rays can an AI correctly label new x-rays on its own. This collection and labeling of data is an extremely time-intensive process for humans.
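
To make that workflow concrete, here is a minimal sketch using scikit-learn. The synthetic features and labels below are invented stand-ins for the tagged x-rays, purely for illustration, not a real diagnostic model.

```python
# A minimal sketch of supervised learning on labeled data.
# The features and labels are synthetic stand-ins for tagged x-rays.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Pretend each row summarizes one image with a few numeric features,
# and each label is the tag a doctor supplied (1 = tumor, 0 = no tumor).
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)  # learning happens only on the labeled examples

print("accuracy on unseen cases:", model.score(X_test, y_test))
```

The point of the sketch is the dependency, not the model: without the thousands of human-supplied labels in y, there is nothing for the algorithm to learn from.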

In some cases, we lack enough data to adequately build the model. Autonomous automobiles are having a bumpy ride dealing with all the challenges thrown at them. Consider a torrential downpour where you can’t see two feet in front of the windshield, much less the lines on the road. Can AI navigate these situations safely? Trainers are logging hundreds of thousands of miles to encounter these hard use cases, see how the algorithm reacts, and make adjustments accordingly.

Other times, we have enough data, but we unintentionally taint it by introducing bias. Racial arrest records for marijuana possession are a case in point. A Black person is 3.64 times more likely to be arrested than a white person, which could lead us to conclude that Black people are heavy marijuana users. Yet without analyzing usage statistics, we would fail to see the mere 2% difference in usage between the races. We draw the wrong conclusions when we don’t account for inherent biases in our data, and the problem is compounded further when we share flawed datasets.
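
As a back-of-the-envelope illustration of that base-rate trap, the snippet below uses made-up usage and arrest figures (assumptions for illustration, not the cited statistics) to show how similar behavior can still produce a very lopsided arrest ratio.

```python
# Illustrative, assumed numbers -- not the cited statistics.
usage_rate = {"Black": 0.14, "white": 0.12}        # assumed usage per capita, ~2-point gap
arrest_rate = {"Black": 0.0035, "white": 0.00096}  # assumed arrests per capita

usage_ratio = usage_rate["Black"] / usage_rate["white"]
arrest_ratio = arrest_rate["Black"] / arrest_rate["white"]

print(f"usage ratio:  {usage_ratio:.2f}x")   # close to 1 -> similar behavior
print(f"arrest ratio: {arrest_ratio:.2f}x")  # ~3.6x -> bias baked into the data source
```

A model trained on the arrest records alone would learn the enforcement pattern, not the underlying behavior.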

Whether it’s the manual nature of labeling data or a lack of quality data, there are promising solutions. Reinforcement learning could one day shift humans from taggers to supervisors in the process. This method of training robots by applying positive and negative reinforcement could be used to train AI models as well. When it comes to missing data, virtual simulations may help us bridge the gap by mimicking the target environment so our models can learn outside the physical world.
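
For readers curious what positive and negative reinforcement look like in code, here is a minimal, hypothetical Q-learning loop on a toy five-state corridor. The environment and reward values are invented for illustration and don’t represent any real tagging or driving task.

```python
# A toy reinforcement-learning loop: the agent earns +1 for reaching the goal
# state and a small penalty for every other step (positive/negative reinforcement).
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:    # reaching the last state ends the episode
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))   # explore
        else:
            action = int(Q[state].argmax())         # exploit what has been learned
        next_state = min(max(state + (1 if action == 1 else -1), 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else -0.01
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.round(Q, 2))  # the learned values end up favoring moves toward the reward
```

No human labeled any of these steps; the reward signal alone shapes the behavior, which is why the technique is attractive when hand-tagged data is scarce.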

2. The black box effect

Any software program is underpinned by logic. A set of inputs fed into the system can be traced through to see how they produce the results. Things aren’t as transparent with AI. Because these systems are built on neural networks, the end result can be hard to explain. We call this the black box effect: we know it works, but we can’t tell you how. That causes problems. When a candidate fails to get a job or a criminal receives a longer prison sentence, we have to show that the algorithm was applied fairly and is trustworthy. A web of legal and regulatory entanglements awaits us when we can’t explain how these decisions were made within the caverns of these large deep learning networks.

The best way to overcome the black box effect is to isolate individual features of the model and feed it different inputs to see what difference each one makes. In a nutshell, it’s humans interpreting what the AI is doing, and that is hardly an exact science. More work needs to be done to get AI across this sizable hurdle.
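
One common flavor of that input-probing is permutation importance: shuffle one feature at a time and watch how much the model’s accuracy drops. The sketch below assumes a scikit-learn random forest on synthetic data, purely for illustration of the idea.

```python
# Probe a black-box model by perturbing its inputs one feature at a time.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: accuracy drop {score:.3f}")
```

Techniques like this explain which inputs matter, but not the internal reasoning, which is why the black box problem remains only partially solved.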

3. Generalized systems are out of reach

Anyone worried that AI will take over the world in some Terminator-style future can rest comfortably. Artificial intelligence is excellent at pattern recognition, but you can’t expect it to operate on a higher level of consciousness. Steve Wozniak called this the coffee test: Can a machine enter a typical American home and make a cup of coffee? That includes finding the coffee grounds, locating a mug, identifying the coffee machine, adding water, and hitting the right buttons. This capability is referred to as artificial general intelligence, the point at which AI makes the leap to simulating human intelligence. While researchers work diligently on the problem, others question whether AI will ever achieve it.

AI and ML are evolving technologies, and today’s limitations are tomorrow’s successes. The key is to continue to experiment and find where AI can add value to the organization. We should recognize AI’s limitations, but we shouldn’t let them stand in the way of the revolution.

About the Author(s)

Mark Runyon

Director of Consulting, Improving

Mark Runyon works as a director of consulting for Improving. For the past 20 years, he has designed and implemented innovative technology solutions for companies in the finance, logistics, and pharmaceutical space. He is a frequent speaker at technology conferences and is a contributing writer at The Enterprisers Project. He focuses on IT management, DevOps, cloud, and artificial intelligence. Mark holds a Master of Science in Information Systems from Georgia State University.
