To liberate the full potential of machine learning as fast as possible, CIOs, CEOs, and society in general must first get over fears of losing control.

Guest Commentary

May 9, 2019

5 Min Read

By now we should know that machines tend to improve our quality of life. Over the past 100 years or so, the unfolding of each new breakthrough technology -- from electricity to the automobile to the internet -- has brought trepidation, then ultimately acceptance. Despite these past experiences, our society seems to have a peculiarly strong resistance to the latest technologies radically transforming our lives: artificial intelligence (AI) and machine learning (ML).

Perhaps that is not so unreasonable, as the AI/ML revolution is quite different from the revolutions that came before it: It involves giving up, rather than increasing, our control over machines. The internet may be a disruptive force, even an anarchic one, but at the end of the day, it remains under human control. A web page will always do what a human web designer or programmer coded it to do, bugs and glitches notwithstanding. ML models are a different thing altogether. Yes, we humans set up the parameters within which they run, but ultimately they make decisions according to logic our minds can’t always interpret.

Even the way we interact with computers has changed. In the last 15 years alone, we have gone from controlling them with a keyboard and mouse to swiping their screens with our fingertips to simply talking to them as we would to another person, thanks to advances in natural language processing. Not only can computers do more complex work than they could in 1998 or 2008, but we also interact with them in a way that puts them on almost an equal footing with us. No wonder many find the AI/ML revolution a bit unnerving.

Then there is the lightning-fast pace of change. The field of AI/ML is constantly evolving: models from 2018 are significantly faster and more accurate than ones from 2017, and each new innovation further accelerates the pace. For example, the past year or so has seen wider application of transfer learning, a time-saving technique that lets data scientists adapt an existing pre-trained model to a new task, as the sketch below illustrates. Transfer learning lets those with fewer resources piggyback on the work of major research institutions and big tech companies, vastly decreasing the time and resources required to build a highly sophisticated and accurate model.
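To make that concrete, here is a minimal transfer-learning sketch. PyTorch/torchvision, the ResNet backbone, and the five-class downstream task are illustrative assumptions, not details from the article: a network pre-trained on ImageNet is loaded, its layers are frozen, and only a small new classification head is left to train.

```python
import torch.nn as nn
from torchvision import models

# Start from a network someone else pre-trained on ImageNet.
model = models.resnet18(pretrained=True)

# Freeze the pre-trained layers so their weights stay fixed.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer with one sized for the new task
# (num_classes is hypothetical); only this small head gets trained.
num_classes = 5
model.fc = nn.Linear(model.fc.in_features, num_classes)
```

Because only the new head is trained, a team without a big-tech compute budget can reach strong accuracy with a fraction of the data and time that training from scratch would require.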

It’s an understandable impulse to cling to the familiar by keeping humans in total control of computing processes. Doing so, however, would be incredibly wrongheaded. Society stands to gain immensely from the automation of rote work and from the greater accuracy of ML-driven insight. So how can we move forward when the ultimate hurdle for the technology is not the pace of possible advancement, but rather the mindset of its masters?

The answer may be that those of us who wish to advance AI/ML solutions must take care to address stakeholders’ concerns while also emphasizing the technology’s potential benefits. It is vital to offer a positive vision for the future, not just assurances against harm.

The healthcare industry is dealing with this challenge as it gradually adopts ML-driven tools that supplement human judgment. Researchers at Google recently trained deep learning algorithms to gauge a patient’s cardiovascular health based on photos of their eyes. In the future, doctors may not have to rely on blood tests to measure a patient’s cholesterol levels or their risk of heart disease.

In our experience, medical personnel often have credible concerns that hospital management implementing an AI/ML solution should take care to address. Will the technology add a step to their already hectic workflow? If an ML model makes a mistake that affects a patient’s well-being, could a nurse’s license be on the line? Healthcare providers may be more willing to try out AI/ML solutions once these issues are discussed, along with the time and labor savings of automation.

The stakes are different, but no less critical, for CEOs tasked with managing financial, legal, and reputational risk for an entire organization. Here, transparency becomes a key concern in two directions.

Current regulatory frameworks depend on transparency of outcome, meaning that firms need to be able to explain to regulators why a certain decision was made or how a certain result came about. Most AI/ML solutions, however, are “black boxes” that can offer no such rationale. This can make conversations with regulators uneasy, an understandable concern for any business leader, but especially for those in the pharmaceutical, finance, and healthcare industries. For an AI/ML evangelist in these sectors, crafting a plan that addresses how a solution will interact with regulations will be key. However, so will painting a picture of the higher profit margins and increased efficiencies that become possible with AI/ML integration.
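One way to start opening the box is with a model-agnostic explanation technique such as permutation importance. Here is a minimal sketch, assuming scikit-learn and a stand-in dataset and model (the article prescribes no particular method): shuffle each input feature in turn and measure how much held-out accuracy drops, which flags the inputs the model actually relies on.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A stand-in tabular dataset and an opaque ensemble model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn; a large drop in held-out accuracy
# flags an input the model genuinely depends on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: mean accuracy drop {score:.3f}")
```

A report like this is not a complete rationale, but it gives compliance teams and regulators a concrete starting point for asking why a model behaves the way it does.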

Greater transparency of outcomes with AI/ML might be possible with improvements in transparency of data. Today, firms keep a tight grip on proprietary data, as well as on sensitive medical, insurance, and financial information. That can make it difficult to see what data has been fed into a given AI/ML system, which exacerbates the “black box” problem. It’s a bit of a Catch-22: Firms want to know how a given model works before they feed it their precious data, but until they loosen their grip on that data, it will be hard to understand how any model works. As AI/ML becomes more trusted, a second step will be for regulators, business leaders, and other stakeholders to open access to protected data and increase transparency.

Technology makes things faster, easier, and more cost-effective. As a result, machines are likely to make more decisions for us, and in some cases to outpace us in how quickly they can grow and develop. Companies, despite the profit opportunities, may not fully adapt to this reality until society does as well. To liberate the full potential of machine learning as fast as possible, we will need a broader shift in our way of thinking as the technology progresses. Transparency about how an algorithm predicts an outcome and what data it uses is a first step toward that change.

Manuel Amunategui is vice president of data science at SpringML, where his team specializes in building predictive analytics solutions across multiple industries. He recently published the book Monetizing Machine Learning and is also a prolific LinkedIn blogger with a strong following.


About the Author(s)

Guest Commentary

The InformationWeek community brings together IT practitioners and industry experts with IT advice, education, and opinions. We strive to highlight technology executives and subject matter experts and use their knowledge and experiences to help our audience of IT professionals in a meaningful way. We publish Guest Commentaries from IT practitioners, industry analysts, technology evangelists, and researchers in the field. We focus on four main topics: cloud computing; DevOps; data and analytics; and IT leadership and career development. We aim to offer objective, practical advice on those topics from people who have deep experience in the field and know the ropes. Guest Commentaries must be vendor neutral. We don't publish articles that promote the writer's company or product.
