Chief information officers and other IT leaders can reduce or amplify risks depending on how they answer these five questions.

Guest Commentary

May 29, 2019

What is the leading barrier to the adoption of artificial intelligence? I believe it is a lack of trust. CIOs, enterprise leaders and developers want to have confidence in their AI systems and to build trust with their external stakeholders. Yet we know from experience, and from several high-profile examples, that there are serious risks when AI is used without a robust governance and ethics framework.

An increasing number of industries are embracing AI because it can help people do their jobs better and more efficiently. But will doctors continue to trust AI as a diagnostic tool if it produces a misdiagnosis that puts a patient’s health at risk? Will car dealers use AI to determine the creditworthiness of customers if qualified buyers are denied, leading to lost sales?

Below are five key questions every company should ask when working with developers to design an AI agent. How your organization answers these questions can either reduce or amplify risks -- and greatly impact the trustworthiness of AI.

What are your AI goals?

For example, you may develop an AI agent -- a program that can make decisions or perform a service based on its environment, user input and experiences -- to see, recognize and classify images. If that technology is used to classify pictures in an online photo album, it has a much lower risk profile than if it is used to detect objects in front of an autonomous vehicle that must decide whether to stop or go.

How complex is what you’re trying to achieve?

It’s important to consider how many different capabilities are required to achieve the AI agent’s goal. Sensing and perceiving meaning from video is more complex than doing the same from still images. Working with unstructured textual data requires optical character recognition (OCR) to first read the data and natural language processing (NLP) to then contextualize it. The more capabilities required of the AI agent, the greater the risk of systemic failure.
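To make that compounding risk concrete, here is a minimal sketch of such a two-stage pipeline. It assumes the pytesseract OCR wrapper and the spaCy NLP library (with its en_core_web_sm model) are installed; the file name and the entity-extraction step are purely illustrative.

```python
# Two-stage pipeline: OCR reads raw text from a scanned page, then
# NLP contextualizes it. Each added capability is a separate failure
# point, so errors can compound across stages.
import pytesseract                  # wrapper around the Tesseract OCR engine
import spacy                        # NLP library
from PIL import Image

def extract_entities(image_path: str):
    # Stage 1: OCR -- read the text out of the image.
    text = pytesseract.image_to_string(Image.open(image_path))

    # Stage 2: NLP -- contextualize the text; here, named entities.
    nlp = spacy.load("en_core_web_sm")   # assumes the model is downloaded
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]

# A stage-1 misread ("l0an" for "loan") silently degrades stage 2,
# which is how chained capabilities raise the risk of systemic failure.
print(extract_entities("scanned_contract.png"))   # hypothetical file
```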

Is your environment stable or variable?

For an AI agent that predicts the creditworthiness of loan applicants, the environment may be stable if the data arrives in a structured format with little variability. By contrast, an autonomous vehicle operating on the open road faces an environment that is highly variable and unpredictable. That variability significantly increases the risk of prediction error because the AI agent is operating in a dynamic environment and uncharted territory.
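Even a nominally stable environment can drift over time, so it is worth verifying that production inputs still resemble the training data. The sketch below shows one common approach, a two-sample Kolmogorov-Smirnov test from SciPy; the synthetic income data and the significance threshold are assumptions chosen for illustration.

```python
# Check whether live inputs still resemble the training data -- a
# simple signal that the agent's environment is drifting.
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(train_values, live_values, alpha: float = 0.01) -> bool:
    # Two-sample Kolmogorov-Smirnov test: a small p-value means the
    # live distribution no longer matches the training distribution.
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
train_income = rng.normal(55_000, 12_000, size=10_000)  # training snapshot
live_income = rng.normal(61_000, 15_000, size=2_000)    # shifted live data

if has_drifted(train_income, live_income):
    print("Environment drift detected -- review or retrain the model.")
```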

How could bias be introduced into your AI’s predictions?

AI agents that are designed to make predictions about people often carry a risk of bias. When AI agents process data about people, developers need to consider a long list of individual characteristics, such as ethnicity, age, gender and sexual orientation, and determine whether those characteristics could influence the decisions or actions of the AI agent.

For each AI use case, it’s important to consider the diversity of the full population the agent will serve and to put measures in place to monitor how well the AI agent performs across all of those groups.
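As one illustration of such a measure, the sketch below compares approval rates across groups with pandas and computes a disparate impact ratio. The column names, toy data and the roughly-0.8 rule of thumb are assumptions for illustration, not a complete fairness audit.

```python
# Compare model approval rates across demographic groups.
import pandas as pd

def approval_rates(df: pd.DataFrame, group_col: str) -> pd.Series:
    # Share of positive decisions within each group.
    return df.groupby(group_col)["approved"].mean()

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

rates = approval_rates(decisions, "group")
ratio = rates.min() / rates.max()   # disparate impact ratio
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")  # well below ~0.8 warrants review
```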

What is the level of human involvement?

The fifth consideration, and one that is commonly used today to mitigate the risks of AI, is the level of human involvement in the decisions or actions taken by the AI agent. Many organizations use AI to augment human operators: the AI agents provide insights, but humans still make the final decision and execute the final action. As the level of autonomy of AI agents increases, so too must the mechanisms that continuously monitor their performance.
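A common way to implement that augmentation pattern is a confidence gate: the agent acts autonomously only when its confidence clears a threshold, and everything else escalates to a person. The sketch below is hypothetical; the model interface and the 0.95 threshold are assumptions for illustration.

```python
# A confidence gate: the agent decides on its own only for cases it
# is confident about; the rest are routed to a human operator.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str

def triage(label: str, confidence: float, threshold: float = 0.95) -> Decision:
    if confidence >= threshold:
        return Decision(label, confidence, decided_by="ai_agent")
    # Low-confidence cases fall back to human review; logging this
    # split also yields a running measure of how autonomous the agent is.
    return Decision(label, confidence, decided_by="human_review")

print(triage("approve_loan", 0.98))   # AI decides on its own
print(triage("approve_loan", 0.71))   # escalated to a human operator
```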

It’s clear that most organizations want to harness AI’s full potential to fuel future growth, but to do so, they will need to adopt governance and ethical standards that build trust and security into their systems.

Drawing on 25 years as a technology risk advisor, Cathy Cobey is EY's Global Trusted AI Advisory Leader. She leads a global team that considers the ethical and control implications of artificial intelligence and autonomous systems. Cobey leverages her unique background as a CPA and her involvement with the EY Climate Change & Sustainability practice to consider the full spectrum of technological and societal implications in intelligent automation development. She also serves on several technical advisory committees to develop industry and regulatory standards for emerging technology.

About the Author(s)

Guest Commentary

The InformationWeek community brings together IT practitioners and industry experts with IT advice, education, and opinions. We strive to highlight technology executives and subject matter experts and to use their knowledge and experience to help our audience of IT professionals in a meaningful way. We publish Guest Commentaries from IT practitioners, industry analysts, technology evangelists, and researchers in the field. We focus on four main topics: cloud computing; DevOps; data and analytics; and IT leadership and career development. We aim to offer objective, practical advice on those topics from people who have deep experience in them and know the ropes. Guest Commentaries must be vendor neutral. We don't publish articles that promote the writer's company or product.
