Navigate Turbulence with the Resilience of Responsible AI - InformationWeek


Commentary
7/22/2020 07:00 AM
Scott Zoldi, Chief Analytics Officer, FICO

Navigate Turbulence with the Resilience of Responsible AI

Extraordinary economic conditions require brand-new analytic models, right? Not if existing predictive models are built with responsible AI. Here's how to tell.

Image: Pixabay

The COVID-19 pandemic has caused data scientists and business leaders alike to scramble, looking for answers to urgent questions about the analytic models they rely on. Financial institutions, companies and the customers they serve are all grappling with unprecedented conditions, and a loss of control that may seem best remedied with completely new decision strategies. If your company is contemplating a rush to crank out brand-new analytic models to guide decisions in this extraordinary environment, wait a moment. Look carefully at your existing models, first.

Existing models that have been built responsibly -- incorporating artificial intelligence (AI) and machine learning (ML) techniques that are robust, explainable, ethical, and efficient -- have the resilience to be leveraged and trusted in today's turbulent environment. Here’s a checklist to help determine if your company’s models have what it takes. 

Robustness

In an age of cloud services and open source, there are still no "fast and easy" shortcuts to proper model development. AI models that are produced with the proper data and scientific rigor are robust, and capable of thriving in tough environments like the one we are experiencing now.

A robust AI development practice includes a well-defined development methodology; proper use of historical, training and testing data; a solid performance definition; careful model architecture selection; and processes for model stability testing, simulation and governance. Importantly, all these factors must be adhered to by the entire data science organization. 

Let me emphasize the importance of relevant data, particularly historic data. Data scientists need to assess, as much as possible, all the different customer behaviors that might be encountered in the future: suppressed incomes during a recession, or hoarding behaviors during natural disasters, to name just two. Additionally, the models' assumptions must be tested to make sure they can withstand wide shifts in the production environment.
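One common way to quantify such shifts is the population stability index (PSI), which compares the distribution of a model input (or score) at development time against what production currently sees. The sketch below uses synthetic data, and the thresholds in the comment are the rule-of-thumb values typically quoted, not hard limits:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (development) distribution and a
    current (production) distribution. Rule of thumb: < 0.1 is
    usually read as stable, > 0.25 as a major shift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover out-of-range values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(600, 50, 10_000)  # e.g., scores at development time
shifted = rng.normal(570, 60, 10_000)   # scores under stressed conditions
print(round(population_stability_index(baseline, shifted), 3))
```

Running such a check on every model input, not just the final score, helps pinpoint which assumption broke when the environment moved.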

Explainable AI

Neural networks can find complex nonlinear relationships in data, leading to strong predictive power, a key component of AI. But many organizations hesitate to deploy "black box" machine learning algorithms because, while their mathematical equations are often straightforward, deriving a human-understandable interpretation is often difficult. The result is that even ML models with strong business value may be inexplicable -- a quality incompatible with regulated industries -- and thus never reach production.

To overcome this challenge, companies can use a machine learning technique called interpretable latent features. This yields an explainable neural network architecture whose behavior can be easily understood by human analysts. Notably, as a key ingredient of responsible AI, model explainability should be the primary goal, followed by predictive power.
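The article doesn't spell out a specific architecture, but the core idea can be sketched: constrain each hidden (latent) feature to a small, named set of inputs, so an analyst can read its meaning directly from the connectivity. The feature names, groupings, and weights below are entirely hypothetical:

```python
import numpy as np

# Each latent feature draws on at most two named inputs, so its meaning
# is legible from the connectivity mask alone (hypothetical names).
FEATURES = ["utilization", "payment_ratio", "inquiries", "account_age"]
LATENT = {
    "credit_stress":  ["utilization", "payment_ratio"],  # spending vs. repayment
    "credit_seeking": ["inquiries", "account_age"],      # appetite for new credit
}

mask = np.array([[1.0 if f in inputs else 0.0 for f in FEATURES]
                 for inputs in LATENT.values()])
rng = np.random.default_rng(1)
weights = rng.normal(size=mask.shape) * mask  # zero outside allowed inputs

def latent_activations(x):
    """tanh hidden layer; the sparsity mask keeps each unit interpretable."""
    return np.tanh(weights @ x)

x = np.array([0.8, 0.3, 4.0, 2.5])  # one hypothetical applicant
print(dict(zip(LATENT, latent_activations(x).round(3))))
```

Because each unit's inputs are fixed and named up front, an analyst can explain a score change by pointing at the handful of inputs behind the latent feature that moved.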

Ethical AI

ML learns relationships in data to fit a particular objective function (or goal). It will often form proxies for inputs that were deliberately excluded, and these proxies can exhibit bias. From a data scientist's point of view, ethical AI is achieved by taking precautions to expose what the underlying machine learning model has learned, and to test whether it could impute bias.

These proxies can be activated more by one data class than another, resulting in the model producing biased results. For example, if a model includes the brand and version of an individual’s mobile phone, that data can be related to the ability to afford an expensive cell phone -- a characteristic that can impute income and, in turn, bias.

A rigorous development process, coupled with visibility into latent features, helps ensure that the analytics models your company uses function ethically. Latent features should continually be checked for bias in changing environments.
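As a minimal illustration of such a check, one can compare a latent feature's average activation across two data classes; a large gap flags a potential proxy for group membership. The data below is synthetic and the "large gap" threshold is illustrative, not a regulatory standard:

```python
import numpy as np

def proxy_bias_gap(activations, group):
    """Mean activation gap of a latent feature between two data classes.
    A large gap suggests the feature may proxy for group membership."""
    g = np.asarray(group, dtype=bool)
    return float(abs(activations[g].mean() - activations[~g].mean()))

rng = np.random.default_rng(2)
group = rng.random(5_000) < 0.5
# Hypothetical "device affordability" feature that tracks the group:
biased = rng.normal(0, 1, 5_000) + 0.8 * group
neutral = rng.normal(0, 1, 5_000)
print(round(proxy_bias_gap(biased, group), 2))   # large gap: investigate
print(round(proxy_bias_gap(neutral, group), 2))  # near zero
```

In practice this kind of test would be rerun on fresh production data, since a feature that was neutral at development time can become a proxy as the environment shifts.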

Efficient AI

Efficient AI doesn't refer to building a model quickly; it means building it right the first time. To be truly efficient, models must be designed from inception to run within an operational environment -- one that will change. These models are complicated and cannot be left to each data scientist's artistic preferences. Rather, to achieve Efficient AI, models must be built according to a company-wide model development standard, with shared code repositories, approved model architectures, sanctioned variables, and established bias testing and stability standards. This dramatically reduces errors in model development that would otherwise surface in production, cutting into anticipated business value and harming customers.

As we have seen with the COVID-19 pandemic, when conditions change we must know how the model responds, what it is sensitive to, and how to determine whether it remains unbiased and trustworthy -- or whether the strategies that use it should change. Being efficient means having those answers codified through a model development governance blockchain that persists information about the model. This approach puts every development detail at your fingertips -- which is exactly what you'll need during a crisis.
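The governance mechanism itself isn't detailed in the article, but the underlying idea can be sketched as a hash chain: each governance record is bound to its predecessor by a cryptographic hash, so any later edit to the history is detectable. The record fields and event names below are hypothetical:

```python
import hashlib
import json

def append_record(chain, record):
    """Append a model-governance record, chained to its predecessor by a
    SHA-256 hash so the development history is tamper-evident."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"prev": prev, "record": record, "hash": digest})
    return chain

chain = []
append_record(chain, {"model": "risk_v2", "event": "variables approved"})
append_record(chain, {"model": "risk_v2", "event": "bias test passed"})
# Any edit to an earlier record invalidates every later hash:
assert chain[1]["prev"] == chain[0]["hash"]
```

The point is not the data structure but the property it gives you in a crisis: every approval, test result, and design decision is on record and provably unaltered.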

Altogether, achieving responsible AI isn’t easy, but in navigating unpredictable times, responsibly developed analytic models allow your company to adjust decisively, and with confidence.

Scott Zoldi is Chief Analytics Officer of FICO, a Silicon Valley software company. He has authored 110 patent applications, with 56 granted and 54 pending.
