Researchers working directly with machine learning models face the challenge of minimizing cases of unjust bias.

Guest Commentary

August 21, 2020

5 Min Read

Artificial intelligence systems derive their power from learning to perform their tasks directly from data. As a result, AI systems are at the mercy of their training data and, in most cases, cannot learn anything beyond what that data contains.

Data by itself has some fundamental problems: it is noisy, almost never complete, and dynamic, continually changing over time. The noise can manifest in many ways -- incorrect labels, incomplete labels, or misleading correlations. Because of these problems, most AI systems must be very carefully taught how to make decisions, act, or respond in the real world. This ‘careful teaching’ involves three stages.

Stage 1: In the first stage, the available data must be carefully modeled to understand its underlying distribution despite its incompleteness. That incompleteness can make the modeling task nearly impossible, and the ingenuity of the scientist comes into play in making sense of the data anyway. This data modeling step can include data pre-processing, augmentation, labeling, and partitioning, among other steps. In this first stage of "care," the AI scientist also partitions the data into special subsets with the express intent of minimizing bias in the training step. This first stage of care requires solving an ill-defined problem and can therefore evade rigorous solutions.
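To make the partitioning idea concrete, here is a minimal sketch (an illustration, not the author's pipeline) of a bias-aware split using scikit-learn. The toy dataframe and its "label" and "group" columns are assumptions invented for the example: the split is stratified jointly on the label and a sensitive attribute so that neither partition under-represents any group.

# A minimal sketch of bias-aware data partitioning (Stage 1).
# The dataframe and its "label"/"group" columns are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy dataset: one feature, a target label, and a sensitive attribute.
df = pd.DataFrame({
    "feature": range(1000),
    "label":   [i % 2 for i in range(1000)],
    "group":   ["A" if i % 4 else "B" for i in range(1000)],
})

# Stratify jointly on label and group so both partitions preserve
# the proportions of every (label, group) combination.
strata = df["label"].astype(str) + "_" + df["group"]
train_df, val_df = train_test_split(
    df, test_size=0.2, random_state=42, stratify=strata
)

# Sanity check: group/label proportions should look similar in both splits.
print(train_df.groupby(["group", "label"]).size() / len(train_df))
print(val_df.groupby(["group", "label"]).size() / len(val_df))

In a real pipeline, the same idea extends to cross-validation folds, or to deliberately holding out entire groups to test how well the model generalizes beyond them.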

Stage 2: The second stage of "care" involves carefully training the AI system to minimize biases. This includes detailed training strategies to ensure that training proceeds in an unbiased manner from the very beginning. In many cases, this step is left to standard mathematical libraries such as TensorFlow or PyTorch, which address training from a purely mathematical standpoint, without any understanding of the human problem being addressed. As a result, many applications served by such AI systems miss the opportunity to use training strategies tailored to controlling bias. Attempts are being made to incorporate bias-mitigation steps and bias-discovery tests into these libraries, but they fall short because they are not customized to a particular application. It is therefore likely that such industry-standard training processes further exacerbate the problems that the incompleteness and dynamic nature of data already create. However, with enough ingenuity from the scientists, it is possible to devise careful training strategies that minimize bias in this step.
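As one concrete example of such a training-time strategy (a common technique, not one specific to this article), the sketch below reweights a PyTorch loss by inverse class frequency so that an under-represented class is not effectively ignored during training. The toy tensors and the tiny network are assumptions made purely for illustration.

# Illustrative sketch: counteracting class imbalance during training
# by weighting the loss, one simple bias-mitigation strategy.
import torch
import torch.nn as nn

# Assumed toy data: 900 examples of class 0, 100 examples of class 1.
features = torch.randn(1000, 16)
labels = torch.cat([torch.zeros(900, dtype=torch.long),
                    torch.ones(100, dtype=torch.long)])

# Inverse-frequency class weights so the rare class still drives learning.
counts = torch.bincount(labels).float()
class_weights = counts.sum() / (len(counts) * counts)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")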

Stage 3: Finally, in the third stage of care, data is forever drifting in a live production system, so AI systems have to be carefully monitored by other systems or humans to catch performance drift and to trigger the appropriate correction mechanisms to nullify it. Researchers must therefore develop the right metrics, mathematical techniques, and monitoring tools to address this performance drift, even when the initial AI system is only minimally biased.
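One simple way to make such monitoring concrete, offered as an illustrative sketch rather than a complete solution, is to compare the distribution of model scores seen in production against a reference window captured at training time, for example with a two-sample Kolmogorov-Smirnov test. The synthetic scores, window sizes, and alert threshold below are all assumptions.

# Minimal sketch of production drift monitoring (Stage 3):
# compare live model scores against a training-time reference window
# using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Assumed data: reference scores from validation, and a live window
# whose distribution has shifted slightly.
reference_scores = rng.normal(loc=0.6, scale=0.1, size=5000)
live_scores = rng.normal(loc=0.55, scale=0.12, size=1000)

statistic, p_value = ks_2samp(reference_scores, live_scores)

# The alert threshold is an arbitrary choice for illustration.
ALERT_P_VALUE = 0.01
if p_value < ALERT_P_VALUE:
    print(f"Drift alert: KS={statistic:.3f}, p={p_value:.2e}")
else:
    print(f"No significant drift: KS={statistic:.3f}, p={p_value:.2e}")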

Two other challenges

In addition to the biases within an AI system that can arise at each of the three stages outlined above, there are two other challenges with AI systems that can cause unknown biases in the real world.

The first is related to a major limitation of current-day AI systems -- they are almost universally incapable of higher-level reasoning; the exceptional successes exist in controlled environments with well-defined rules, such as AlphaGo. This lack of higher-level reasoning greatly limits these AI systems from self-correcting in a natural or interpretive manner. One may argue that AI systems could develop their own methods of learning and understanding that need not mirror the human approach, but that raises concerns about obtaining performance guarantees for such systems.

The second challenge is their inability to generalize to new circumstances. As soon as we step into the real world, circumstances constantly evolve, while current-day AI systems continue to make decisions and act from their previous, incomplete understanding. They are incapable of transferring concepts from one domain to a neighboring domain, and this lack of generalizability has the potential to create unknown biases in their responses. This is where the ingenuity of scientists is again required to protect against surprises in the responses of these AI systems. One protection mechanism is to build confidence models around such AI systems. The role of these confidence models is to solve the ‘know when you don’t know’ problem. An AI system can be limited in its abilities yet still be deployed in the real world, as long as it can recognize when it is unsure and ask for help from human agents or other systems. When designed and deployed as part of the AI system, these confidence models can keep unknown biases from wreaking uncontrolled havoc in the real world.
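In its simplest form, such a 'know when you don't know' wrapper can threshold the model's own confidence score and route anything below the threshold to a human reviewer. The sketch below is only an assumption-laden illustration; in practice the confidence model is usually a separate estimator rather than the raw softmax output, and the threshold would be tuned for the application.

# Illustrative sketch of a confidence wrapper that defers to a human
# when the model's top softmax probability falls below a threshold.
import numpy as np

CONFIDENCE_THRESHOLD = 0.85  # arbitrary assumption for illustration


def predict_or_defer(probabilities: np.ndarray) -> str:
    """probabilities: softmax output of the primary model for one input."""
    top_class = int(np.argmax(probabilities))
    confidence = float(probabilities[top_class])
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"predict class {top_class} (confidence {confidence:.2f})"
    return f"defer to human review (confidence {confidence:.2f})"


print(predict_or_defer(np.array([0.05, 0.92, 0.03])))  # confident prediction
print(predict_or_defer(np.array([0.40, 0.35, 0.25])))  # unsure, so defer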

Finally, it is important to recognize that biases come in two flavors: known and unknown. Thus far we have explored the known biases, but AI systems can also suffer from unknown biases. These are much harder to protect against, but AI systems designed to detect hidden correlations can discover them: when supplementary AI systems are used to evaluate the responses of the primary AI system, they can surface unknown biases. This type of approach is not yet widely researched and, in the future, may pave the way for self-correcting systems.
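As a rough illustration of what such a supplementary audit could look like (an assumption for this article, not a description of any particular product), one can check whether the primary model's error rate differs across groups defined by an attribute the model never saw during training:

# Illustrative sketch of a supplementary audit: test whether the primary
# model's errors correlate with an attribute withheld from training,
# which can surface a hidden (unknown) bias.
import numpy as np

rng = np.random.default_rng(1)

# Assumed audit data: ground truth, primary-model predictions, and an
# attribute the model never saw (e.g., a demographic group).
y_true = rng.integers(0, 2, size=2000)
group = rng.choice(["A", "B"], size=2000)
# Simulate a model that is slightly worse on group "B".
noise = np.where(group == "B", 0.25, 0.10)
y_pred = np.where(rng.random(2000) < noise, 1 - y_true, y_true)

errors = (y_pred != y_true)
for g in ["A", "B"]:
    print(f"error rate for group {g}: {errors[group == g].mean():.3f}")

# A large gap between group error rates is a signal worth investigating.
gap = abs(errors[group == "A"].mean() - errors[group == "B"].mean())
print(f"error-rate gap: {gap:.3f}")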

In conclusion, while the current generation of AI systems has proven extremely capable, these systems are also far from perfect, especially when it comes to minimizing biases in their decisions, actions, or responses. However, we can still take the right steps to protect against the known biases.


Mohan Mahadevan is VP of Research at Onfido. Mohan previously served as Head of Computer Vision and Machine Learning for Robotics at Amazon and, before that, led research efforts at KLA-Tencor. He is an expert in computer vision, machine learning, AI, and data and model interpretability. Mohan has over 15 patents in areas spanning optical architectures, algorithms, system design, automation, robotics, and packaging technologies. At Onfido, he leads a team of specialist machine learning scientists and engineers based out of London.

 

