Is Explainable AI Explainable Enough Yet?

The “black box” nature of some types of deep learning inspired an outcry for explainable AI. It’s a tough problem to solve for a lot of reasons.

Lisa Morgan, Freelance Writer

September 24, 2024


As artificial intelligence usage continues to increase, a problem lurking in the background grows larger by the day: AI’s inability to explain itself so it’s clear what led to an outcome. Deep learning’s opacity has driven demands for explainable AI (XAI), and it doesn’t help that researchers often can’t explain how the models they build work or why. The increasing complexity of deep learning and large language models is making XAI both more difficult to achieve and a higher priority.

“We have been able to achieve explainability through techniques like LIME, SHAP, and attention visualization, but that only provides a partial understanding of complex AI systems,” says Aditi Godbole, senior data scientist at enterprise application software provider SAP, in an email interview. “Even though we have mechanisms to explain certain aspects of many deep learning models’ decision-making, providing comprehensive and intuitive explanations is still challenging.” 
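For readers unfamiliar with these techniques, here is a minimal sketch of the kind of partial explanation Godbole describes, using SHAP to decompose a tabular model’s predictions into per-feature contributions. The dataset and model are illustrative placeholders only, not anything SAP uses.

```python
# A minimal sketch of post-hoc feature attribution with SHAP.
# The dataset and model below are illustrative placeholders.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)              # explainer specialized for tree models
shap_values = explainer.shap_values(X.iloc[:100])  # per-feature contributions, per row

# Each row's contributions (plus the expected value) sum to the model's output,
# so a single prediction can be decomposed feature by feature.
print(dict(zip(X.columns, shap_values[0].round(3))))
```

The catch, as Godbole notes, is that such attributions explain one prediction at a time; they do not amount to a comprehensive account of how the whole system behaves.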

Several advances are needed to improve the state of explainable AI, she says, including more sophisticated techniques and industry standards that ensure consistency and comparability. Also necessary are interdisciplinary collaboration among fields like computer science, cognitive psychology, and communication to produce more effective explanations, and well-defined regulatory frameworks with clear guidelines and requirements.

“More often than not, the higher the accuracy provided by an AI model, the more complex and less explainable it becomes, which makes developing explainable AI models challenging,” says Godbole. “The premise of these AI systems is that they can work with high-dimensional data and build non-linear relationships that are beyond human capabilities. This allows them to identify patterns at a large scale and provide higher accuracy. However, it becomes difficult to explain this non-linearity and provide simple, intuitive explanations in understandable terms.” 

Other challenges include providing explanations that are both comprehensive and easily understandable, and businesses’ hesitance to explain their systems fully for fear of divulging intellectual property (IP) and losing their competitive advantage.

“As we make progress towards more sophisticated AI systems, we may face greater challenges in explaining their decision-making processes. For autonomous systems, providing real-time explainability for critical decisions could be technically difficult, even though it will be highly necessary,” says Godbole. “When AI is used in sensitive areas, it will become increasingly important to explain decisions that have significant ethical implications, but this will also be challenging. As the complexity of systems using multiple interacting AI agents increases, explaining the resulting behavior could become extremely complex.” 

Explainable AI is rapidly evolving as more businesses express the need for it, according to Maitreya Natu, chief data scientist at Digitate, a SaaS-based provider of autonomous enterprise software for IT and business operations.

“Going forward, it won’t just be a wise choice, but a necessity. Industries would have to look for creative ways to find a golden middle between the transparency of the white-box solutions and the accuracy of the black-box solutions,” says Natu in an email interview. “Formal design methods will evolve to bring explainability by design from the very early stages of developing AI solutions, and elaborate frameworks will get set to evaluate the explainability levels of AI solutions and define its tangible benefits with regard to trust, adoption, regulatory compliance, etc. Explainable AI will set the foundation for new ways of augmented intelligence through human-machine collaboration.” 

XAI’s key drivers include the issues black-box solutions introduce, such as overfitting, spurious correlations, difficulty in maintaining performance, unexpected behavior, potential unfairness, and more. Explainability helps prevent these issues and allows human experts to more effectively train, provide feedback to, and course-correct an AI model, Natu says. It can also open up new forms of augmented intelligence in which AI models truly benefit from the intuition of an expert and the experience of a seasoned practitioner.

When designing complex analytics pipelines, careful decisions should be made to leave enough hooks for backward traceability and explainability. In other words, if an application is viewed as a complex flowchart and hooks are provided at every decision point, then a trace from leaf to root can offer a fair degree of explainability for the generated output.
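A hypothetical sketch of what such hooks might look like in practice: each pipeline step records its inputs, output, and rationale, so the final result can be traced back from leaf to root. The step names and logic are invented purely for illustration.

```python
# Hypothetical sketch of "hooks" for backward traceability in an analytics pipeline.
# Each step records what it decided and why, so the final output can be traced
# from leaf (result) back to root (raw inputs).
from dataclasses import dataclass, field

@dataclass
class Trace:
    steps: list = field(default_factory=list)

    def record(self, step, inputs, output, reason):
        self.steps.append({"step": step, "inputs": inputs, "output": output, "reason": reason})

def clean(raw, trace):
    result = [x for x in raw if x is not None]
    trace.record("clean", raw, result, "dropped null readings")
    return result

def score(values, trace):
    result = sum(values) / len(values)
    trace.record("score", values, result, "mean of cleaned readings")
    return result

trace = Trace()
output = score(clean([10, None, 14, 12], trace), trace)

# Walk the trace from leaf to root to explain the output.
for step in reversed(trace.steps):
    print(f"{step['step']}: {step['output']} because {step['reason']}")
```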

“Black-box algorithms are inevitable for many scenarios that are inherently complex, such as fraud detection or complex event predictions. But these algorithms can be complemented with post-hoc explainability methods that enable interpretability by analyzing the response function of a machine learning model,” says Natu. “There will be cases where comprehensive reasoning cannot be inferred, but even in such cases, techniques can be used to infer the contributions of different features in the output of an ML model.”
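One widely used post-hoc technique of the kind Natu describes is permutation importance, which treats the model as a black box and measures how much its score drops when each feature is shuffled. The sketch below uses a synthetic dataset and scikit-learn’s implementation purely for illustration.

```python
# Minimal sketch: model-agnostic, post-hoc feature contributions via permutation importance.
# The model and data are illustrative; the technique treats the classifier as a black box.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# a large drop means the model leaned heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature_{i}: {drop:.3f}")
```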

Language Challenges 

Humans and machines speak different languages, which is why analytics vendors have been transforming human language queries into SQL. The transformation in reverse, explaining SQL and its results in human language, can be problematic.

For example, every SQL statement has a FROM clause that states where the data feeding the query result came from. According to Mike Flaxman, VP of product and spatial data science practice lead at advanced analytics company HEAVY.AI, identifying where the data came from is the easier part; the harder part is the transformations.

“When you select sales from a table, it’s self-evident when you see the SQL. The number came from that column, and that is the semantic answer,” says Flaxman. “The problem is that corporations don’t have one sales table; they have 400, and three different definitions of sales. Those are real dilemmas in the semantics of how that information was computed, as well as who did it, when, and where, which brings up the issue of provenance. SQL is explicit, but the answer might be a really complicated formula.”
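A hedged illustration of Flaxman’s point: parsing a hypothetical query with a SQL parser such as sqlglot readily surfaces the source tables and columns, i.e. the provenance, but says nothing about what the formula means or why it was written that way. The query and table names are invented for this example.

```python
# Illustrative sketch: the FROM clause tells you where the data came from,
# but not what the formula means. Parsing with sqlglot (one of several SQL
# parsers) surfaces source tables and columns; the business semantics of
# "sales" still need human or metadata context. The query is hypothetical.
import sqlglot
from sqlglot import exp

query = """
SELECT region,
       SUM(net_amount - returns_amount) * 1.07 AS sales
FROM finance.sales_eu
JOIN finance.returns_eu USING (order_id)
GROUP BY region
"""

tree = sqlglot.parse_one(query)

tables = [t.sql() for t in tree.find_all(exp.Table)]
columns = sorted({c.sql() for c in tree.find_all(exp.Column)})

print("source tables:", tables)   # provenance: which of the many "sales" tables was used
print("columns used:", columns)   # inputs to the formula, but not why 1.07 is applied
```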

For example, a temperature reading from a thermometer provides a simple answer. Heat indexes, however, are complicated because they have as many exceptions as rules. Moreover, there are many heat indexes, so if one does not know which index was used, one can’t really understand what the number means.

“You can document the top-level thing and make that explicable, but then one step below that is where did that column even come from and what was the formula or science that went into it?” says Flaxman. “So, I think SQL is a partial answer, and then explain the results in [human language].” 

Data Challenges 

Data powers AI. If data quality is poor, the resulting analysis, recommendation, or answer will likely be erroneous. Data governance is also important to ensure proper data usage.

Vyas Sekar, Tan Family Professor of Electrical and Computer Engineering at Carnegie Mellon University and chief scientist at data analytics company Conviva, believes the advent of large foundation and multimodal models has had a negative effect on XAI efforts. He also underscores the importance of data.

“We don't know what goes in and can't understand what comes out or why it came out!” says Sekar in an email interview. “We need to understand the training data that went into building the model as well as the algorithm/model being used to respond and the kinds of questions we are asking. As we deal with larger datasets and more complex models and multimodal interactions, there are going to be challenges.” 

The rapid deployment of large language models (LLMs) over the past two years has made it increasingly challenging to maintain auditable systems. To address the issue, Panagiotis Angelopoulos, chief technology officer at motivation AI company Persado, says it’s crucial to focus on the auditability of the training data itself -- ensuring that the training corpus is free from biases to enable fair and reliable AI systems.

“Moreover, rigorous testing, including red teaming, is vital for these complex models. This involves subjecting the AI system to various extreme scenarios post-training to observe how it reacts and what decisions it makes under different conditions,” says Angelopoulos. “Such testing helps in evaluating the robustness and decision-making processes of AI systems, ensuring they operate reliably and ethically in real-world situations." 
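As one narrow, hypothetical example of the training-data auditability Angelopoulos describes, the sketch below checks whether positive-label rates diverge across a sensitive attribute before training. The column names and the threshold are assumptions for illustration, not Persado’s method.

```python
# Hypothetical sketch: a simple audit of training data before model training.
# Checks whether positive-label rates differ sharply across a sensitive
# attribute; the column names and the 10-point threshold are assumptions.
import pandas as pd

train = pd.DataFrame({
    "text":   ["..."] * 6,
    "label":  [1, 0, 1, 1, 0, 0],
    "region": ["north", "north", "north", "south", "south", "south"],
})

rates = train.groupby("region")["label"].mean()
print(rates)

spread = rates.max() - rates.min()
if spread > 0.10:
    print(f"Warning: positive-label rate varies by {spread:.0%} across regions; "
          "review sampling before training.")
```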

Visualizations Could Help 

HEAVY.AI has been experimenting with visualizations that advance explainability, but it’s a challenging endeavor. 

“We can agree to the SQL answer to a question in a lot of cases, but we can’t agree on a good map or visualization. There are no benchmarks to my knowledge, and few standards in terms of explaining data and analytical results,” says HEAVY.AI’s Flaxman. “The graphic becomes part of the next question, and I believe the models [can do] that. It’s just that we have little theory to go on, so we’re trying to piece it together as we go, but I think that conversational ability is quite important.” 

Diby Malakar, VP of product management at data intelligence company Alation, believes a critical step forward involves creating a clearer visual map of data lineage, including tracking the technical, system, and business metadata as data flows across systems and into AI models.

“Mapping is essential for understanding and explaining the AI’s decision-making process, which is especially important in high-stakes environments where accountability and trust are crucial,” says Malakar in an email interview. “The future of explainable AI lies in powerful AI governance frameworks that include data lineage as a fundamental component. Ensuring transparency across the entire lifecycle of data -- from its origin to its role in AI decision-making -- is key to enhancing explainability. This will likely involve the development of new tools and methodologies that provide a visual representation of how data is used across systems, making it easier for data scientists, end users, business leaders, and public stakeholders to understand AI decisions.” 
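As a rough sketch of the lineage map Malakar describes, a directed graph can record how data flows from source systems into a model, so everything upstream of a prediction can be enumerated. The node names and metadata here are hypothetical; commercial data catalogs do this at far greater scale.

```python
# Hypothetical sketch: representing data lineage as a directed graph so a
# model's output can be traced back to its upstream sources. Node names and
# the metadata attached to edges are illustrative only.
import networkx as nx

lineage = nx.DiGraph()
lineage.add_edge("crm.accounts", "warehouse.customer_features", system="nightly ETL")
lineage.add_edge("web.clickstream", "warehouse.customer_features", system="streaming job")
lineage.add_edge("warehouse.customer_features", "model.churn_score", system="training pipeline")

# Everything upstream of the model's prediction, i.e. what an auditor would review.
print(sorted(nx.ancestors(lineage, "model.churn_score")))
# ['crm.accounts', 'warehouse.customer_features', 'web.clickstream']
```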

Bottom Line 

XAI is becoming increasingly necessary as organizations rely on AI more than they have in the past. While not all AI techniques are considered “black box,” deep learning models and LLMs continue to become more complex and therefore more difficult to explain. 

Many things need to happen to realize the vision of XAI, and there’s no simple or immediate fix for all the issues. Advancing it requires organizations that prioritize it, experts who can push the techniques forward, sound regulation, and unflinching market demand.

About the Author

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
