
Why We Need Explainable AI

When you sit down at the end of the day to watch your favorite TV show, your streaming service will usually suggest other shows to watch. Those suggestions are based on the behavior of users like you, as determined by artificial intelligence and machine learning.


But how does the artificial intelligence model make that determination? How does it crunch the numbers behind the thousands of programs hosted on the streaming service’s servers? How does it process the data it has about you to determine which other users you are similar to?

Questions about entertainment may seem low stakes, but artificial intelligence systems are also being used in life-and-death settings, such as self-driving cars and healthcare diagnostics. Those systems face similar questions, leading some to refer to AI systems as “black boxes.”

The explainable AI (XAI) movement has focused on helping people answer those questions, with the goal of building trust in AI results while helping people understand the technology’s limits.

“Our first challenge has been making the public aware of the implications of using and trusting AI,” said IEEE Fellow Karen Panetta. “Having AI provide decisions that can impact people’s lives, careers, health or safety requires absolute clarity on how it arrived at these decisions and how humans should use the information.”

WHY DO WE NEED XAI?


Explainable AI is a hot topic. An article published in IEEE Access, titled “Peeking Inside the Black Box: A Survey on Explainable Artificial Intelligence,” is one of the journal’s most frequently read articles. The October 2021 edition of IEEE Computer Magazine devoted several articles to the subject, and interest has only increased in recent years with the release of powerful generative AI models and our growing reliance on AI across industries.

According to the IEEE Access article, there are four main reasons for XAI: to justify why an AI’s output should be trusted; to prevent erroneous or biased predictions; to provide a pathway for improving the AI model; and to harness new insights in other fields of study.

WHAT’S THE RISK OF AI ERROR?


To understand why explainable AI is necessary, consider some of the risks of AI going wrong. In transportation, autonomous vehicles have mistaken humans for inanimate objects, such as plastic bags, with disastrous consequences. In healthcare, a machine learning algorithm was trained to predict which pneumonia patients needed to be admitted to the hospital and which could be treated in an outpatient setting. The algorithm came to the mistaken conclusion that patients with asthma were at lower risk of dying. In fact, patients with asthma tended to get more intensive pneumonia treatment and were more likely to live as a result. The pattern the model learned reflected the care those patients received, not how they would fare if treated less aggressively.

In finance, regulations usually require lenders to explain to applicants why they were denied credit. But if the machine learning models used in lending decisions are opaque, lenders may not be able to explain why a model denied a mortgage application.


WHY IS AI CALLED A ‘BLACK BOX’?


To IEEE Senior Member Antonio Pedro Timoszczuk, there are two reasons AI might be referred to as a “black box.” The first is that we cannot understand what’s going on inside it at all. The second is that its process is simply too complex for humans to visualize easily.

He believes it’s the second. This is particularly true of AI that mimics human cognition, such as artificial neural networks. Essentially, AI functions by transforming input data at Point A into a specific output at Point B. The process involves various mathematical functions that organize and interpret the data, enabling it to be categorized or distinguished in meaningful ways.

The input data’s quality and variety significantly influence the outcomes. However, pinpointing precisely what happens inside the AI during this transformation is challenging. The reason is the vast number of dimensions AI operates in; it could be dealing with hundreds or even thousands of variables simultaneously. This complexity is what makes the inner workings of AI akin to a puzzle.
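To make the “Point A to Point B” picture concrete, here is a minimal sketch in Python of the kind of transformation a small neural network performs. The layer sizes, random weights and NumPy implementation are illustrative assumptions rather than any particular model; real systems chain many more such steps across thousands of dimensions, which is exactly why their inner workings are hard to visualize.

    import numpy as np

    # Purely illustrative sizes and random weights, not a trained model.
    rng = np.random.default_rng(seed=0)

    x = rng.normal(size=4)            # Point A: a 4-dimensional input vector
    W1 = rng.normal(size=(8, 4))      # first set of "mathematical functions"
    W2 = rng.normal(size=(2, 8))      # second set, reducing to 2 output scores

    hidden = np.maximum(0.0, W1 @ x)  # a nonlinearity (ReLU) reorganizes the data
    output = W2 @ hidden              # Point B: scores used to categorize the input

    print(output)

Even in this toy example, the two output numbers depend on every weight in both matrices at once, so there is no single step you can point to as “the reason” for the result.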

There’s an alternative perspective, though. Rather than trying to dissect the internal mechanics of AI, we can approach it systemically. This means starting with a clear understanding of the problem we aim to solve with AI, and carefully selecting the relevant data needed for this purpose. Avoiding unnecessary data simplifies the analysis. By examining AI from this systemic viewpoint, we can gain insights into its behavior and potentially trace back to the roots of any errors or unexpected outcomes.

To IEEE Senior Member Sambit Bakshi, calling AI a black box implies that its internal mechanisms aren’t understandable or interpretable. “If that were the case, then debugging or auditing could have been challenging. However, that is not the case,” Bakshi said. “There are several strategies that can be employed to understand and rectify the erroneous outcomes of an AI-based system. Model interpretation tools, such as feature analysis, can determine the important features that can provide insights into what contributes more towards the model’s decisions.”
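As a hedged illustration of the feature-analysis idea Bakshi describes, the sketch below uses permutation importance from scikit-learn on a built-in tabular dataset: each feature is shuffled in turn, and the features whose shuffling hurts accuracy the most are the ones the model leans on. The dataset, model and library choices here are assumptions for demonstration, not the specific tools referenced in the article.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Placeholder data and model, chosen only to make the example runnable.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature and measure how much the test score drops.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)

    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: -pair[1])
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.3f}")

An auditor reading that ranking can check whether the model relies on features that make clinical or business sense, which is one practical way a “black box” becomes inspectable.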


About IEEE: IEEE is the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity. IEEE and its members inspire a global community through its highly cited publications, conferences, technology standards, and professional and educational activities.