Glenn Rodriguez, an inmate at a New York correctional facility, was due for parole soon. He had been on his best behavior and was looking forward to being released and starting a new life. To his horror, he was denied parole. The AI algorithm the parole board used had given him a poor score, and because the system wasn't explainable, no one could see that something had gone terribly wrong. Mr. Rodriguez fought his case and was eventually released, but only after spending another unnecessary year in prison.
Unfortunately, this type of mistake can occur whenever AI is deployed. If we don’t see the reasoning behind algorithms’ decisions, we can’t spot the problem. You can prevent this issue in your organization by following the explainable AI principles while developing your artificial intelligence solution.
So, what is explainable artificial intelligence (XAI)? How do you decide on the right level of explainability for your sector? And which challenges should you expect along the way?
When speaking of AI, many people think of black-box algorithms that take millions of input data points, work their magic, and deliver unexplainable results that users are supposed to trust. This kind of model is created directly from data, and not even its engineers can explain its outcome.
Black-box models, such as neural networks, excel at challenging prediction tasks. They produce results of remarkable accuracy, but no one can understand how they arrive at their predictions.
Explainable white-box AI, in contrast, lets users understand the rationale behind its decisions, which makes it increasingly popular in business settings. White-box models are not as technically impressive as black-box algorithms, but they trade raw predictive power for transparency, offering a higher level of reliability that makes them preferable in highly regulated industries.
Explainable AI (XAI) refers to a set of techniques, design principles, and processes that help developers and organizations add a layer of transparency to AI algorithms so that their predictions can be justified. XAI can describe AI models, their expected impact, and potential biases. With this technology, human experts can understand the resulting predictions and build trust and confidence in them.
When speaking of explainability, it all boils down to what you want to explain.
There are two possibilities: explaining the model as a whole (global explainability) or explaining how it arrived at a particular prediction (local explainability).
There are two approaches to this technique: building models that are interpretable by design (the white-box route) or applying post-hoc explanation methods to black-box models after they are trained.
The US National Institute of Standards and Technology (NIST) developed four explainable AI principles:

- Explanation: the system delivers evidence or reasons for its outputs.
- Meaningful: explanations are understandable to their intended audience.
- Explanation accuracy: the explanation correctly reflects how the system arrived at its output.
- Knowledge limits: the system operates only under the conditions it was designed for and flags cases where its output may not be reliable.
XAI can provide a detailed model-level explanation of why a particular decision was made. This explanation comes as a set of understandable rules. In the simplified loan application example below, applicants who are denied a loan receive a straightforward justification: anyone over 40 years old who saves less than $433 per month and applies for credit with a payback period of over 38 years will be denied. The same goes for younger applicants who save less than $657 per month.
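To make this concrete, here is a minimal sketch of what such a white-box rule set could look like in code. It hard-codes the thresholds from the example above; the function name and output format are purely illustrative.

```python
def explain_loan_decision(age: int, monthly_savings: float, payback_years: int) -> tuple[bool, str]:
    """Return a loan decision together with the human-readable rule that produced it."""
    if age > 40:
        # Rule from the example: over-40 applicants saving < $433/month
        # with a payback period over 38 years are declined.
        if monthly_savings < 433 and payback_years > 38:
            return False, ("Denied: applicants over 40 who save less than $433 per month "
                           "and request a payback period over 38 years are declined.")
        return True, "Approved: savings and payback period meet the over-40 criteria."
    # Rule from the example: younger applicants must save at least $657/month.
    if monthly_savings < 657:
        return False, "Denied: applicants 40 or younger must save at least $657 per month."
    return True, "Approved: savings meet the under-40 criterion."


approved, reason = explain_loan_decision(age=45, monthly_savings=400.0, payback_years=40)
print(approved, "-", reason)  # False - Denied: applicants over 40 who save less than $433 ...
```

Every decision the function makes can be traced back to a single readable rule, which is exactly what makes white-box models auditable.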
In some industries, an explanation is necessary for AI algorithms to be accepted. This can be due to regulations, human factors, or both. Think about brain tumor classification: no doctor will be comfortable preparing for surgery solely because "the algorithm said so." And what about loan granting? Clients whose applications are denied will want to understand why. Yes, there are more tolerant use cases where an explanation is not essential. Predictive maintenance, for instance, is not a matter of life or death, but even then, employees would feel more confident knowing why particular equipment might need preemptive repair.
Senior management often understands the value of AI applications, but they also have their concerns. According to Gaurav Deshpande, VP of Marketing at TigerGraph, there is always a “but” in executives’ reasoning: “…but if you can’t explain how you arrived at the answer, I can’t use it. This is because of the risk of bias in the black box AI system that can lead to lawsuits and significant liability and risk to the company brand as well as the balance sheet.”
The ideal XAI solution is one that is reasonably accurate and can explain its results to practitioners, executives, and end users. Incorporating explainable AI principles into intelligent software:

- builds trust among the practitioners, executives, and end users who rely on it
- helps your solution comply with industry regulations
- makes it easier to spot and eliminate bias before it leads to reputational and legal damage
- motivates employees to act on the system's recommendations
1. Explainable AI in healthcare
AI has many applications in healthcare. Various AI-powered medical solutions can save doctors’ time on repetitive tasks, allowing them to primarily focus on patient-facing care. Additionally, algorithms are good at diagnosing various health conditions as they can be trained to spot minor details that escape the human eye. However, when doctors cannot explain the outcome, they are hesitant to use this technology and act on its recommendations.
One example comes from Duke University Hospital. A team of researchers installed a machine learning application called Sepsis Watch, which would send an alert when a patient was at risk of developing sepsis. The researchers discovered that doctors were skeptical of the algorithm and reluctant to act on its warnings because they did not understand it.
This lack of trust extends to patients, who are hesitant to be examined by AI. Harvard Business Review published a study in which participants were invited to take a free assessment of their stress level. 40% of the participants registered when they knew a human doctor would do the evaluation; only 26% signed up when an algorithm was to perform the diagnosis.
When it comes to diagnosis and treatment, the decisions made can be life-changing, so it is no surprise that doctors are desperate for transparency. With explainable AI, this becomes a reality. For example, Keith Collins, CIO of SAS, mentioned his company is already developing such technology: “We’re presently working on a case where physicians use AI analytics to help detect cancerous lesions more accurately. The technology acts as the physician’s ‘virtual assistant,’ and it explains how each variable in an MRI image, for example, contributes to the technology identifying suspicious areas as probable for cancer while other suspicious areas are not.”
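SAS has not published the implementation Collins describes, but one common technique for attributing an image classifier's prediction to regions of its input is gradient-based saliency. Below is a minimal, hypothetical sketch in PyTorch: a toy CNN and a random tensor stand in for a real tumor classifier and MRI scan.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained tumor classifier; a real one would be loaded from disk.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # two classes: suspicious vs. not suspicious
)
model.eval()

# A random tensor stands in for a preprocessed single-channel MRI slice.
scan = torch.rand(1, 1, 64, 64, requires_grad=True)

# Backpropagate the "suspicious" class score down to the input pixels.
logits = model(scan)
logits[0, 1].backward()

# The gradient magnitude per pixel approximates how much each region of the
# image contributed to the prediction; high values mark areas the model relied on.
saliency = scan.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([64, 64])
```

Overlaying such a saliency map on the original scan gives physicians a visual account of which areas drove the model's call.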
2. XAI in finance
Finance is another heavily regulated industry where decisions need to be explained. It is vital that AI-powered solutions are auditable; otherwise, they will struggle to enter the market.
AI can help assign credit scores, assess insurance claims, and optimize investment portfolios, among other applications. However, if the algorithms provide biased output, it can result in reputational loss and even lawsuits.
Not long ago, Apple made headlines when its Apple Card product turned out to be biased against women, granting them lower credit limits. Apple’s co-founder Steve Wozniak confirmed the claim: he and his wife have no separate bank accounts or assets, yet when they applied for the Apple Card, his credit limit was set ten times higher than hers. As a result, the company was investigated by the New York State Department of Financial Services.
With explainable AI, companies can avoid such scandals by justifying the output. Loan granting is one use case that stands to benefit: the system can justify its final recommendation and give clients a detailed explanation if their application is declined, allowing them to improve their credit profiles and reapply later.
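One way to produce such per-applicant justifications is to score applications with an interpretable model and report each feature's contribution. Here is a minimal sketch using scikit-learn; the features, data, and approval rule are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic applications: [age, monthly_savings, payback_years] -> approved (1) or denied (0)
rng = np.random.default_rng(0)
X = rng.uniform([18, 0, 1], [70, 2000, 40], size=(500, 3))
y = ((X[:, 1] > 500) & (X[:, 2] < 30)).astype(int)  # toy approval rule

features = ["age", "monthly_savings", "payback_years"]
model = LogisticRegression(max_iter=1000).fit(X, y)

def explain_application(applicant: np.ndarray) -> None:
    """Print the approval probability and each feature's contribution to it."""
    proba = model.predict_proba(applicant.reshape(1, -1))[0, 1]
    print(f"Approval probability: {proba:.0%}")
    # For a linear model, the coefficient * value terms (plus the intercept)
    # decompose the log-odds exactly, so each feature's effect is transparent.
    for name, coef, value in zip(features, model.coef_[0], applicant):
        print(f"  {name}: {coef * value:+.2f} to the log-odds")

explain_application(np.array([45, 400.0, 38]))
```

A declined applicant can see at a glance which feature hurt their score most, which is the kind of detailed explanation that lets them improve their profile and reapply.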
3. Explainable AI in the automotive industry
Autonomous vehicles operate on vast amounts of data, requiring AI to analyze and make sense of it all. However, the system’s decisions need to be transparent for drivers, technologists, authorities, and insurance companies in case of any incidents.
It is also crucial to understand how vehicles will behave in an emergency. Here is how Paul Appleby, former CEO of the data management software company Kinetica, voiced his concern: “If a self-driving car finds itself in a position where an accident is inevitable, what measures should it take? Prioritize the protection of the driver and put pedestrians in grave danger? Avoid pedestrians while putting the passengers’ safety at risk?”
These are tough questions to answer, and people would disagree on how to handle such situations. But it is important to set guidelines that the algorithm can follow in such cases. This will help passengers decide whether they are comfortable traveling in a car designed to make certain decisions. Additionally, after an incident, the provided explanation will help developers improve the algorithm in the future.
4. Explainable artificial intelligence in manufacturing
AI has many applications in manufacturing, including predictive maintenance, inventory management, and logistics optimization. With its analytical capabilities, this technology can add to the “tribal knowledge” of human employees. But it is easier to act on its recommendations when you understand the logic behind them.
Heena Purohit, Senior Product Manager for IBM Watson IoT, explains how their AI-based maintenance product approaches explainable AI. The system offers human employees several options for repairing a piece of equipment, each with a confidence score expressed as a percentage, so the user can still draw on their “tribal knowledge” and expertise when making a choice. Each recommendation can also surface the underlying knowledge graph output together with the inputs used during training.
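IBM has not disclosed Watson IoT's internals, but the pattern Purohit describes, ranking repair options by model confidence so technicians can weigh them against their own expertise, can be sketched in a few lines. Everything below, from the sensor features to the repair labels, is hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

REPAIR_OPTIONS = ["replace bearing", "realign shaft", "recalibrate sensor"]

# Synthetic sensor readings: [vibration, temperature, acoustic_level]
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = rng.integers(0, len(REPAIR_OPTIONS), size=300)  # toy repair labels

model = RandomForestClassifier(n_estimators=50, random_state=1).fit(X, y)

def recommend(reading: np.ndarray, top_k: int = 3) -> None:
    """Print repair options ranked by the model's confidence, as percentages."""
    proba = model.predict_proba(reading.reshape(1, -1))[0]
    for idx in np.argsort(proba)[::-1][:top_k]:
        print(f"{REPAIR_OPTIONS[idx]}: {proba[idx]:.0%} confidence")

recommend(np.array([1.2, -0.3, 0.8]))
```

Because the system presents ranked suggestions rather than a single verdict, the human stays in the loop and the final call remains theirs.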
The need to compromise on predictive power
Black box algorithms, such as neural networks, have high predictive power but offer no output justification. As a result, users need to blindly trust the system, which can be challenging in certain circumstances. White box AI offers the much-needed explainability, but its algorithms need to remain simple, compromising on predictive power.
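The tradeoff is easy to see in practice by comparing a transparent model against a more powerful opaque one on the same task. Here is a quick sketch using scikit-learn's built-in breast cancer dataset; the exact accuracy gap will vary with the task and the data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

# White box: a shallow tree whose rules a clinician can read directly.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
# Black box: a boosted ensemble that is typically more accurate but opaque.
boosted = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print(f"shallow tree accuracy:  {tree.score(X_test, y_test):.3f}")
print(f"boosted trees accuracy: {boosted.score(X_test, y_test):.3f}")

# The entire white-box model fits in a handful of human-readable rules:
print(export_text(tree, feature_names=list(data.feature_names)))
```

The shallow tree gives up some accuracy, but its complete logic can be printed, read, and challenged by a domain expert.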
For example, AI has applications in radiology, where algorithms produce remarkable results, classifying brain tumors and spotting breast cancer faster than humans. However, when doctors decide on a patient’s treatment, it can be a life-or-death situation, and they want to understand why the algorithm arrived at a given diagnosis. It is daunting to rely on something you do not understand.
The concept of explainability
There is no universal definition of explainability. It is often a subjective concept. Users might expect one type of explanation, while developers are willing to provide something else. Also, different audiences require tailored justifications, which results in one XAI system having to explain the same output in several different ways.
Security and robustness-related issues
With XAI, if clients gain access to the algorithm’s decision-making process, they might game it, deliberately adjusting their behavior to influence the output. One study also raised the concern that a technically skilled user could recover parts of the dataset used to train the algorithm after seeing its explanations, thereby violating privacy regulations.
When your company is preparing to deploy responsible XAI solutions, the first step is to determine what exactly you need to explain and to whom.
Some of the questions to address during the planning stage are:

- Who needs the explanation: end users, executives, regulators, or your own developers?
- What exactly needs explaining: the model as a whole or its individual predictions?
- In what form and at what level of detail should each audience receive the explanation?
Next, your company needs to decide on the degree of explainability, as not all AI-based tools require the same level of interpretability. PwC, for instance, has identified six application criticality components that can help you determine the XAI level you need.
Finally, even after explainable artificial intelligence is in place, it is best to take action to ensure your data usage remains ethical. Perform timely audits of your algorithms: a strong explainability feature can reveal any bias sneaking into your software, whereas with a limited-explainability solution, bias could go unnoticed. You can also join the Partnership on AI consortium, or even develop your own set of ethical data usage principles, as Microsoft did with its commitments to fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
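As a starting point, an audit can be as simple as comparing approval rates across a protected attribute, a check known as demographic parity. The decision log below is made up for illustration.

```python
import numpy as np

# Hypothetical audit log: model decisions (1 = approved) and a protected attribute.
decisions = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "b", "b", "b", "b", "a", "a", "b"])

rates = {}
for g in np.unique(group):
    rates[g] = decisions[group == g].mean()
    print(f"group {g}: {rates[g]:.0%} approval rate")

# A common rule of thumb (the "80% rule") flags the model for review
# when one group's approval rate falls below 0.8x another's.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Disparity detected - investigate the model and its training data.")
```

Real audits go further, examining error rates, feature usage, and data provenance, but even this simple check can surface problems early.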
Implementing traditional AI is challenging enough, let alone its explainable and responsible version. Despite the obstacles, XAI will bring relief to your employees, who will be more motivated to act on the system’s recommendations when they understand the rationale behind them. Moreover, it will help you comply with your industry’s regulations and ethical standards.
If you have an idea of an explainable AI solution to build, or if you are still unsure of how explainable your software needs to be, consult ITRex XAI experts.