Trust, transparency, and the rise of explainable AI


Most organizations are investigating, planning, or deploying artificial intelligence (AI) implementations, but there’s a problem: businesses -- and often even the AI’s designers -- can’t explain how or why the AI arrived at a specific decision. This is a big hurdle for businesses that want to rely on AI-based dynamic systems for their decision making. In fact, a recent PwC survey found that 37 percent of executives said ensuring AI systems were trustworthy was their top priority, and 61 percent would like to create transparent, explainable, and provable AI models.

The need for transparent, explainable AI goes beyond individual business preferences. Interpretability, fairness, and transparency of data-driven decision support systems built on AI and machine learning are serious regulatory mandates in banking, insurance, healthcare, and other industries. In addition, regulations like GDPR’s "right to explanation" or the upcoming California Consumer Privacy Act will compel businesses to explain how their AI algorithms reach their decisions. The typical solution to these issues of trust and explainability has been to stick with simpler models, improving transparency at the expense of accuracy. From my perspective, understanding how to create trust -- even more so than creating transparency -- in AI is going to be crucial to success.

Making AI more explainable

First, let’s take a look at how AI can be made more "explainable." The easiest way to do this is to stick with the subset of machine learning algorithms that tend to create more interpretable models, such as linear regression, logistic regression, and decision trees. Linear classifiers and decision trees can be directly inspected, and the reason for a particular decision can be traced fairly easily by examining the model.
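To make this concrete, here is a minimal sketch of what "directly inspecting the model" looks like in practice. It assumes scikit-learn and uses its bundled breast cancer dataset purely for illustration; the dataset and the specific models are my choices, not anything prescribed above.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# Logistic regression: each coefficient shows how strongly a feature
# pushes the prediction toward one class or the other.
logreg = LogisticRegression(max_iter=10000).fit(X, y)
ranked = sorted(zip(data.feature_names, logreg.coef_[0]),
                key=lambda pair: abs(pair[1]), reverse=True)
for name, coef in ranked[:5]:
    print(f"{name}: {coef:+.3f}")

# Shallow decision tree: the entire set of decision rules can be
# printed and read as plain if/else statements.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))
```

Reading the top coefficients or the printed tree rules is the whole "explanation" here -- no extra tooling is needed, which is exactly why these simpler models are the default answer to interpretability.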

Another option is to use model-agnostic interpretation tools, which can be applied to any supervised machine learning model. One example is Local Interpretable Model-Agnostic Explanations (LIME). LIME takes a single prediction from a black-box model, perturbs the inputs around that instance, and fits a simpler surrogate model that approximates the black-box model’s behavior locally. This simpler model can then be interpreted and used to explain the original black-box prediction.
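As a rough illustration of that workflow, the sketch below uses the open-source `lime` package against a random forest standing in as the black-box model, again on scikit-learn’s bundled breast cancer data; the model and dataset are my own stand-ins for demonstration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# A "black-box" model: hundreds of trees, no single readable decision path.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# LIME perturbs the neighborhood of one instance and fits a simple
# local surrogate that mimics the black-box model around that point.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)

# The surrogate's feature weights are the "explanation" for this one prediction.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Note that the output explains only this single prediction; a different instance gets a different local surrogate, which is both LIME’s strength and its limitation.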

Explainability comes at a cost

However, putting the focus on making AI more transparent and explainable comes at a cost. Since one of the main reasons for using AI is to predict complex situations, it stands to reason that the model behind it is similarly complex and therefore difficult to explain. By switching to more explainable methods, we’re limiting the range of models the AI can draw on, which could ultimately lead to worse performance. In many domains, simple assumption-based models have worse predictive performance on held-out test data than black-box machine learning models. "Deep learning" methods like neural networks do a far better job of responding to complex situations than more interpretable methods, but they’re also much more opaque.

Even limiting our selection of algorithms may not result in an AI that’s particularly "explainable" for the average human. Linear models are capable of taking in far more features than any human could possibly weigh in one sitting, and decision trees, often championed for their interpretability, can be quite deep and typically must be combined with hundreds or thousands of other trees -- as in random forests or gradient boosting -- to reach competitive accuracy. While the models themselves may be more transparent, it’s challenging or impossible to make sense of them at real-world scale -- which means we’re compromising the AI’s performance without getting any actual benefit for our efforts. For that reason, many AI practitioners will prioritize trust over transparency.

Prioritizing trust over transparency in AI

It might seem unusual to focus our efforts on increasing trust in the AI rather than transparency, so let’s consider an example. Imagine that you have the choice of getting to work in one of two autonomous vehicles. The first one has never transported an actual human to work, but its algorithms are completely explainable and understandable. The second one’s inner workings are a mystery, but it has been tested over millions of miles of real-world driving conditions with excellent results. Which one would you choose? I’m sure we’d all rather ride in the one that works well, even if we don’t understand how it works. Following that same logic for AI, testing should become much more thorough so businesses can have confidence in the AI. At the same time, businesses can introduce a "responsible AI" initiative that prioritizes accountability, testing, and governance.

Another way to improve trust in AI is to measure it according to business outcomes. Currently, AI is being measured using metrics and models that only data scientists understand -- and that needs to change if organizations are to become more comfortable using AI. To measure the effectiveness of AI initiatives in business terms, we can develop relevant business metrics for each machine learning initiative, and then use those metrics to evaluate the success of the models. Doing away with more abstract measures such as model fit or accuracy makes AI much more understandable for the businessperson -- even though we’ve changed nothing about how the AI itself works.
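As a hypothetical sketch of what that might look like, the snippet below scores a churn-style classifier in dollars retained rather than accuracy. The dollar figures and the `retained_revenue` name are invented placeholders; a real initiative would substitute its own business numbers.

```python
import numpy as np
from sklearn.metrics import make_scorer

def retained_revenue(y_true, y_pred,
                     value_per_saved_customer=120.0,  # hypothetical figure
                     cost_per_wasted_offer=15.0):     # hypothetical figure
    """Score a churn model in dollars retained instead of abstract accuracy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    true_saves = np.sum((y_pred == 1) & (y_true == 1))     # churners we caught
    wasted_offers = np.sum((y_pred == 1) & (y_true == 0))  # loyal customers targeted anyway
    return true_saves * value_per_saved_customer - wasted_offers * cost_per_wasted_offer

# Wrap it as a scorer so model selection optimizes the business metric directly.
business_scorer = make_scorer(retained_revenue, greater_is_better=True)
```

Passed to `cross_val_score` or `GridSearchCV`, that scorer ranks candidate models by projected revenue impact -- a number a businessperson can act on, even if the model itself remains opaque.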

As with any new technology, there are plenty of initial hurdles to jump before people are truly comfortable with it. By emphasizing testing and tying AI outcomes to business metrics, AI becomes easier to trust and a natural part of our business environments.


Cameron O’Rourke is Director of Technical Product Marketing at GoodData. GoodData provides businesses with end-to-end, AI-enhanced analytics at scale, placing actionable business intelligence at the point of work.

