In Part 1 of this blog series, we looked at various AI/ML models and how they function. In this part, we will learn about methods that can better explain AI/ML models to the consumers and stakeholders who derive benefits from them.

Explaining AI Models

Now that we have better context on the types of AI/ML models and how they function, let's move on to the crux of this blog series: what methods can you use to better explain them?

While it may not be possible to fully explain the inner workings of a model, here are some ways you can establish trust in its capabilities:

Visualization

Visualizing the data is the first step towards understanding the model. There are many ways to visualize data, but the following two methods are particularly useful.

  • Scatter clusters: Visualize the data using scatter charts to observe the clusters. Cluster visualization enhances understanding and trust if it confirms the business understanding of the dataset. Subsequently, the AI/ML model should treat the data in the various clusters differently.
  • Correlation Network Graphs: These graphical representations show how strongly the input features of the data are related to one another. The strength of these derived relationships should tally with the commonly accepted understanding of the data. When an AI/ML model uses the variables that show strong correlations and leaves out the unconnected variables, it increases trust in the model. A minimal sketch of both visualizations follows this list.
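As a rough illustration, here is a minimal sketch of both plots in Python, assuming scikit-learn, matplotlib, and networkx are available; the Iris dataset, the choice of three clusters, and the 0.5 correlation threshold are illustrative assumptions, not part of the original discussion.

```python
import pandas as pd
import matplotlib.pyplot as plt
import networkx as nx
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

# Illustrative dataset; substitute your own DataFrame of numeric features.
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)

# 1. Scatter clusters: colour a 2-D scatter plot by cluster label and check
#    whether the clusters match the business understanding of the data.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(df)
plt.scatter(df.iloc[:, 0], df.iloc[:, 1], c=labels)
plt.xlabel(df.columns[0])
plt.ylabel(df.columns[1])
plt.title("Scatter clusters")
plt.show()

# 2. Correlation network graph: connect features whose absolute pairwise
#    correlation exceeds an (illustrative) 0.5 threshold.
corr = df.corr().abs()
graph = nx.Graph()
graph.add_nodes_from(df.columns)
for i, a in enumerate(df.columns):
    for b in df.columns[i + 1:]:
        if corr.loc[a, b] > 0.5:
            graph.add_edge(a, b, weight=corr.loc[a, b])
nx.draw(graph, with_labels=True)
plt.title("Correlation network graph")
plt.show()
```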
Simple Alternates

This modeling technique involves using a simpler AI/ML model (such as a decision tree) to approximate the results of a black box model, since both may deliver similar predictive performance. In such cases, the interpretation of the decision tree can be treated as a proxy for the black box AI/ML model. However, we should train the simpler model to match the black box model as closely as possible. If its interpretation is in tune with the domain knowledge or business understanding of the problem, it enhances trust in the black box model as well.
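To make this concrete, here is a minimal sketch of such a surrogate in Python with scikit-learn; the random forest standing in for the black box, the shallow decision tree as the simple alternate, and the breast-cancer dataset are all illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# "Black box" model trained on the real labels.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Simple alternate: a shallow decision tree trained to mimic the
# black box's predictions (not the original labels).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithful is the surrogate to the black box?
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to the black box: {fidelity:.2%}")

# The surrogate's rules become a proxy explanation for the black box.
print(export_text(surrogate, feature_names=list(X.columns)))
```

The fidelity score here indicates how closely the surrogate mimics the black box, which is the "as close as possible" requirement mentioned above; only a faithful surrogate should be used as a proxy explanation.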

The biggest advantage of the simple alternate technique is that it is very easy to explain to people who are skeptical of, or not conversant with, AI/ML models.

Local Explanation

This method does not explain an AI/ML model as a whole; instead, it gives us insight into the model's workings by explaining individual predictions. Local explanation breaks down the workings of an AI/ML model into individual predictions and explains how each of them is made. An individual prediction (such as why a particular customer might churn) is easier to accept, and this increases trust. The following two methods explain a model locally:

LIME

LIME (Local Interpretable Model-agnostic Explanations) is a good but approximate method to explain individual predictions. It explains a specific case or row.

LIME fits a linear regression as a proxy, or surrogate, to explain the individual prediction. It is an approximate method, but it generates simple explanations using only the most important input variables. It is advisable not to include too many variables in the explanation, because that makes the interpretation difficult.
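Below is a minimal sketch using the open-source lime package; the black-box random forest, the breast-cancer dataset, the choice of the first row, and num_features=5 are illustrative assumptions.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Illustrative black-box model on an illustrative dataset.
data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Build a LIME explainer over the training data distribution.
explainer = LimeTabularExplainer(
    X.values,
    feature_names=list(X.columns),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one specific row (case) using only the 5 most important variables.
explanation = explainer.explain_instance(
    X.values[0],                 # the individual prediction to explain
    black_box.predict_proba,     # the black box's probability function
    num_features=5,
)
print(explanation.as_list())     # (feature condition, weight) pairs
```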

LIME is a versatile tool and works well with unstructured data like text and images. For example, it can identify which part of the text content is responsible for its classification as spam. Similarly, for images, it can pinpoint the regions of the image that drive its classification. LIME makes the AI/ML model more transparent because it exposes the most important variables around that specific prediction. When the explanation matches the domain knowledge held by experts, it increases trust. Thus, the LIME method can be a good fit for many situations.

Shapley Values

Shapley values, which are based on game theory, quantify the contribution of each input feature to a specific prediction. The method treats each input feature as a player in a game whose goal is to predict the value of the target. Here, the prediction is the payout, and the players are cooperating. Shapley values are a way to distribute the payout fairly among the players, i.e. the input features, because for a particular prediction some features contribute more to the predicted value than others.

Interpreting Shapley values can be a little confusing at first. A Shapley value is a single number for each input feature, and it can be positive or negative. For a specific row of data, a feature's Shapley value is its contribution to the difference between that row's prediction and the average (overall) prediction.
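Below is a minimal sketch using the open-source shap package; the tree-based black box and the dataset are illustrative assumptions, and TreeExplainer is chosen here only because it handles tree ensembles efficiently.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Illustrative tree-based black box on an illustrative dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(black_box)
shap_values = explainer.shap_values(X.iloc[:1])   # explain a single row

# One value per feature: positive values push this row's prediction above
# the average (base) prediction, negative values push it below.
print("Base (average) prediction:", explainer.expected_value)
print("Per-feature contributions:", shap_values)
```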

One disadvantage of Shapley values is that they require a lot of computation time, because the method considers all permutations of feature values. This makes it cumbersome for many real-world situations, so it is better suited to data of low to medium complexity. Shapley values provide a single value per input feature but, unlike LIME, they do not provide a local prediction model.

A matter of trust

Technology is moving fast, and AI/ML is a space where we can expect a great deal of innovation and application. For businesses to ride this upward trajectory, it is essential that they adopt these technologies.

For most businesses, what holds them back is trust. We hope this article has given you a better understanding of how to make AI/ML more transparent and easily comprehensible, and that you can use this information to build trust in AI among businesses.
