The concepts of Artificial Intelligence (AI) and Machine Learning (ML) are impossible to ignore in today’s world, especially if you’re in the field of technology or you’re running an organization. Although these technologies are improving products and processes, it is not always clear how they arrive at their conclusions.

While the simpler ML models used in the past, like regression, are easier to understand and explain, newer models, based on deep learning, for example, can be quite complicated to comprehend. This creates a barrier to adoption because key stakeholders are often reluctant to accept technology they cannot understand. However, the fact remains that these complex models are often more accurate in their predictions, and therefore more effective, than traditional models.

In this two-part series, we outline the need for, and the gap in, decoding the workings of AI/ML models in part 1, and broadly discuss a few popular methods for interpreting models and their predictions in part 2.

Why does AI need to be ‘explainable’?

Stakeholder acceptance is one reason; here are a few more that point towards the need for explainable AI/ML models:

  • The trust factor: It will be easier to trust a model if we understand it. Even if a model is highly accurate, we will be curious to know its workings and understand its methods.
  • Better analytics & newer models: Increased understanding of a complex AI/ML model will make analytics more robust, further facilitating the adoption of newer models over linear ones.
  • Debugging & fine-tuning: If a model’s predictions turn out to be wrong in the real world, the model needs to be debugged. However, debugging is only possible if the model is clearly understood. Hence, an understanding of the model is necessary to fine-tune it.
  • Ethical reasons: AI/ML models are increasingly being used with demographic data and may lean too heavily on a demographic attribute (such as race or gender) to make a prediction. Even if the predictions are correct, such a model may not be considered ethical. Hence, to determine the fairness of the analytical process and avoid ambiguity, the model needs to be explainable.

White Box vs. Black Box: Types of AI Models

From the perspective of explainable AI, these models/systems can be broadly divided into two categories – White Box (or interpretable) models and Black Box models.

White Box Models

Models based on regression and decision trees are called interpretable or white box models. As the name suggests, these prediction models are relatively easy to interpret. Here’s a quick overview of how regression and decision tree models function.

Regression

Regression models, especially linear regression models, are very popular. These simple models predict the target variable as a weighted sum of the input features, and each weight indicates how much its feature contributes to the prediction. Linear regression yields faithful explanations when a linear equation is an appropriate representation of the prediction task; if the actual relationship is not linear, the explanation will be poor. Another reason linear regression models may not perform well is that they over-simplify the real-world relationships between the input features and the target.
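To make this concrete, here is a minimal sketch using scikit-learn on synthetic data; the feature names (“income”, “age”, “tenure”) are purely hypothetical and chosen for illustration.

```python
# A minimal sketch: fitting a linear regression and reading its weights.
# The data and feature names are synthetic, purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                   # three hypothetical features
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.2 * rng.normal(size=200)  # linear target plus noise

model = LinearRegression().fit(X, y)

# Each coefficient is the weight of one feature: its sign and magnitude
# directly explain how that feature moves the prediction.
for name, coef in zip(["income", "age", "tenure"], model.coef_):
    print(f"{name}: {coef:+.2f}")
print(f"intercept: {model.intercept_:+.2f}")
```

Each printed weight reads as “a one-unit increase in this feature changes the prediction by this amount, holding the other features fixed”, which is exactly the kind of explanation white box models offer out of the box.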

Decision Tree

Decision tree-based models split the data on cutoff values and create a tree-like structure made of nodes. Decision trees are self-explanatory: reading the tree from the root node down to a terminal node and joining the split conditions along the way yields a set of ‘if-then’ business rules. Decision trees are very interpretable as long as they are not very deep, and they lend themselves to visualizations that can be easily understood, even by a layperson. They are popular in the business world, where each row of data represents an entity such as a customer, a patient, or a product.
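As an illustration, the short sketch below uses scikit-learn’s bundled iris dataset (purely as an example) to show how a shallow fitted tree can be printed as readable if-then rules.

```python
# A small sketch: printing a fitted decision tree as nested if-then rules.
# Uses scikit-learn's built-in iris data; max_depth=2 keeps the tree readable.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# export_text lays the tree out as nested split conditions,
# effectively the 'if-then' business rules described above.
print(export_text(tree, feature_names=list(iris.feature_names)))
```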

Black Box Models

In the context of AI/ML, a model falls under the black box category if its predictions cannot be understood by looking at its parameters alone. These models include older AI/ML methods like Support Vector Machines (SVM), Random Forest, and Gradient Boosting, as well as newer ones, including all deep learning-based models.
Here’s a quick overview of SVM, Random Forest, Gradient Boosting, and deep learning models, and of how much insight each one offers.

Support Vector Machine

Support Vector Machine (SVM) is traditionally used for classification problems, where the objective is to assign each observation to one of two classes. It can be considered an evolution of linear models. SVM works by finding a decision boundary, or hyperplane, that separates the data points into two classes. The SVM method is accurate, but it does not offer much insight in terms of explanation.
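For illustration, the small sketch below (on synthetic data) shows that an SVM with a non-linear kernel exposes no per-feature weights to inspect; only the special case of a linear kernel yields hyperplane coefficients.

```python
# A minimal sketch: an SVM separates two classes with a decision boundary,
# but with a non-linear kernel there are no per-feature weights to read off.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

svm = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", svm.score(X, y))   # accurate, but hard to explain

# Only a linear kernel exposes coef_ (the hyperplane weights);
# with kernels like 'rbf' the boundary lives in an implicit feature space.
linear_svm = SVC(kernel="linear").fit(X, y)
print("hyperplane weights:", linear_svm.coef_.round(2))
```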

Random Forest

Random Forest, as the name suggests, is a collection of decision trees built on random variations of the input data. The method repeatedly selects a subset of the data and builds a decision tree on it; both the rows and the input variables are sampled randomly for each of the many trees. Hence, Random Forest requires considerable computation power and time.

On the interpretation front, Random Forest provides variable importance. It ranks the input variables by their importance to the predictions, offering some insight; an important variable can be either negatively or positively correlated with the target variable. Since Random Forest creates multiple trees, any single decision tree used in the forest can be visualized. Still, how the forest of trees as a whole arrived at its conclusion is not evident.
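The sketch below (synthetic data, with purely hypothetical feature names) illustrates the kind of variable importance a random forest reports.

```python
# A brief sketch of the variable importance a random forest exposes.
# The data and feature names are synthetic, for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=4,
                           n_informative=2, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# feature_importances_ ranks inputs by their contribution across all trees,
# but it does not explain how any individual prediction was made.
for name, score in zip(["age", "income", "tenure", "region"], forest.feature_importances_):
    print(f"{name}: {score:.2f}")

# A single member tree can still be inspected, e.g. forest.estimators_[0],
# even though the ensemble as a whole has no single readable structure.
```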

Gradient Boosting

Like the Random Forest method discussed above, boosting methods are an extension of decision tree models. Boosting methods generally build decision trees one after the other, with each new tree trained on a modified version of the data that emphasizes the errors made by the previous trees.

Again, like Random Forest, gradient boosting methods also provide variable importance, giving some insight into the relative contribution of the input features.
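As a minimal sketch on synthetic data, a gradient boosting classifier reports the same kind of aggregate importance scores.

```python
# A minimal sketch: gradient boosting builds trees sequentially and, like a
# random forest, reports only aggregate variable importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

gbm = GradientBoostingClassifier(n_estimators=100, random_state=0).fit(X, y)

# Each successive tree is fit to correct the errors of the ones before it;
# feature_importances_ only summarizes their combined use of each input.
print(gbm.feature_importances_.round(2))
```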

Deep Learning

We are not covering all types of deep learning methods here. Broadly, though, deep learning models can be thought of as large neural networks with many layers of nodes. The first layer is connected to the input features, the intermediate (hidden) layers are connected to one another, and the final layer produces the output. The weights of the connections are calculated and optimized (by, for example, gradient descent) to produce the prediction. However, because of the sheer number of nodes and weight combinations, it is not easy to explain how a deep learning model works. Hence, deep learning models are treated as black box models.
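To give a sense of scale, here is a toy sketch that uses scikit-learn’s MLPClassifier as a small stand-in for a deep network (real deep learning models are far larger); even this little network learns thousands of weights, none of which maps to a human-readable rule.

```python
# A toy sketch of why neural networks are hard to read directly: even a small
# fully connected network has thousands of learned weights spread across layers.
# MLPClassifier is used here as a simple stand-in for a deep learning model.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X, y)

# Count every learned weight and bias across the layers.
n_params = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print("learned parameters:", n_params)   # several thousand, with no per-feature meaning
```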

Now that part 1 of this blog series has given us better context on the types of AI/ML models and how they function, let’s move on to part 2, which is the crux: what methods can you use to better explain AI/ML models?


Posted by Muneesh Bajpai
