Simple Guide To Machine Learning Interpretability

  • Interpretability has been proposed as the key to connecting the two worlds of machine learning algorithm development and real-world implementation.

Karthik Srinivasan | Apr 20, 2022, 06:16 PM | Updated 06:16 PM IST
There are fundamental aspects of machine learning interpretability that are underexplored.


The fields of machine learning (ML) and artificial intelligence (AI) have grown enormously over the last 20 years, thanks to advances in software, hardware, and algorithms.

In its early days in the 1950s, AI meant giving a machine expert rules to follow in order to reach a particular conclusion. These rules were painstakingly coded by domain experts (hence the name “expert rules”) and were not flexible enough to be carried over to different applications.

ML, considered a subset of AI, became popular mainly due to the abundance of data, as organisations were able to store more data at lower cost. Data mining, in turn considered a subset of ML, involved two main strategies for extracting value from data: identifying patterns such as groups of similar data points (unsupervised learning), and identifying relationships between input variables and an outcome of interest (supervised learning; for example, using information about a loan applicant to predict whether the loan will be approved).

Over the years, several machine learning models have been proposed and have shown promise across applications, giving rise to new disciplines concerned with the usability of ML, such as fairness, federated learning, explainability, and interpretability.

The discipline of machine learning interpretability (MLI) gained importance from the need to close the gap between the supply of and the demand for ML systems. On the supply side, ML technologies such as image processing and natural language processing can now make highly precise predictions; on the demand side, decision makers remain uncertain about the veracity of the outputs of ML applications in the real world.

Several decision fallacies of ML systems have been highlighted, such as the false incrimination of innocent people due to errors in face detection software and the rejection of credit card applications from creditworthy applicants due to systematic bias in data. Interpretability has been proposed as the key to connecting the two worlds of ML algorithm development and real-world implementation.

There exist simpler ML models, such as the decision tree, which provides rules like ‘if age > 25 and salary > 10 lakh per annum, then approve the credit card application’, or linear regression, which provides coefficients like ‘an increase of one unit in the use of fertiliser X increases crop yield by four units’.
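
As a rough sketch (not from the article), the snippet below fits both kinds of transparent model on tiny made-up datasets using scikit-learn; the feature names, numbers, and thresholds are invented purely to show how rules and coefficients can be read off directly.

```python
# Sketch: transparent models expose their reasoning directly.
# Data, feature names and thresholds are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.linear_model import LinearRegression

# Hypothetical credit-card data: columns are [age, salary in lakh per annum]
X = np.array([[22, 4], [30, 12], [45, 18], [26, 6], [35, 15], [28, 9]])
y = np.array([0, 1, 1, 0, 1, 0])  # 1 = application approved

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# Prints human-readable if/then rules learned by the tree
print(export_text(tree, feature_names=["age", "salary"]))

# Hypothetical crop-yield data: fertiliser units vs. yield
fertiliser = np.array([[1], [2], [3], [4], [5]])
crop_yield = np.array([4, 8, 12, 16, 20])
reg = LinearRegression().fit(fertiliser, crop_yield)
# The coefficient reads as "one extra unit of fertiliser adds ~4 units of yield"
print(reg.coef_)
```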

On the other hand, there also exist deep neural networks, loosely inspired by the human brain, which make decisions by connecting artificial neurons across many layers.

Intuitively, the latter perform better at decision making than linear equations or rule-based models, as they can capture a wider range of hidden relationships embedded in the data. The former are considered transparent ML models, while the latter, better-performing class of ML models is termed ‘black-box’ models.

Current efforts in MLI are geared towards two broad directions: generating transparent models that approach the predictive performance of black-box models, and developing methods that can explain the decisions made by black-box models.

Just a decade ago, most ML researchers were concerned mainly with making models more precise in their predictions; today, a sizable share of research interest also goes into explaining the rationale behind those predictions.

Surrogate modelling has become a popular direction of enquiry: a black-box model is first trained to make optimal predictions, and a second, simpler model is then trained to explain the predictions made by the first.
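
A minimal sketch of this idea, assuming scikit-learn and a synthetic dataset (none of which come from the article): a random forest stands in for the black box, and a shallow decision tree is fitted to the forest's predictions so that its rules approximate the black box. The fidelity score below simply measures how often the surrogate agrees with the black box.

```python
# Sketch: surrogate modelling on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Step 1: the "black box" is trained on the real labels
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Step 2: the surrogate is trained on the black box's predictions, not on y
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate's rules mimic the black box's decisions
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```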

Alternative approaches include counterfactual analysis, which examines what change in conditions would reverse a decision made by an ML model, and game-theoretic methods, which weigh each piece of input information against the others, analogous to working out the individual contributions of team members pursuing a single goal.
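
As a hedged illustration (not from the article), the sketch below probes a toy counterfactual: it takes a hypothetical declined applicant and raises the salary feature until the model's decision flips. All data and feature names are invented. The game-theoretic approach mentioned above is typically realised through Shapley-value attribution (as popularised by the SHAP library), which assigns each input feature a share of the credit for a prediction.

```python
# Sketch: a simple counterfactual probe on hypothetical credit data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical credit data: columns are [age, salary in lakh per annum]
X = np.array([[22, 4], [30, 12], [45, 18], [26, 6], [35, 15], [28, 9]])
y = np.array([0, 1, 1, 0, 1, 0])  # 1 = approved
model = RandomForestClassifier(random_state=0).fit(X, y)

applicant = np.array([27.0, 5.0])        # an applicant the model declines here
for extra in np.arange(0.0, 20.0, 0.5):  # probe progressively higher salaries
    candidate = applicant + np.array([0.0, extra])
    if model.predict(candidate.reshape(1, -1))[0] == 1:
        print(f"decision flips if salary rises to {candidate[1]} lakh")
        break
```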

Though computer scientists and mathematicians are eagerly working on new transparent models that can either augment or substitute for existing complex and opaque ML models, fundamental aspects of MLI remain very much underexplored. Philosophical and ontological questions such as ‘what is interpretability’, ‘how do humans interpret systems’ and ‘to what extent do we need to interpret machine learning models and AI systems’ require a wider gamut of skill sets and expertise beyond mathematical formulations and code implementations.

Perhaps understanding the extent of the human need for interpretability can ultimately help us reconcile ourselves to (on a pessimistic note) or arrest (on an optimistic note) the singularity, the much-prophesied hypothetical situation in which technological growth becomes uncontrollable, negating the need for humans.

This article has been published as part of Swasti 22, the Swarajya Science and Technology Initiative 2022. Read other Swasti 22 submissions.
