
    How to Convince Your Boss to Trust Your ML/DL Models

    Machine learning model interpretability using LIME, or how to explain why a model made a specific prediction

    Accuracy vs Explainability (Image by author)

    Introduction

Some company managers or stakeholders are skeptical about machine learning model predictions. It is therefore the data scientist's responsibility to convince them that a model's predictions are credible and understandable to humans. This means we need to focus not only on building powerful machine learning/deep learning models, but also on making those models interpretable.

Interpretability helps in many ways: it helps us understand how a model makes a decision, justifies predictions and yields insights, builds trust in the model, and helps us improve it. There are two types of ML model interpretation: global and local.

• Local interpretation answers the question: why did the model make this specific prediction?
• Global interpretation answers the question: which features are most important for the model's predictions overall?

Interpretability is the degree to which a human can understand the cause of a decision [Miller, 2017]

    In this article, we will focus on local interpretability and we will cover:

    1. Inherently interpretable models
    2. Local interpretation method: LIME
    3. Practical work — explaining XGBoost model prediction on a toy dataset
    4. Pros & cons of LIME

    1. Inherently Interpretable Models

Good examples of inherently interpretable models are linear regression and decision trees.

    1.1 Linear regression

    The intuition behind linear regression is that it predicts the target as a weighted sum of the input features.

ŷ = β₀ + β₁x₁ + β₂x₂ + … + βₚxₚ

Linear regression formula (Image by author)

Given the hypothesis function of linear regression, interpretation is straightforward: it is clear how much each feature contributed and which feature matters most for the prediction. But everything comes at a cost: the model's correctness depends on whether the relationships in the training data satisfy certain assumptions, such as linearity, normality, homoscedasticity, independence, and no multicollinearity.
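As a minimal sketch (with invented data and feature names), reading the learned coefficients is essentially all the interpretation a linear model needs:

```python
# Minimal sketch: interpreting a linear model through its coefficients.
# The data and feature names below are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1200, 3], [1500, 4], [800, 2], [2000, 5]])  # e.g. area, rooms
y = np.array([200_000, 260_000, 140_000, 330_000])         # e.g. price

model = LinearRegression().fit(X, y)

# Each coefficient is the change in the prediction for a one-unit increase
# in that feature, holding the other features fixed.
for name, coef in zip(["area", "rooms"], model.coef_):
    print(f"{name}: {coef:,.2f}")
print(f"intercept: {model.intercept_:,.2f}")
```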

You can find a detailed explanation of the linear regression algorithm in the previous article.

    1.2 Decision Tree

A decision tree is also easily interpretable. Additionally, it can capture non-linear relationships between features, which linear regression cannot. The tree is built by repeatedly splitting the data according to some criterion (e.g. the Gini index), creating different subsets of the dataset at each split. We can follow the structure of the tree from the root node down to a leaf node and understand why the model made a specific prediction. Here is a decision tree example visualization below.

    Decision tree example (Image by author)

    If you want to see more about how a decision tree works you can visit my previous article.
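As a small illustration (not the code from this article), printing a shallow tree's rules shows how a root-to-leaf path doubles as an explanation; here I assume the same breast-cancer dataset used in the practical section below:

```python
# Small illustration: a shallow decision tree whose printed rules can be read
# as explanations by following a root-to-leaf path.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

print(export_text(tree, feature_names=list(data.feature_names)))
```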

    2. Local interpretation method: LIME

As a model gains predictive power, it also becomes more complex, and explaining its predictions gets harder. Complex models, also called black-box models, such as XGBoost, Random Forests, and neural networks, are not inherently interpretable, so they need additional methods to explain their predictive behavior. LIME is a local interpretation method that explains how individual predictions of black-box ML models are made. The intuition behind LIME is that it creates a surrogate model (e.g. linear regression or a decision tree) that is trained to approximate the predictions of the underlying black-box model. Instead of training a global surrogate model, LIME focuses on training local surrogate models in order to explain individual predictions. The LIME recipe is shown in the figure below.

LIME recipe (Image by author)

The mathematical expression of the local surrogate model is:

explanation(x) = argmin(g ∈ G) [ L(f, g, π_x) + Ω(g) ]

where:

• x – the instance we want to interpret
• f – the original black-box model (e.g. a deep neural network, XGBoost)
• g – the surrogate model (e.g. linear regression, a decision tree)
• π_x – the proximity measure, which defines how large the neighborhood around x is, i.e. how strongly perturbed samples are weighted by their closeness to x
• L – the loss (e.g. MSE, cross-entropy) measuring how closely g approximates f within that neighborhood
• Ω(g) – the complexity of the surrogate model, kept low so the explanation stays simple
• G – the family of possible explanations (e.g. all possible linear models: linear regression, lasso, ridge)

In practice, the user determines the model complexity, which in effect means selecting the maximum number of features the surrogate model may use and the number of perturbed samples it is trained on.
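To make the objective concrete, here is a rough, hand-rolled sketch of the idea (not the lime library's implementation): perturb the instance, weight the perturbed samples by proximity, and fit a simple weighted linear surrogate. The noise scale and kernel width are arbitrary choices for illustration.

```python
# Conceptual sketch of the LIME objective, not the lime library itself.
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(f, x, n_samples=5000, kernel_width=0.75):
    # Perturbed dataset around the instance x (Gaussian noise; scale is arbitrary here).
    Z = x + np.random.normal(scale=0.5, size=(n_samples, x.shape[0]))
    # Proximity pi_x: an exponential kernel on the distance to x.
    weights = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / kernel_width ** 2)
    # The black-box model f labels the perturbed samples; the surrogate g fits them.
    y = f(Z)
    g = Ridge(alpha=1.0)
    g.fit(Z, y, sample_weight=weights)  # weighted loss L(f, g, pi_x); alpha plays the role of Omega(g)
    return g.coef_                      # local feature attributions

# Toy black box, just to show the call:
black_box = lambda Z: Z[:, 0] ** 2 + 3 * Z[:, 1]
print(local_surrogate(black_box, np.array([1.0, 2.0])))
```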

    3. Practical work — interpretability of XGBoost model

In this section, we will walk through a practical implementation of LIME. We will experiment on a toy dataset, load_breast_cancer from sklearn, which is a labeled dataset describing breast tumors as either benign or malignant.

    Step 1: Import libraries and dataset

First, we need to install the lime package using "pip install lime".
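A minimal version of this step might look like the following (variable names are my own choice):

```python
# Step 1 sketch: load the breast-cancer data into a DataFrame.
import pandas as pd
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
df = pd.DataFrame(data.data, columns=data.feature_names)
df["target"] = data.target  # 0 = malignant, 1 = benign
df.head()
```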

DataFrame result (Image by author)

    Step 2: Build the XGBoost model

With the dataset loaded, let's split the data into train and test sets, create a simple XGBoost model with default hyperparameters, and compute the confusion matrix and prediction accuracy.
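A sketch of this step might look as follows; the split ratio and random seed are assumptions, so the exact numbers may differ slightly:

```python
# Step 2 sketch: train/test split and a default XGBoost classifier.
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from xgboost import XGBClassifier

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="target"), df["target"], test_size=0.2, random_state=42
)

model = XGBClassifier()
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print("accuracy:", accuracy_score(y_test, y_pred))
```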

As the results show, the accuracy of the model is 96%; now it is time to explain individual predictions.

    Step 3: Create a surrogate model

Now we create a LIME tabular explainer object, which will try to explain individual samples. The surrogate model will be trained on a newly created dataset of 5,000 perturbed samples.
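A sketch of the explainer setup (the 5,000 perturbed samples are requested later, in the explain_instance call):

```python
# Step 3 sketch: a LIME explainer for tabular data.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    training_data=X_train.values,
    feature_names=X_train.columns.tolist(),
    class_names=["malignant", "benign"],
    mode="classification",
)
```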

Step 4: Interpret individual samples

In this case, I will explain two samples from the test set. One sample is classified correctly, whereas the other is misclassified.
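A sketch of how the two explanations can be produced; the row indices are placeholders, not the exact samples shown in the figures below:

```python
# Step 4 sketch: explain individual test samples with the LIME explainer.
import pandas as pd

# Wrap predict_proba so the perturbed numpy samples keep the original column names.
predict_fn = lambda z: model.predict_proba(pd.DataFrame(z, columns=X_train.columns))

for i in [0, 1]:  # placeholder indices; pick one correct and one misclassified sample
    exp = explainer.explain_instance(
        X_test.iloc[i].values,
        predict_fn,
        num_features=5,    # show the top 5 features, as in the figures below
        num_samples=5000,  # size of the perturbed dataset used to fit the surrogate
    )
    exp.show_in_notebook(show_table=True)
```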

    Sample 1 (Image by author)
    Sample 2 (Image by author)

    As the visualization shows, the colors blue and orange represent negative and positive associations, respectively.

The first sample is a correctly classified example, predicted as the benign class. Let's answer the question: why was this sample classified as benign? Because the surrogate model says that if "worst texture" ≤ 21.05, "worst concave points" > 0.06, "worst concavity" > 0.12, "area error" ≤ 18.17, or "mean concave points" > 0.1, the prediction tends toward benign. All the feature values of this sample satisfy these conditions, therefore the example is classified as benign.

The second sample is misclassified. The question is: why was this sample classified as benign even though it is malignant? As the visualization above shows, the surrogate model found that three of the top five features satisfy benign conditions, and their weighted contribution is greater than the weighted contribution of the features that satisfy malignant conditions. That is why this specific sample was misclassified.

    4. Pros and Cons of LIME

The advantage of the LIME method is that it makes explanations human-friendly. Although we discussed examples with tabular data, LIME can also be used for text and image data. One of LIME's disadvantages is that the perturbed data points are sampled from a Gaussian distribution, ignoring correlations between features. In addition, explanations can sometimes be unstable. And because LIME is an approximation, it is not a sufficient method when you are legally required to fully explain a prediction.

    Conclusion

To sum up, LIME is a powerful method that answers the questions: why should I trust the ML model, and why did it make this specific prediction? It builds a local surrogate model that closely approximates the original black-box model, which makes individual predictions easier to explain. When the model is interpretable, it is also easier for data scientists to convince managers and stakeholders that its predictions are credible.

I hope you liked it 🙂. The full code is also available on my GitHub.

The next article will be about the global interpretability of ML/DL models.

You can follow me on Medium to stay notified.

    References:

[1] Molnar, Christoph, "Interpretable Machine Learning: A Guide for Making Black Box Models Explainable" (2019)

[2] Miller, Tim, "Explanation in Artificial Intelligence: Insights from the Social Sciences" (2017)

[3] Ribeiro, Marco Tulio, "Model-Agnostic Interpretability of Machine Learning" (2016)
