How the SHAP Model Can Be Used to Answer the Question ‘Why’

Posted by Helene Stafferöd Westerlund on 11/5/18 4:36 PM

AI has been gaining popularity in recent years, particularly as it’s been improving in leaps and bounds for a whole range of tasks: from predicting the risk of child abuse in a given family to beating world champion Go players. 

However, one critical problem with using it for decision making in business is that the machine learning models can't explain the predictions they make: a customer might, reasonably, want to know why their credit card or home loan application was denied.

Image: Andy Kelly

Wouldn’t it be great if we could use these highly effective machine learning models to help us make decisions, and also give our customers detailed and accurate explanations of the reasoning behind them?


While there has been real progress recently in extending or supplementing machine learning algorithms with this ability to explain their decisions, many of the techniques are specific to one kind of machine learning. This makes it frustrating to keep up with new techniques and to understand how they work, how to use them, and how to interpret them. 

SHAP Values Providing an Explanation 

The SHapley Additive exPlanations (SHAP) framework provides clear explanations for every kind of machine learning model – from tree classifiers to deep convolutional neural networks. Every feature used in the model is given a relative importance score: a SHAP value.

This tells us how much that particular feature contributed to the decision of the model. For example, it might be that when computing risk scores for customers applying for a new home loan, the model uses the credit score of the applicant as the main deciding factor and salary as the second. Specific values of these features can also be considered cut-offs for categories such as “high” or “low” risk. 
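To make this concrete, here is a minimal sketch of how SHAP values might be computed with the open-source shap library. The feature names, the synthetic data and the random-forest risk model are all illustrative assumptions, not something taken from a real credit system.

```python
# A minimal sketch, assuming a toy credit-risk setup: the feature names,
# synthetic data and model choice below are illustrative, not from the post.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "credit_score": rng.integers(300, 850, 500),
    "salary": rng.integers(20_000, 120_000, 500),
    "age": rng.integers(18, 75, 500),
})
# Assumed target: a synthetic risk score driven mainly by the credit score.
y = 1.0 - (X["credit_score"] - 300) / 550 + 0.05 * rng.random(500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])  # explain the first applicant

# One SHAP value per feature: how much that feature pushed this applicant's
# risk score away from the model's average output.
for feature, value in zip(X.columns, shap_values[0]):
    print(f"{feature}: {value:+.4f}")
```

In this sketch the printed values are exactly the per-feature explanation described above: a large positive value for credit_score would tell the applicant that their credit score was the main factor pushing their risk score up.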

The SHAP framework is based on Shapley values, named after Lloyd Shapley, who derived them in his work on fairly allocating profits and costs among participants in cooperative game theory. A Shapley value represents a relative contribution: of an agent in the cooperative task, or, in our setting, of a feature to the machine learning model's prediction.
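For reference, the Shapley value of a feature i is the standard weighted average of its marginal contributions over all subsets of the other features (this is the textbook formulation, not spelled out in the original post):

```latex
% Shapley value of feature i. N is the set of all features (players) and
% v(S) is the expected model output when only the features in S are known.
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
            \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}
            \bigl( v(S \cup \{i\}) - v(S) \bigr)
```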

The Art of Being Consistent and Accurate

In fact, SHAP values are arguably the only measure of the importance of different features for a single prediction that is both consistent and accurate. In this context, consistency means that if we modify the model so that it relies more on a particular feature, the importance assigned to that feature should not decrease.

Image: Dan Gold

Why does that matter? Because if our explanation framework isn't consistent, a feature being assigned a higher importance doesn't imply that the model actually relies more on that feature, and that is exactly what we need to know to get a true explanation.

Accuracy means that the importance scores of all the features should add up to the total importance of the model's prediction, that is, how far it moves away from the average prediction. If our explanation framework is not accurate in this sense, then we can't read off the importance scores as relative contributions compared with the other features. We can't say:

The main factors contributing to your rejection were your credit score and your current salary.
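As a small, self-contained illustration of this accuracy (additivity) property, the sketch below brute-forces exact Shapley values for a toy three-feature scoring function and checks that they sum to the prediction minus the average prediction. The scoring function, background data and feature count are assumptions made purely for the example.

```python
# A toy, self-contained check of the accuracy (additivity) property:
# exact Shapley values, computed by enumerating all coalitions, sum to
# f(x) minus the average prediction over the background data.
from itertools import combinations
from math import factorial

import numpy as np

rng = np.random.default_rng(0)
background = rng.normal(size=(200, 3))   # assumed background data set
x = np.array([1.5, -0.5, 2.0])           # the instance to explain

def f(X):
    """Toy 'model': a fixed linear score over three features."""
    return X @ np.array([3.0, 1.0, 0.5])

def value(subset):
    """Expected model output when only the features in `subset` are known
    (the remaining features are filled in from the background data)."""
    X = background.copy()
    X[:, list(subset)] = x[list(subset)]
    return f(X).mean()

n = 3
phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for size in range(n):
        for S in combinations(others, size):
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi[i] += weight * (value(S + (i,)) - value(S))

print("Shapley values:", phi.round(4))
print("sum of values :", phi.sum().round(4))
print("f(x) - E[f]   :", (f(x[None, :])[0] - f(background).mean()).round(4))
```

The last two printed numbers should match (up to floating-point error), which is exactly what lets us present the individual scores as the factors that made up the final decision.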

We need both consistency and accuracy in our explanation framework, and the SHAP values technique is the only one that provides both. It also works for every kind of machine learning model, so whatever your favourite is, you can start using SHAP values immediately.

Learn more about our solutions  

Topics: AI, Risk, Tech, ML, Credit scoring, SHAP values