Friday October 29 11:00 AM – Friday October 29 1:00 PM in Workshop/Tutorial II

Behind the Black Box: How to Understand Any ML Model Using SHAP

Jonathan Bechtel

Prior knowledge:
No previous knowledge expected

Summary

This workshop shows how to take the highest-performing ML models, such as gradient boosting and neural networks, and understand what contributes to their predictions at both a local and a global level, so that their output is easily understood by practitioners and non-practitioners alike.

Description

Part 1: An Introduction to Interpretable ML

  • Why interpretability concerns lead to weaker models being used more often than they should be
  • The current state of techniques for interpreting non-linear ML models, and their major shortcomings
  • What's currently missing in the toolkit for understanding black box ML models

Part 2: An Introduction to SHAP

  • A brief history of understanding black box models, and how it led to the need for SHAP
  • Why SHAP is a theoretically sound application of game theory to understand any ML model, regardless of how it generates predictions
  • A close look at SHAP's source code to understand how it computes its results (a short sketch follows this list)
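
The following is a minimal sketch (not the workshop's own notebook) of the core SHAP workflow on a tree ensemble, using scikit-learn's bundled diabetes dataset. It also checks the additivity property SHAP inherits from game-theoretic Shapley values: one row's per-feature attributions sum to that row's prediction minus the model's expected output.

    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import GradientBoostingRegressor

    # Fit a gradient boosting model on a small bundled dataset
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = GradientBoostingRegressor(random_state=0).fit(X, y)

    # TreeExplainer computes exact Shapley values for tree ensembles
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)

    # Local accuracy: base value + sum of attributions equals the model's prediction
    row = 0
    reconstructed = explainer.expected_value + shap_values[row].sum()
    print(reconstructed, model.predict(X.iloc[[row]])[0])  # approximately equal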

Part 3: SHAP in the Wild

  • How to derive local explanations for a single model prediction (or how to be more like linear regression); see the tabular sketch after this list
  • Creating odds ratios from SHAP values
  • Using SHAP to understand feature interaction effects among correlated data
  • Using SHAP with neural networks and unstructured data: understanding word contributions to a Transformer NLP model (see the text sketch after this list)
  • Examples of SHAP being used in production
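
Below is a minimal sketch, hypothetical rather than the workshop's exact notebook, of the tabular material in Part 3: a local explanation for a single prediction (read much like a linear-regression coefficient breakdown), odds ratios obtained by exponentiating log-odds SHAP values, and pairwise interaction values.

    import numpy as np
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # For this binary classifier the attributions are in log-odds units
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Local explanation for one row: each feature's additive push on the log-odds
    row = 0
    contributions = dict(zip(X.columns, shap_values[row]))

    # Odds ratios: exponentiating a log-odds contribution gives that feature's
    # multiplicative effect on the predicted odds for this row
    odds_ratios = {name: np.exp(v) for name, v in contributions.items()}

    # Interaction values: an (n_samples, n_features, n_features) array whose
    # off-diagonal entries split credit between pairs of features
    interactions = explainer.shap_interaction_values(X.iloc[:100])
    print(interactions.shape)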
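
And a minimal sketch of SHAP on unstructured text, following the pattern in shap's text examples and assuming the Hugging Face transformers library; the checkpoint named here is illustrative, not necessarily the one used in the workshop.

    import shap
    from transformers import pipeline

    # A pretrained sentiment classifier (illustrative checkpoint)
    classifier = pipeline("sentiment-analysis",
                          model="distilbert-base-uncased-finetuned-sst-2-english")

    # shap wraps the pipeline and masks tokens to measure word contributions
    explainer = shap.Explainer(classifier)
    shap_values = explainer(["This workshop made black box models much clearer."])

    # Per-token contribution view of the prediction
    shap.plots.text(shap_values)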