- Prior knowledge:
- No previous knowledge expected

This workshop explains how to take the highest-performing ML models, such as gradient boosting and neural networks, and understand what contributes to their predictions at both a local and a global level, so that their output is easily understood by practitioners and non-practitioners alike.

- How the demand for interpretability leads to lower-performing models being used more than they ought to be
- The current state of methods for interpreting non-linear ML models, and their major shortcomings
- What's currently missing in the toolkit for understanding black box ML models

- A brief history of understanding black box models, and how it led to the need for SHAP
- Why SHAP is a theoretically sound application of game theory for understanding any ML model, regardless of how it generates predictions (a minimal usage sketch follows this list)
- A close look at SHAP's source code to understand how it computes its results
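To make this concrete, here is a minimal sketch of the core SHAP workflow on a tabular model. The dataset (shap's bundled census data), the xgboost model, and the sample sizes are placeholders chosen for illustration, not the workshop's own materials:

```python
# A minimal sketch of computing SHAP values for a tabular model.
# The dataset and model here are illustrative placeholders.
import shap
import xgboost

X, y = shap.datasets.adult()                      # census income data bundled with shap
model = xgboost.XGBClassifier().fit(X, y.astype(int))

# shap.Explainer inspects the model and dispatches to a suitable algorithm
# (the fast tree explainer here); the same interface covers other model types.
explainer = shap.Explainer(model)
shap_values = explainer(X[:500])                  # one vector of SHAP values per prediction

# Global view: mean absolute SHAP value per feature.
shap.plots.bar(shap_values)
```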

- How to derive local explanations for a single model prediction (or how to be more like linear regression); see the first sketch after this list
- Creating odds ratios
- Using SHAP to understand feature interaction effects among correlated data (second sketch below)
- Using SHAP with neural networks and unstructured data: understanding word contributions to a Transformer NLP model (third sketch below)
- Examples of SHAP being used in production
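First, a local explanation for a single prediction, with its SHAP values converted into per-feature odds ratios. This reuses `shap_values` from the earlier sketch and assumes, as is the case for most tree-based classifiers in shap, that the SHAP values are on the log-odds scale; the exact odds-ratio recipe is my illustration, not necessarily the workshop's:

```python
# Local explanation for one prediction, plus a rough odds-ratio view.
# Reuses `shap_values` from the earlier sketch; assumes each SHAP value is an
# additive contribution in log-odds space, so exp() turns it into a
# multiplicative effect on the odds.
import numpy as np
import shap

row = shap_values[0]                   # explanation for a single prediction
shap.plots.waterfall(row)              # additive, feature-by-feature breakdown

odds_ratios = {
    name: float(np.exp(value))
    for name, value in zip(row.feature_names, row.values)
}
for name, ratio in sorted(odds_ratios.items(), key=lambda kv: abs(np.log(kv[1])), reverse=True):
    print(f"{name}: multiplies the odds by {ratio:.2f}")
```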
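Second, feature interaction effects. For tree models, shap can split each feature's contribution into a main effect plus pairwise interaction terms; the feature pair plotted below is only an illustrative choice, and `model` and `X` are reused from the first sketch:

```python
# SHAP interaction values (tree models): each feature's contribution is
# decomposed into a main effect plus pairwise interaction terms.
import shap

tree_explainer = shap.TreeExplainer(model)
interaction_values = tree_explainer.shap_interaction_values(X[:500])

# Dependence plot of the Age x Education-Num interaction term.
shap.dependence_plot(("Age", "Education-Num"), interaction_values, X[:500])
```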
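Third, word-level contributions for a Transformer text classifier, here via a Hugging Face pipeline. The model name and example sentence are placeholders, and this is a sketch of one common pattern rather than the workshop's exact example:

```python
# Word-level SHAP contributions for a Transformer text classifier.
# The pipeline model and the example sentence are placeholders.
import shap
import transformers

classifier = transformers.pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    return_all_scores=True,            # shap needs scores for every class
)

# shap wraps the pipeline and builds a text masker from its tokenizer.
explainer = shap.Explainer(classifier)
shap_values = explainer(["The model's predictions were surprisingly easy to trust."])

# Highlight each token's contribution to the POSITIVE class.
shap.plots.text(shap_values[:, :, "POSITIVE"])
```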