Friday 10:45 AM–12:15 PM in Central Park West (#6501)

Open the Black Box: an Introduction to Model Interpretability with LIME and SHAP

Kevin Lemagnen

Audience level:
Novice

Description

What's the use of sophisticated machine learning models if you can't interpret them? This workshop covers two recent model interpretability techniques that are essential in your data science toolbox: LIME and SHAP. You will learn how to apply these techniques in Python on a real-world data science problem.

Abstract

What's the use of sophisticated machine learning models if you can't interpret them? Many industries, including finance and healthcare, require clear explanations of why a decision was made. This workshop covers two recent model interpretability techniques that are essential in your data science toolbox: LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations). You will learn how to apply these techniques in Python on a real-world data science problem. You will also learn the conceptual background behind these techniques so you can better understand when they are appropriate.
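To give a flavor of the conceptual background the workshop covers: SHAP is grounded in Shapley values from cooperative game theory, which attribute a model's prediction to each feature by averaging that feature's marginal contribution over all possible feature coalitions. The sketch below computes exact Shapley values for a toy model by brute-force enumeration (the `shap` library uses far more efficient estimators; the function names and the baseline-substitution scheme here are illustrative assumptions, not the library's API):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for prediction f(x).

    Features absent from a coalition are 'switched off' by
    replacing them with their baseline value (one common
    convention; real SHAP implementations offer several).
    """
    n = len(x)

    def value(coalition):
        # Evaluate the model with only the coalition's features "present".
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Standard Shapley weight |S|! (n-|S|-1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += w * (value(set(subset) | {i}) - value(set(subset)))
        phis.append(phi)
    return phis

# Toy linear model: prediction = 2*x0 + 3*x1 + x2
f = lambda z: 2 * z[0] + 3 * z[1] + z[2]
x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
print(shapley_values(f, x, baseline))  # -> [2.0, 6.0, 3.0]
```

For a linear model with a zero baseline, each feature's Shapley value reduces to its weight times its value, and the attributions always sum to `f(x) - f(baseline)`, a property SHAP calls local accuracy.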

Content available here (see README for setup instructions)
