Monday 12:35–13:05 in Main Track

Can you trust neural networks?

Mateusz Opala

Audience level:
Experienced

Description

Recently, neural networks have become superior in many machine learning tasks. However, they are more difficult to interpret than simpler models such as decision trees. That is not acceptable in industries like healthcare or law. In this talk, I will present a unified approach to explaining the output of any machine learning model, with a focus on neural networks.

Abstract

The talk is built around three main points: 1) Why is interpretability important? 2) Introducing Shapley Additive Explanations (SHAP) 3) The SHAP framework in Python

In 1) I will elaborate on the need for interpretability of machine learning models. Next, I will introduce the SHAP framework from a theoretical standpoint and provide the intuitions behind it. SHAP was introduced a year ago at the NIPS conference and tends to agree better with human intuition than LIME or the original DeepLIFT. In the last part, I will show how to use SHAP in Python on several examples, including image classification and text classification.
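As a taste of what the Python part covers, here is a minimal sketch of the shap package on a tree-based regressor; the dataset and model are illustrative choices, not taken from the talk, and for neural networks the analogous explainers are DeepExplainer or GradientExplainer.

    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import GradientBoostingRegressor

    # Illustrative data and model, not from the talk itself
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = GradientBoostingRegressor().fit(X, y)

    # TreeExplainer computes Shapley value attributions for tree ensembles
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # one additive contribution per feature per sample

    # Global summary of how each feature pushes predictions up or down
    shap.summary_plot(shap_values, X)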
