Wednesday, Oct. 7, 2020, 3:30 p.m.–4 p.m., Online

Understanding Deep Neural Networks

Vladimir Osin

Audience level:
Intermediate

Description

As deep learning practitioners, we would like to know which input features are responsible for a model's decision, so that we can start treating our models as white boxes. In the literature, this problem is known as attribution. In this talk, we discuss the attribution problem and several solutions that you can start using today in the PyTorch ecosystem.

Abstract

At Signify Research, we use deep neural networks in a number of modelling use cases, so model interpretability is important for understanding how our models perform.

While debugging deep neural networks, we asked ourselves three important questions:

This talk focuses on the first question, commonly known as the attribution problem. We briefly discuss the available methods (propagation-based, such as Grad-CAM, and perturbation-based), try them in action, and discuss use cases where you can apply them.
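As an illustration of the propagation-based family, the sketch below runs Grad-CAM with Captum's LayerGradCam. The model, input tensor, target layer, and class index are placeholder assumptions for this example, not material from the talk itself.

```python
import torch
from torchvision import models
from captum.attr import LayerGradCam, LayerAttribution

# Placeholder setup: a pretrained ResNet-18 and a random tensor standing in
# for a preprocessed 224x224 RGB image (assumptions for this sketch).
model = models.resnet18(pretrained=True).eval()
image = torch.rand(1, 3, 224, 224)

# Grad-CAM attributes the prediction to the activations of a chosen
# convolutional layer; here we pick the last residual block.
gradcam = LayerGradCam(model, model.layer4)
attribution = gradcam.attribute(image, target=281)  # hypothetical class index

# Upsample the coarse layer attribution to the input resolution for overlaying.
heatmap = LayerAttribution.interpolate(attribution, (224, 224))
print(heatmap.shape)  # torch.Size([1, 1, 224, 224])
```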

Participants will have the opportunity to follow the hands-on part of this tutorial, hosted on GitHub as a Colab notebook. The hands-on part is based on the TorchRay and Captum interpretability libraries from the PyTorch ecosystem.
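For the perturbation-based family, a minimal sketch using Captum's Occlusion might look like the following; the window and stride sizes are illustrative values, not recommendations from the talk.

```python
import torch
from torchvision import models
from captum.attr import Occlusion

# Same placeholder setup as in the Grad-CAM sketch above.
model = models.resnet18(pretrained=True).eval()
image = torch.rand(1, 3, 224, 224)

# Occlusion slides a baseline patch over the input and measures how much the
# target score changes, attributing importance to the occluded region.
occlusion = Occlusion(model)
attr = occlusion.attribute(
    image,
    target=281,                          # hypothetical class index
    sliding_window_shapes=(3, 15, 15),   # occlude all channels, 15x15 pixels
    strides=(3, 8, 8),
    baselines=0,                         # replace occluded pixels with zeros
)
print(attr.shape)  # torch.Size([1, 3, 224, 224])
```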
