Sunday 11:45 AM–12:30 PM in Auditorio UTN

Know what you don't know: Tools to understand uncertainty in DL and use it in your favor [ES]

Julian Eisenschlos

Audience level:
Intermediate

Description

As ML finds its way into critical applications like healthcare and autonomous vehicles, important concerns arise. When should a model defer to a human? How much risk is reasonable? Through the lens of Bayesian Neural Networks, we will show how to measure model uncertainty in Deep Learning models in practical settings, with immediate applications to active sampling and reinforcement learning.

Abstract

In this talk, we want to bridge the gap between everyday Deep Learning practice and Bayesian Inference.

As motivation, we will start by introducing common problems in machine learning systems that arise from not knowing how uncertain a model is about a given input. This limits applications that need robust solutions and can impact people's lives, such as healthcare, financial trading, and autonomous vehicles.

We will present Bayesian Neural Networks and cover the fundamentals of Bayesian Inference. Dropout layers and other stochastic regularization techniques, when viewed through the lens of BNNs, offer out-of-the-box tools to measure uncertainty that we can add with little or no change to existing architectures.
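
To give a flavor of what this looks like in practice, below is a minimal sketch of Monte Carlo dropout in Keras: dropout is kept active at prediction time and several stochastic forward passes are aggregated into a predictive mean and an entropy-based uncertainty score. The architecture, the number of passes T, and the function names are illustrative assumptions, not code from the talk.

    import numpy as np
    import tensorflow as tf

    # Toy classifier with a Dropout layer; any dropout-regularized model works.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    def mc_dropout_predict(model, x, T=50):
        # training=True keeps the Dropout layers stochastic at inference time,
        # so each forward pass samples a different sub-network.
        probs = np.stack([model(x, training=True).numpy() for _ in range(T)])
        mean = probs.mean(axis=0)  # predictive mean over T samples
        entropy = -(mean * np.log(mean + 1e-12)).sum(axis=-1)  # uncertainty per example
        return mean, entropy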

To close, we will walk through real-life applications of these techniques, down to code snippets. Better estimates of what the model doesn't know enable better explore-exploit trade-offs in reinforcement learning problems and more efficient use of annotations through active sampling.
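
As one hedged example of the active sampling idea, the sketch below reuses the mc_dropout_predict helper from the earlier snippet to score a pool of unlabeled examples and pick the most uncertain ones for annotation; the function name and budget are hypothetical.

    import numpy as np

    def select_for_annotation(model, pool_x, budget=100, T=50):
        # Score the unlabeled pool by predictive entropy (MC dropout),
        # then return the indices of the most uncertain examples,
        # which are the ones sent to human annotators.
        _, entropy = mc_dropout_predict(model, pool_x, T=T)
        return np.argsort(-entropy)[:budget]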
