Thursday 15:35–16:05 in Main Track

Posterior Collapse in Deep Generative Models

Michał Jamroż

Audience level:
Intermediate

Description

Generative models are powerful machine learning models for extracting information from high-dimensional data, but they sometimes suffer from a problem called "posterior collapse", which prevents them from learning representations of practical value. I am going to show why and when it happens, and how to deal with it.
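To make "how to deal with it" concrete, here is a minimal sketch of one commonly used mitigation, KL warm-up (beta annealing), written in PyTorch-style Python. The function name and warm-up schedule are illustrative assumptions, not the specific remedies covered in the talk:

```python
import torch

def vae_loss(recon_loss, mu, logvar, step, warmup_steps=10_000):
    """ELBO-style loss with KL warm-up (beta annealing), one common
    mitigation for posterior collapse: the KL weight beta grows from
    0 to 1, so the decoder cannot cheaply ignore the latent code z
    early in training. (Illustrative sketch, not the talk's recipe.)"""
    # Analytic KL(q(z|x) || N(0, I)) for a diagonal-Gaussian posterior
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    beta = min(1.0, step / warmup_steps)  # linear warm-up schedule
    # A KL that stays near zero throughout training is the classic
    # symptom that the posterior has collapsed to the prior.
    return recon_loss + beta * kl
```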

Abstract

Why

Deep generative models like Variational AutoEncoders (VAEs) and Generative Adversarial Networks (GANs) have turned out to be very successful in real-world applications of machine learning, including natural image modelling, data compression, audio synthesis, and many more. Unfortunately, models belonging to the VAE family may, under some conditions, suffer from an undesired phenomenon called "posterior collapse", which causes them to learn poor data representations. The purpose of this talk is to present the problem and its practical implications.
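For readers who want the formal picture before the talk, posterior collapse has a compact description in standard VAE notation (the symbols below follow the usual convention, not the speaker's slides):

```latex
% VAE training objective: the evidence lower bound (ELBO)
\mathcal{L}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
  - D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big)

% Posterior collapse: the approximate posterior matches the prior
% for (almost) every input, so the KL term vanishes.
q_\phi(z \mid x) \approx p(z)
  \quad\Longrightarrow\quad
  D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big) \approx 0
```

When this happens, the latent code z carries no information about x and the decoder learns to ignore it, which is why the learned representation loses practical value.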

What

The presentation will comprise the following elements:

Audience

Familiarity with generative modelling will be helpful for anyone attending the talk, but it is not required. In fact, anyone with a basic understanding of neural networks, representation learning, and probability can gain useful information. The presentation won't be overloaded with mathematical formulas; I will do my best to present the math-related aspects in an intuitive form.
