Friday 16:30–17:15 in Hall 5

Bayesian Optimization and its application to Neural Networks

Moritz Neeb

Audience level:
Intermediate

Description

This talk will cover the fundamentals of Bayesian Optimization and how it can be used to tune ML algorithms in Python. To this end, we'll consider its application to Neural Networks: the NNs will be implemented in keras, and the Bayesian Optimization will be driven by hyperas/hyperopt.

Abstract

Have you ever failed to train a Neural Network? Spent hours just to get it to learn anything at all? If you ask an expert how she does it, the answer might be something like: "It takes a lot of experience and some luck." If you know this problem, then this talk is for you. And let's steer our luck with ML!

When tuning hyperparameters, an expert has built a model: a set of expectations about how the output might change in response to a certain parameter adjustment. For example, what happens to your Convolutional Neural Network if you lower the dropout from 0.5 to 0.25?

Bayesian Optimization is a method that builds exactly this kind of model. It uses, for example, Gaussian Processes to decide which parameter change might bring the most benefit, and if the change does not pay off, the model is adapted accordingly.
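The observe-model-decide loop can be sketched in a few lines of plain Python. This is a toy illustration, not a real Gaussian Process: the surrogate below simply predicts from the nearest observed point with a crude distance-based uncertainty, and the next trial is chosen by a lower confidence bound that trades off exploiting low predicted values against exploring uncertain regions.

```python
import random

def objective(x):
    # Expensive black-box function we want to minimize
    # (a stand-in for, e.g., validation loss as a function of dropout).
    return (x - 2.0) ** 2

def surrogate(x, observed):
    """Toy surrogate: predict the value at x from the nearest observed
    point, with an uncertainty that grows with distance to it."""
    xs, ys = zip(*observed)
    nearest = min(range(len(xs)), key=lambda i: abs(xs[i] - x))
    mean = ys[nearest]
    std = abs(xs[nearest] - x)  # crude distance-based uncertainty
    return mean, std

def propose(observed, candidates, kappa=1.0):
    """Pick the candidate with the lowest lower confidence bound:
    exploit low predicted values, explore far-away (uncertain) points."""
    def lcb(x):
        mean, std = surrogate(x, observed)
        return mean - kappa * std
    return min(candidates, key=lcb)

random.seed(0)
candidates = [i / 10.0 for i in range(-50, 51)]  # grid on [-5, 5]
observed = [(x, objective(x)) for x in random.sample(candidates, 3)]

for _ in range(20):
    x_next = propose(observed, candidates)       # decide where to try next
    observed.append((x_next, objective(x_next))) # update the model's data

best_x, best_y = min(observed, key=lambda p: p[1])
print(best_x, best_y)  # homes in on the minimum near x = 2
```

A real Bayesian Optimization library replaces the nearest-neighbor surrogate with a proper probabilistic model (e.g. a Gaussian Process) and the confidence bound with a principled acquisition function such as expected improvement.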

This talk will be about the fundamentals of Bayesian Optimization and how it can be used to train ML Algorithms in Python.

To this end, we'll consider its application to Neural Networks: the NNs will be implemented in keras, and the Bayesian Optimization will be driven by hyperas/hyperopt.

I am planning to split this talk 50:50 into theory and practice.