Friday October 29 11:00 AM – Friday October 29 11:30 AM in Talks I

Cutting edge hyperparameter tuning made simple with Ray Tune

Antoni Baum

Prior knowledge:
No previous knowledge expected

Summary

Hyperparameter tuning is a major bottleneck in the machine learning pipeline. We will give an overview of both standard and cutting-edge tuning methods. Then, we will showcase Ray Tune and its scikit-learn wrapper, tune-sklearn, which provide a unified framework for distributed hyperparameter optimization and allow easy integration into existing scikit-learn based pipelines, as sketched below.
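
As a taste of that integration, here is a minimal sketch using tune-sklearn's TuneSearchCV; the dataset, estimator, and parameter ranges are illustrative only, and Bayesian search assumes scikit-optimize is installed alongside tune-sklearn:

    # Minimal sketch: TuneSearchCV as a drop-in for scikit-learn's
    # RandomizedSearchCV (illustrative data and parameter ranges).
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from tune_sklearn import TuneSearchCV

    X, y = make_classification(n_samples=1000, n_features=20)

    search = TuneSearchCV(
        SGDClassifier(),
        param_distributions={"alpha": (1e-4, 1e-1), "epsilon": (1e-2, 1e-1)},
        search_optimization="bayesian",  # "random", "bohb", etc. also supported
        n_trials=10,
    )
    search.fit(X, y)  # trials are executed in parallel on Ray
    print(search.best_params_)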

Description

Modern ML model performance depends heavily on the choice of hyperparameters, making hyperparameter tuning a major bottleneck in the machine learning pipeline. Nearly all ML models have hyperparameters that affect predictive quality, and gradient boosting models, the current state of the art for tabular data, have many hyperparameters that can drastically change their behavior.

In this talk, we will overview the standard methods for hyperparameter tuning: grid search, random search, and Bayesian optimization. We will also showcase cutting-edge methods such as BOHB, BlendSearch, and HyperSched, and discuss the challenges of experimenting with and implementing cutting-edge optimization across diverse libraries and algorithms. Then, we will showcase Ray Tune and its scikit-learn wrapper, tune-sklearn, which present a unified API for distributed hyperparameter optimization, and demonstrate how simple tune-sklearn is to use and integrate into existing scikit-learn based pipelines.
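
To make the unified API concrete, below is a minimal sketch of Ray Tune's function-based API; the toy objective and the hyperparameter names are illustrative, not taken from the talk:

    # Minimal sketch of Ray Tune's function API (toy objective for illustration).
    from ray import tune

    def objective(config):
        # Stand-in for training and evaluating a real model.
        loss = (config["lr"] - 0.1) ** 2 + config["momentum"]
        tune.report(mean_loss=loss)

    analysis = tune.run(
        objective,
        config={
            "lr": tune.loguniform(1e-4, 1e-1),
            "momentum": tune.uniform(0.0, 1.0),
        },
        num_samples=20,      # random search by default; cutting-edge methods
        metric="mean_loss",  # such as BOHB plug in through the search_alg=
        mode="min",          # and scheduler= arguments of tune.run
    )
    print(analysis.best_config)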

This talk is geared toward data scientists and engineers familiar with the machine learning development process, but newcomers to the field are welcome.