Friday October 29 12:30 PM – Friday October 29 1:00 PM in Talks I

Exploring Tools for Interpretable Machine Learning

Juan Orduz

Prior knowledge:
Previous knowledge expected
Basic concepts in machine learning (e.g. linear models and tree ensembles)

Summary

In this talk we want to explore various ways of getting a better understanding of how some families of machine learning models generate predictions and how features interact with each other. We take a hands-on approach: the task is to predict daily counts of rented bicycles. We present both model-specific and model-agnostic approaches. https://juanitorduz.github.io/interpretable_ml

Description

In this talk we want to explore various ways of getting a better understanding of how some families of machine learning models generate predictions and how features interact with each other. We take a hands-on approach: the task is to predict daily counts of rented bicycles as a function of time and other external regressors like temperature and humidity (http://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset). For this purpose, after an initial EDA phase, we will train two types of models: (1) a regularised linear regression and (2) an XGBoost regressor.
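The two model families can be sketched as follows. This is a minimal illustration, not the talk's actual code: the data is synthetic (the column meanings are stand-ins for the bike-sharing features), and scikit-learn's GradientBoostingRegressor stands in for XGBoost so the snippet has no extra dependency.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the bike-sharing regressors:
# columns play the role of temperature, humidity, and day of week.
rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.uniform(0, 1, n),          # "temperature" (normalised)
    rng.uniform(0, 1, n),          # "humidity" (normalised)
    rng.integers(0, 7, n),         # "day of week"
])
y = 2000 + 3000 * X[:, 0] - 1000 * X[:, 1] + rng.normal(0, 100, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# (1) regularised linear regression
ridge = Ridge(alpha=1.0).fit(X_train, y_train)
print("Ridge R^2:", ridge.score(X_test, y_test))

# (2) tree ensemble; a stand-in for the XGBoost regressor used in the talk
gbr = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("GBR R^2:", gbr.score(X_test, y_test))
```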

Next we explore model-specific ways to understand each model's predictions: (1) for the linear model we examine the beta coefficients and weight effects; (2) for the XGBoost regressor we examine metrics like gain and cover. Finally we move to model-agnostic methods such as (1) partial dependence (PDP) and individual conditional expectation (ICE) plots, (2) permutation importance and (3) SHAP values. We describe the pros and cons of each method. We do not focus on the underlying theory but rather use the concrete use case to highlight their strengths and limitations.
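Two of the model-agnostic methods above are available directly in scikit-learn's inspection module, sketched here on a synthetic placeholder model and dataset (SHAP values would need the third-party shap package and are omitted):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence, permutation_importance

# Placeholder data: feature 0 drives the target, feature 2 is pure noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 3))
y = 3000 * X[:, 0] - 1000 * X[:, 1] + rng.normal(0, 50, 300)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# (1) partial dependence of the prediction on feature 0;
# kind="individual" would give the per-observation ICE curves instead
pd_result = partial_dependence(model, X, features=[0], kind="average")
print("PDP curve shape:", pd_result["average"].shape)

# (2) permutation importance: score drop when a feature is shuffled
perm = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("mean importances:", perm.importances_mean.round(3))
```

The permutation importances should rank feature 0 well above the noise feature, which is exactly the kind of sanity check these diagnostics provide.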

This talk is based on the article https://juanitorduz.github.io/interpretable_ml/, where all the code to reproduce the plots and results is provided.

Two great references on the subject are:

  • Interpretable Machine Learning, A Guide for Making Black Box Models Explainable by Christoph Molnar
  • Interpretable Machine Learning with Python by Serg Masís