Friday 15:30–17:00 in Tower Suite 1

Understanding and diagnosing your machine-learning models

Gaël Varoquaux

Audience level:
Intermediate

Description

Given a predictive model, questions immediately arise: How can this prediction be improved? What drives it? Can we make changes to the system based on the predictions? All these questions require understanding how good the model's predictions are, and how the model predicts.

This tutorial will focus on statistics and interpretation rather than improving prediction.

Abstract

Often achieving a good prediction is only half of the job. Questions immediately arise: How can the prediction be improved? What drives the prediction? Can we make changes to the system based on the predictions? All these questions require understanding how good the model's predictions are, and how the model predicts.

This tutorial assumes basic knowledge of scikit-learn. It will focus on statistics, tests, and interpretation rather than on improving the prediction. Below is a tentative outline.

Understanding how well a classifier predicts

Metrics to judge the success of a classifier

There are many metrics, both for regression (R² score, mean squared error, mean absolute error) and for classification (zero-one accuracy, area under the ROC curve, area under the precision-recall curve). I will explain the pros and cons of each in terms of interpretation.
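As a concrete illustration, here is a minimal sketch computing the classification metrics with scikit-learn on synthetic data (the dataset and model choices are illustrative); the regression metrics follow the same pattern with r2_score, mean_squared_error, and mean_absolute_error from sklearn.metrics:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import (accuracy_score, average_precision_score,
                                 roc_auc_score)
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    scores = clf.predict_proba(X_test)[:, 1]  # scores for the positive class

    print("zero-one accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    print("area under the ROC curve:", roc_auc_score(y_test, scores))
    print("area under the PR curve:", average_precision_score(y_test, scores))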

Cross-validation: some gotchas

The variance of measured accuracy

Confounding effects and non-independence

Permutation to measure chance
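To make the first and last of these points concrete, here is a minimal sketch (on synthetic data) of measuring the fold-to-fold variance of cross-validated accuracy and comparing the observed score to chance with a permutation test:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, permutation_test_score

    X, y = make_classification(n_samples=200, random_state=0)
    clf = LogisticRegression()

    # Cross-validated scores vary from fold to fold: report the spread,
    # not just the mean
    scores = cross_val_score(clf, X, y, cv=5)
    print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))

    # Refitting on permuted labels gives the distribution of scores
    # expected by chance, and hence a p-value
    score, perm_scores, pvalue = permutation_test_score(
        clf, X, y, cv=5, n_permutations=100)
    print("p-value against chance: %.3f" % pvalue)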

Underfit vs overfit: do I need more data, or more complex models?

Train error versus test error

Learning curves

Tuning curves
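Both curves are available in scikit-learn; a minimal sketch on synthetic data, with plotting omitted:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import learning_curve, validation_curve
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, random_state=0)

    # Learning curve: scores as a function of the amount of training data.
    # Test scores still rising at the right of the curve -> more data helps.
    sizes, train_scores, test_scores = learning_curve(
        SVC(), X, y, cv=5, train_sizes=np.linspace(0.1, 1.0, 5))
    print("train sizes:", sizes)
    print("mean test accuracy:", test_scores.mean(axis=1))

    # Tuning (validation) curve: scores as a function of a hyper-parameter.
    # A large train/test gap signals overfit; low scores on both, underfit.
    train_scores, test_scores = validation_curve(
        SVC(), X, y, param_name="C", param_range=np.logspace(-3, 3, 7), cv=5)
    print("mean test accuracy per C:", test_scores.mean(axis=1))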

Understanding why a classifier predicts

Black-box interpretation of models: LIME

LIME (https://marcotcr.github.io/lime/) can be used to understand which features locally drive the predictions of a model.
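A minimal sketch of LIME on tabular data (this assumes the lime package is installed, e.g. with pip install lime; the dataset and classifier choices are illustrative):

    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data, feature_names=data.feature_names,
        class_names=data.target_names)

    # Explain one prediction: which features locally push it up or down?
    explanation = explainer.explain_instance(
        data.data[0], clf.predict_proba, num_features=4)
    print(explanation.as_list())  # (feature condition, weight) pairs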

Interpreting linear models

Conditional versus marginal relations (and the link to univariate feature selection)

The challenge of correlated features

Gauging significance of observed associations

The effect of regularization
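A minimal sketch of these points on synthetic correlated features (the data-generating choices are illustrative): the marginal, univariate view attributes signal to both features, while the conditional view of the model splits the credit, and regularization shrinks and redistributes the coefficients.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.RandomState(0)
    x1 = rng.randn(300)
    x2 = x1 + .5 * rng.randn(300)  # x2 strongly correlated with x1
    y = x1 + rng.randn(300)        # only x1 truly drives y
    X = np.column_stack([x1, x2])

    # Marginal view: both features appear associated with y
    print("marginal correlations:", np.corrcoef(X.T, y)[-1, :2])

    # Conditional view: the model shares credit between correlated
    # features; regularization shrinks and redistributes coefficients
    for alpha in (.01, 1, 100):
        coef = Ridge(alpha=alpha).fit(X, y).coef_
        print("alpha=%g -> coefficients: %s" % (alpha, np.round(coef, 2)))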

Interpreting random forests

How random forests make their decisions, and how feature importances can be interpreted.
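A minimal sketch reading the impurity-based importances of a fitted forest (the dataset choice is illustrative):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(data.data, data.target)

    # Impurity-based importances sum to one; note that correlated
    # features share (and thus dilute) each other's importance
    for name, imp in zip(data.feature_names, forest.feature_importances_):
        print("%-20s %.3f" % (name, imp))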

Partial dependence plots
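For instance, with a recent scikit-learn that exposes sklearn.inspection (older releases shipped a plot_partial_dependence helper instead), a minimal sketch on an illustrative regression dataset:

    import matplotlib.pyplot as plt
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import PartialDependenceDisplay

    data = load_diabetes()
    est = GradientBoostingRegressor(random_state=0).fit(data.data, data.target)

    # Average effect of each chosen feature on the prediction,
    # marginalizing over the remaining features
    PartialDependenceDisplay.from_estimator(
        est, data.data, features=[2, 8], feature_names=data.feature_names)
    plt.show()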
