Tuesday 10:15 AM–11:00 AM in The Trojan Ballroom / ML

Measuring Model Fairness

Stephen Hoover

Audience level:
Intermediate

Description

Machine learning models are increasingly used to make decisions that affect people’s lives. With this power comes a responsibility to ensure that model predictions are fair. In this talk I’ll introduce several common model fairness metrics, discuss their tradeoffs, and finally demonstrate their use with a case study analyzing anonymized data from one of Civis Analytics’s client engagements.

Abstract

When machine learning models make decisions that affect people’s lives, how can you be sure those decisions are fair? What does it even mean for an algorithm to be “fair”? As machine learning becomes more prevalent in socially impactful domains like policing, lending, and education, these questions take on new urgency.

In this talk I’ll introduce several common metrics that measure the fairness of model predictions. Next, I’ll relate these metrics to different notions of fairness and show how the context in which a model is used determines which metrics (if any) are applicable. Finally, I’ll illustrate the difficulties of applying fairness measures to real-world problems with a case study using data from one of Civis Analytics’s client engagements.
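
To make the flavor of these metrics concrete, here is a minimal sketch (illustrative only, not material from the talk) of two commonly discussed fairness metrics for a binary classifier: demographic parity difference and equal opportunity difference. The arrays y_true, y_pred, and group are hypothetical placeholders for true labels, hard 0/1 predictions, and a binary protected-group indicator.

    import numpy as np

    def demographic_parity_difference(y_pred, group):
        # Difference in positive-prediction rates between the two groups.
        # Zero means both groups receive positive predictions equally often.
        return y_pred[group == 0].mean() - y_pred[group == 1].mean()

    def equal_opportunity_difference(y_true, y_pred, group):
        # Difference in true positive rates between the two groups.
        # Zero means actual positives are identified equally often in each group.
        tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
        tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
        return tpr_0 - tpr_1

    # Toy data, for illustration only: binary labels, hard 0/1 predictions,
    # and a binary protected-group indicator.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)
    y_pred = rng.integers(0, 2, size=1000)
    group = rng.integers(0, 2, size=1000)

    print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
    print("Equal opportunity difference:", equal_opportunity_difference(y_true, y_pred, group))

A value near zero for the first metric means both groups receive favorable predictions at similar rates; a value near zero for the second means the model recovers actual positives equally well in both groups. A model generally cannot satisfy every such criterion at once, which is one source of the tradeoffs the talk discusses.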

Takeaways

By the end of this talk, you should be familiar with several common fairness metrics, be aware of the tradeoffs between them, and understand the subtleties of applying these metrics to real-world problems.

Intended audience

This talk is for data scientists or researchers who are training models whose decisions can affect people. It will be accessible to anyone with an understanding of binary classification problems.
