Saturday 13:30–14:15 in Tower Suite 1

Evaluating fairness in machine learning with PyMC3

Oliver Laslett

Machine learning and data science applications can be unintentionally biased unless care is taken to evaluate their effects on different sub-populations. With a deliberately "fair" approach, however, machine decision making can potentially be less biased than human decision makers.


In this talk we will present various approaches for evaluating the fairness of machine learning algorithms. We measure how protected variables, which should not influence decision making, affect a model's output. As an example, we demonstrate a Bayesian model of fairness constructed using PyMC3 and apply it to open datasets.
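The core idea, checking whether a protected variable is associated with a model's decisions, can be sketched without the full PyMC3 machinery described in the talk. Below is a minimal, hypothetical Beta-Binomial example in NumPy/SciPy: the group labels, counts, and the loan-approval framing are all invented for illustration, and the talk's actual model is more elaborate.

```python
import numpy as np
from scipy import stats

# Hypothetical loan-approval outcomes for two sub-populations defined by a
# protected variable (group A vs. group B). Counts are invented for
# illustration: (approvals, applications).
approvals_a, total_a = 78, 100
approvals_b, total_b = 62, 100

# Beta-Binomial model: with a flat Beta(1, 1) prior, the posterior over each
# group's approval rate is Beta(approvals + 1, rejections + 1).
posterior_a = stats.beta(approvals_a + 1, total_a - approvals_a + 1)
posterior_b = stats.beta(approvals_b + 1, total_b - approvals_b + 1)

# Monte Carlo estimate of P(rate_A > rate_B): a value far from 0.5 suggests
# the protected variable is associated with the decision.
rng = np.random.default_rng(0)
samples_a = posterior_a.rvs(10_000, random_state=rng)
samples_b = posterior_b.rvs(10_000, random_state=rng)
p_a_greater = (samples_a > samples_b).mean()
print(f"P(approval rate A > approval rate B) = {p_a_greater:.3f}")
```

In PyMC3 the same comparison would be expressed as priors and likelihoods inside a `pm.Model()` context and sampled with `pm.sample()`, which also extends naturally to regression-style models with many covariates.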
