Friday 9:15 AM–10:00 AM in Main Room, Tutorial Room

Responsible AI Practices: Fairness in ML

Alex Hanna

Audience level:
Novice

Description

This talk will highlight recent work and recommended practices for building AI that's fair and inclusive.

Abstract

The development of AI is creating new opportunities to improve the lives of people around the world. It is also raising new questions about the best way to build fairness, interpretability, privacy, security, and other moral and ethical values into these systems. This talk will highlight recent work and recommended practices for building AI that's fair and inclusive. Starting from Google's AI Principles, this talk will provide an overview of the types of bias that can become embedded in machine learning systems. We will discuss how to design your model with concrete goals for fairness and inclusion, why representative datasets matter for training and testing models, how to check a system for unfair biases, and how to analyze performance.
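As one concrete illustration of the kind of check the abstract mentions (not material from the talk itself), the sketch below compares per-group true positive rates for a binary classifier, a common way to surface disparities across a sensitive attribute. The data, group labels, and function name here are hypothetical.

```python
# A minimal, illustrative sketch of one common fairness check:
# comparing per-group true positive rates of a binary classifier.

from collections import defaultdict

def true_positive_rate_by_group(y_true, y_pred, groups):
    """Return the true positive rate (recall) for each group."""
    positives = defaultdict(int)       # actual positives per group
    true_positives = defaultdict(int)  # correctly predicted positives per group
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            positives[group] += 1
            if pred == 1:
                true_positives[group] += 1
    return {g: true_positives[g] / positives[g] for g in positives}

# Hypothetical evaluation data: labels, model predictions, and a group attribute.
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = true_positive_rate_by_group(y_true, y_pred, groups)
print(rates)  # e.g. {'a': 0.67, 'b': 0.5} -- a gap worth investigating
```

A large gap between groups on a metric like this is a signal to revisit the training data and the model's fairness goals, which is the workflow the abstract outlines.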
