The development of AI is creating new opportunities to improve the lives of people around the world. It is also raising new questions about how best to build fairness, interpretability, privacy, security, and other moral and ethical values into these systems. This talk will highlight recent work and recommended practices for building AI that's fair and inclusive. Starting from Google's AI Principles, it will provide an overview of the types of bias that can become embedded in machine learning systems. We will discuss how to design your model with concrete goals for fairness and inclusion, the importance of using representative datasets to train and test models, how to check a system for unfair biases, and how to analyze performance across groups.
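One way to make "checking a system for unfair biases" concrete is to compare a simple fairness metric across groups. The sketch below, which is illustrative and not a method endorsed by the talk, computes per-group positive-prediction rates and the gap between them (a demographic-parity check); the predictions and group labels are hypothetical.

```python
# Minimal sketch: checking a binary classifier for one kind of unfair bias
# by comparing positive-prediction rates across groups (demographic parity).
# All data below is hypothetical, purely for illustration.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the fraction of positive predictions for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs (1 = positive decision) and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = positive_rate_by_group(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)  # per-group positive-prediction rates
print(gap)    # demographic-parity gap; a value near 0 means similar rates
```

Demographic parity is only one of several possible fairness criteria (others include equalized odds and equal opportunity), and choosing among them is exactly the kind of concrete fairness goal the talk discusses.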