The issue of inequalities and biases in AI systems has gained enormous attention recently, both in the scientific and technical community and in the general public and media. This hands-on tutorial will take attendees through the end-to-end process of developing bias-free AI models using real-life datasets, Python machine learning packages, and open-source Python fairness libraries.
The tutorial will introduce various types of unwanted bias and algorithmic fairness. Then, using samples of working code (Jupyter notebooks) and several real-life datasets from domains such as credit approval, prison sentencing, and healthcare, participants will be led through the process of building fair AI models by:
• measuring model bias using a variety of fairness metrics
• mitigating bias via several techniques, including pre-processing of data, changing the algorithm itself, or post-processing of results
• defining bias policies that allow users to specify what bias means with respect to a particular application
• using bias explainers to articulate the results of bias checks and mitigation
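To give a flavor of the first step, the sketch below computes one common fairness metric, the demographic parity difference (the gap in positive-prediction rates between a privileged and an unprivileged group), in plain NumPy. The function name, the group encoding, and the toy data are illustrative assumptions, not the tutorial's actual notebook code.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Gap in positive-prediction rates between the privileged group
    (sensitive == 1) and the unprivileged group (sensitive == 0).
    A value of 0 means the classifier satisfies demographic parity."""
    y_pred = np.asarray(y_pred, dtype=float)
    sensitive = np.asarray(sensitive)
    rate_privileged = y_pred[sensitive == 1].mean()
    rate_unprivileged = y_pred[sensitive == 0].mean()
    return rate_privileged - rate_unprivileged

# Toy example: 4 privileged and 4 unprivileged loan applicants.
# Privileged group is approved at 3/4 = 0.75, unprivileged at 1/4 = 0.25.
y_pred    = [1, 1, 1, 0, 1, 0, 0, 0]
sensitive = [1, 1, 1, 1, 0, 0, 0, 0]
print(demographic_parity_difference(y_pred, sensitive))  # 0.5
```

Open-source fairness libraries expose this and many related metrics (equal opportunity, disparate impact, and others) out of the box; the point of writing it by hand here is only to show how simple the underlying quantity is.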
For model building, we will use Python’s Scikit-learn, Pandas, and NumPy libraries. Bias detection, mitigation, and explanation will be performed with open-source Python fairness libraries.