Sunday 10:15–11:00 in Tower Suite 3

Interpretable AI or How I Learned to Stop Worrying and Trust AI

Ajay Thampi

Audience level:
Intermediate

Description

One of the main reasons companies are holding back the deployment of AI across the enterprise is trust: the lack of understanding of complex machine-learned models is hugely problematic. In this tutorial, I will cover various data science techniques that have been successfully applied at Microsoft to gain customer trust and improve model understanding.

Abstract

In the last five years alone, AI researchers have made significant breakthroughs in areas such as image recognition, natural language understanding and board games. As companies consider handing over critical decisions to AI in industries like healthcare and finance, the lack of understanding of complex machine-learned models is hugely problematic. This lack of understanding erodes trust and, in my experience, is one of the main reasons companies are resisting the deployment of AI across the enterprise. Moreover, the GDPR "right to explanation" now creates a regulatory expectation in the EU that AI decisions can be explained. In this talk, I will cover various techniques that you can add to your data science arsenal to improve model understanding. These techniques include partial dependence plots (PDPs), LIME, SHAP and representation learning.
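To give a concrete feel for one of the techniques named above, here is a minimal sketch (not taken from the talk materials) of a partial dependence plot built with scikit-learn; the dataset, model and feature choices are illustrative assumptions only.

    # Sketch: partial dependence plot (PDP) with scikit-learn.
    # Dataset, model and features are illustrative, not from the talk.
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import PartialDependenceDisplay
    import matplotlib.pyplot as plt

    # Fit an opaque model on a small tabular dataset.
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = GradientBoostingRegressor(random_state=0).fit(X, y)

    # A PDP shows the average model prediction as one feature is varied,
    # revealing that feature's marginal effect on the output.
    PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "s5"])
    plt.show()

The same fitted model could also be passed to libraries such as LIME or SHAP for local, per-prediction explanations, which the talk covers alongside PDPs.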
