One of the main reasons companies hesitate to deploy AI across the enterprise is trust: the lack of understanding of complex machine-learned models is hugely problematic. In this tutorial, I will cover data science techniques that have been successfully applied at Microsoft to gain customer trust and improve model understanding.
In the last five years alone, AI researchers have made significant breakthroughs in areas such as image recognition, natural language understanding, and board games. As companies consider handing critical decisions to AI in industries like healthcare and finance, the lack of understanding of complex machine-learned models is hugely problematic. This lack of understanding erodes trust and, in my experience, is one of the main reasons companies resist deploying AI across the enterprise. Moreover, EU regulation now calls for explainable AI under the GDPR's "right to explanation". In this tutorial, I will cover techniques that you can add to your data science arsenal to improve model understanding, including partial dependence plots (PDPs), LIME, SHAP, and representation learning.
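To make two of these names concrete, here is a minimal sketch of partial dependence plots and SHAP attributions. It assumes scikit-learn and the shap package, and uses a toy gradient-boosted model on a public dataset; it illustrates the general techniques rather than any code from the tutorial itself.

```python
# Minimal sketch: a PDP and SHAP attributions for an opaque model.
# Assumes scikit-learn, shap, and matplotlib are installed.
import matplotlib.pyplot as plt
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Fit a hard-to-interpret model on a small public regression dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Partial dependence plot: the model's average predicted response
# as the "bmi" feature varies, marginalizing over the other features.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi"])
plt.show()

# SHAP: additive per-feature attributions for each individual prediction,
# computed efficiently for tree ensembles by TreeExplainer.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
```

The two views are complementary: the PDP describes the model's average behaviour as one feature varies, while SHAP decomposes each individual prediction into per-feature contributions.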