Thursday October 28 10:30 AM – Thursday October 28 11:00 AM in Talks II

Interpretable ML models at scale

Aishwarya Agrawal

Prior knowledge:
Previous knowledge expected
Python, Machine Learning

Summary

In this talk a participant can expect to understand the following:

  1. Building a self-service interpretable ML framework for stakeholders
  2. Incorporating feedback and AutoML workflows
  3. Interpretable ML supporting early data/concept drift detection

This talk will take a deep dive into the thought process behind the system's design, its applications, and the importance of designing such a system.

Description

  • Building an interpretable system
  • Summarized explanations of the predictions
  • Running multiple inference models in production
  • Lessons learnt while optimizing interpretable AI at scale

With the growing use of machine learning models across business applications, the need for explainable ML inference is more evident than ever. Reducing risk, ensuring ethical ML, avoiding bias, and debugging bad predictions are commonly cited reasons for model interpretability. Making a model interpretable, i.e. approachable to end users, makes it more acceptable to anyone involved in or affected by the algorithm. This is even more critical for ML models built for compliance-heavy domains with low tolerance for false positives, such as healthcare.
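As a rough illustration of what a per-prediction explanation can look like (a minimal hypothetical sketch, not the actual framework described in this talk), a linear model's decision score can be decomposed exactly into per-feature contributions relative to a baseline input:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic labels

model = LogisticRegression().fit(X, y)

def explain(model, x, baseline):
    # For a linear model, coef * (value - baseline) is each feature's
    # exact contribution to the shift in the decision score.
    return model.coef_[0] * (x - baseline)

baseline = X.mean(axis=0)  # an "average case" reference point
contribs = explain(model, X[0], baseline)

# The contributions sum exactly to the change in decision score
delta = (model.decision_function(X[:1]) - model.decision_function(baseline[None, :]))[0]
```

For non-linear models, approaches like SHAP or LIME provide analogous per-prediction attributions, at the cost of approximation.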

In some cases this is not just a drive toward being more ethical but a need of the business itself: we need to be able to show where decisions are being made and how they are being made.

Episource’s machine learning and NLP platform serves a wide variety of ML-based solutions and provides ML inference on healthcare data. Since these models directly impact business workflows and revenue outcomes, stakeholders deem the explanation for each inference as important as the inference itself.
