Friday, October 29, 9:00 AM – 9:30 AM in Talks II

Counterfactual Analysis for Explainable AI

Shashank Shekhar

Prior knowledge:
No previous knowledge expected

Summary

AI models are becoming increasingly complex, and so is the need to explain them. Counterfactual Analysis (CFA) explores outcomes that did not actually occur but could have occurred under a different set of conditions. In this talk, I will discuss the theoretical aspects of CFA, state-of-the-art algorithms, and its relationship with feature attribution methods such as Shapley values.

Description

The talk will be delivered in three parts.

Part I: Introduction to Counterfactual Analysis - 10 minutes

This part discusses the following concepts:

  • The definition of counterfactual explanations and their significance
  • Desirable properties for actionable counterfactual generation and the related optimization challenges (illustrated in the sketch after this list)
  • A brief discussion of counterfactual (CF) explanation generation methods
  • An introduction to state-of-the-art algorithms
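
As a concrete illustration of the optimization view above, here is a minimal sketch that casts counterfactual search as minimizing a prediction loss plus a proximity penalty, in the spirit of Wachter et al. (2017). The toy dataset, model, and the weight `lam` are illustrative choices of mine, not part of the talk.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy model to explain.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0]                                          # query instance
target = 1 - model.predict(x.reshape(1, -1))[0]   # desired (opposite) class
lam = 0.1                                         # proximity weight

def objective(x_cf):
    # Squared loss pushing the target-class probability towards 1,
    # plus an L2 penalty keeping the counterfactual close to x.
    p_target = model.predict_proba(x_cf.reshape(1, -1))[0, target]
    return (1.0 - p_target) ** 2 + lam * np.linalg.norm(x_cf - x)

res = minimize(objective, x, method="Nelder-Mead")
print("original class:      ", model.predict(x.reshape(1, -1))[0])
print("counterfactual class:", model.predict(res.x.reshape(1, -1))[0])
print("feature changes:     ", res.x - x)
```

With `lam` small, the search favors flipping the predicted class; larger values keep the counterfactual closer to the query at the risk of not flipping it at all. This is the validity-versus-proximity trade-off behind the optimization challenges above.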

Part II: Introduction to CFA tools - 10 minutes

In this part, I will walk through a use case using state-of-the-art tools. The following tools will be introduced:

  • DiCE - A tool from Microsoft Research. Unlike many methods, it can generate multiple diverse CF instances per query and lets users control which features may be modified (see the first sketch after this list).
  • Alibi - A general-purpose explainable AI (XAI) library. We will briefly discuss the counterfactuals guided by class prototypes that it provides (see the second sketch after this list).
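
Below is a hedged sketch of the DiCE workflow. The toy data, model, and column names are placeholders of mine; the `dice_ml` calls follow the package's documented pattern, though argument names can vary across versions.

```python
import dice_ml
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data with two continuous features and a binary outcome.
X, y = make_classification(n_samples=300, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
df = pd.DataFrame(X, columns=["f1", "f2"])
df["label"] = y
clf = RandomForestClassifier(random_state=0).fit(df[["f1", "f2"]], df["label"])

# Wrap the data and model in DiCE's interfaces.
d = dice_ml.Data(dataframe=df, continuous_features=["f1", "f2"],
                 outcome_name="label")
m = dice_ml.Model(model=clf, backend="sklearn")
exp = dice_ml.Dice(d, m, method="random")

# Ask for several counterfactuals for one query row, and restrict which
# features DiCE is allowed to change.
cf = exp.generate_counterfactuals(df[["f1", "f2"]].iloc[[0]],
                                  total_CFs=3,
                                  desired_class="opposite",
                                  features_to_vary=["f2"])
cf.visualize_as_dataframe(show_only_changes=True)
```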
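And a hedged sketch of Alibi's prototype-guided counterfactual method. Older Alibi releases spell the class `CounterFactualProto`, arguments differ between versions, and the explainer runs on TF1-style graphs, so eager execution is disabled first; everything apart from the Alibi calls is a toy setup of my own.

```python
import numpy as np
import tensorflow as tf
from sklearn.datasets import make_classification
from alibi.explainers import CounterfactualProto

tf.compat.v1.disable_eager_execution()  # Alibi's CF explainers use TF1 graphs

# Toy classifier to explain.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X = X.astype(np.float32)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=5, verbose=0)

# Prototype-guided counterfactual search; k-d trees stand in for an encoder.
cf = CounterfactualProto(model, shape=(1, 4), use_kdtree=True, theta=10.0,
                         max_iterations=500,
                         feature_range=(X.min(axis=0).reshape(1, -1),
                                        X.max(axis=0).reshape(1, -1)))
cf.fit(X)                                  # build per-class prototypes
explanation = cf.explain(X[0:1])
if explanation.cf is not None:             # the search can fail to find a CF
    print("CF instance:", explanation.cf["X"])
    print("CF class:   ", explanation.cf["class"])
```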

Part III: Research Directions - 10 minutes

In this part, I will briefly discuss further research directions in CFA and two related XAI paradigms:

  • The relationship between counterfactual explanation (CFE) methods and feature attribution methods such as Shapley values and LIME (contrasted in the sketch after this list).
  • Auditing the ethical aspects of AI systems using CFEs.
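
To make the first direction concrete, here is a minimal, hedged sketch contrasting the two paradigms on one model: a feature attribution method (Shapley values via the `shap` package) answers "which features drove this prediction?", while a counterfactual answers "what would need to change to flip it?". The data and model are toy placeholders of mine.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy model shared between both explanation paradigms.
X, y = make_classification(n_samples=300, n_features=4, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# Feature attribution: Shapley values say how much each feature contributed
# to the prediction for this query instance.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X[:1])
print(shap_values)

# A counterfactual for the same instance (e.g. via DiCE, Part II) would
# instead report the smallest feature changes that flip the predicted class:
# two complementary answers to "why this prediction?".
```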