Friday November 12, 11:35 – 12:10 in Auditorium

Explainable causal inference results

Thomas Nägele

Prior knowledge:
No previous knowledge expected

Summary

Bayesian networks (BNs), a widely used formalism for modelling causal relations, can become very large in real-life use cases: on the order of thousands or even millions of nodes. Understanding the inference results computed by such models is not a trivial task. This talk will explain how to present inference results in an explainable way by exploiting properties of BNs.

Description

A Bayesian network is a graph representation of a joint probability distribution over a number of variables of interest. Once the network is instantiated with prior probabilities and observations, the inference algorithm updates the probabilities of all variables, i.e., nodes in the graph. In large networks it is difficult to understand which observations are relevant for the inferred probability of a node of interest. To obtain the set of observations relevant to such a node, we compute an extended Markov blanket for it. This provides the user with a subgraph containing only the relevant observations, thereby supporting the explanation of the inferred probabilities.
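
To give a feel for the idea, the minimal sketch below computes a standard Markov blanket (parents, children, and co-parents) on a toy DAG with networkx, and then grows it through unobserved nodes until it is bounded by observed variables. The function names and the exact expansion rule are illustrative assumptions; the talk's definition of the extended Markov blanket may well differ.

import networkx as nx

def markov_blanket(dag, node):
    # Standard Markov blanket: parents, children, and the children's other parents.
    parents = set(dag.predecessors(node))
    children = set(dag.successors(node))
    co_parents = {p for c in children for p in dag.predecessors(c)} - {node}
    return parents | children | co_parents

def extended_markov_blanket(dag, node, observed):
    # Illustrative extension (an assumption, not the speaker's exact method):
    # keep expanding the blanket through unobserved nodes until the relevant
    # subgraph is bounded by observed variables.
    frontier = {node}
    relevant = set()
    while frontier:
        current = frontier.pop()
        for neighbour in markov_blanket(dag, current):
            if neighbour in relevant:
                continue
            relevant.add(neighbour)
            if neighbour not in observed:
                frontier.add(neighbour)
    return relevant - {node}

# Toy DAG: A -> B -> D <- C, D -> E; query node D, observations on A and E.
dag = nx.DiGraph([("A", "B"), ("B", "D"), ("C", "D"), ("D", "E")])
print(extended_markov_blanket(dag, "D", observed={"A", "E"}))

On this toy example the result is {"A", "B", "C", "E"}: the observed nodes A and E plus the unobserved nodes that connect them to D, which is exactly the kind of reduced subgraph the method uses to explain an inferred probability.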

The talk consists of 1) an introduction to Bayesian networks and some of the available BN Python libraries, 2) a description of our method for making inference results more explainable, and 3) some examples showing both the method's capabilities and its limitations. The talk does not require any prior knowledge, but a rough understanding of probability theory and graphs helps. After the talk, the audience will be familiar with Bayesian networks and with how these can be scoped based on relevance. A small inference example is sketched below for anyone who wants to experiment beforehand.
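
The sketch uses pgmpy, one commonly used Python BN library; the talk does not say which libraries it covers, so treat this purely as an illustration of basic BN inference. Depending on the installed pgmpy version, the model class may be named BayesianNetwork or DiscreteBayesianNetwork.

from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Tiny example network: Rain and Sprinkler both influence WetGrass.
model = BayesianNetwork([("Rain", "WetGrass"), ("Sprinkler", "WetGrass")])
model.add_cpds(
    TabularCPD("Rain", 2, [[0.8], [0.2]]),
    TabularCPD("Sprinkler", 2, [[0.6], [0.4]]),
    TabularCPD("WetGrass", 2,
               [[0.99, 0.10, 0.20, 0.01],   # P(WetGrass=0 | Rain, Sprinkler)
                [0.01, 0.90, 0.80, 0.99]],  # P(WetGrass=1 | Rain, Sprinkler)
               evidence=["Rain", "Sprinkler"], evidence_card=[2, 2]),
)
model.check_model()

# Exact inference: posterior probability of Rain after observing wet grass.
inference = VariableElimination(model)
print(inference.query(["Rain"], evidence={"WetGrass": 1}))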