Friday 12:00–12:30 in Track 3

Bayesian A/B Testing

Marc Garcia

Audience level:
Novice

Description

A/B testing is a controlled experiment in which a candidate improvement challenges the current version of a product. It is the most common approach to improving websites and their conversion rates.

Abstract

A/B testing is a controlled experiment in which a candidate improvement challenges the current version of a product. It is the most common approach to improving websites and their conversion rates. In an A/B test, half of the users are kept on the current version as a control group, while the other, randomly selected half is presented with the challenger version. One of the groups is expected to perform better than the other, but the important question is: is the difference in performance caused by the differences between the versions, or by randomness?
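The setup can be sketched with a tiny simulation; the conversion rates, sample size, and seed below are illustrative assumptions, not data from any real experiment:

```python
import random

random.seed(42)

def simulate_group(n, conversion_rate):
    """Simulate n user visits, each converting with the given probability."""
    return sum(random.random() < conversion_rate for _ in range(n))

# Hypothetical rates: control (A) converts at 10%, challenger (B) at 12%.
n = 1000
conversions_a = simulate_group(n, 0.10)
conversions_b = simulate_group(n, 0.12)

print(f"A: {conversions_a}/{n}  B: {conversions_b}/{n}")
```

Even with identical rates, the two counts would rarely match exactly, which is why the observed difference alone cannot settle the question.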

While the question may sound simple, the widely used approach of statistical significance testing is tricky and confusing. Several parameters need to be decided in advance, and counter-intuitive statistics based on null hypotheses and p-values need to be computed.
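As one illustration of this frequentist machinery, a two-proportion z-test (a common choice for comparing conversion rates; the counts below are made up) can be written with the standard library alone:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

z, p = two_proportion_z_test(100, 1000, 130, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The p-value answers an indirect question, namely how surprising the observed counts would be if the two versions were identical, which is part of what makes the approach counter-intuitive.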

But there is an alternative: Bayesian statistics. With simple techniques such as Thompson sampling, the problem can be implemented as an artificial intelligence system that manages the uncertainty in the data and adapts to it, automatically making the optimal decision for us.
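A minimal sketch of Thompson sampling for two variants, assuming hypothetical true conversion rates and Beta(1, 1) priors:

```python
import random

random.seed(0)

# Hypothetical true conversion rates (unknown to the algorithm)
true_rates = {"A": 0.10, "B": 0.12}

# Beta posterior parameters per variant, starting from a Beta(1, 1) prior
alpha = {v: 1 for v in true_rates}  # observed conversions + 1
beta = {v: 1 for v in true_rates}   # observed non-conversions + 1

shown = {v: 0 for v in true_rates}

for _ in range(5000):
    # Sample a plausible conversion rate from each variant's posterior
    samples = {v: random.betavariate(alpha[v], beta[v]) for v in true_rates}
    chosen = max(samples, key=samples.get)  # show the most promising variant
    shown[chosen] += 1

    # Observe the (simulated) user's behaviour and update the posterior
    if random.random() < true_rates[chosen]:
        alpha[chosen] += 1
    else:
        beta[chosen] += 1

print(shown)  # in the long run, traffic shifts toward the better variant
```

Unlike a fixed 50/50 split, the allocation here adapts as evidence accumulates, so fewer users are sent to the weaker variant over time.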

This talk will cover an introduction to the two main schools of statistics, frequentist and Bayesian, and will show how to implement an A/B testing system based on each of them. A step-by-step simulation will be implemented in Python, so the audience can see how both systems perform and how they can be monitored.
