In this talk I will explain the idea of sampling your way to a model, and I will demonstrate it with examples. The goal is to start with a plain for loop and end with an understanding of how MCMC algorithms work.
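For a flavour of the starting point, here is a rough sketch of the kind of for-loop sampler I have in mind (the coin-flip example and the numbers in it are just an illustration, not the exact code from the talk):

```python
import numpy as np

# Observed data: 8 heads out of 10 coin flips.
n_flips, n_heads = 10, 8

# A plain for loop that does rejection sampling: propose a coin bias from the
# prior, simulate data with it, and keep the proposal only if it reproduces
# the observation.
rng = np.random.default_rng(42)
accepted = []
for _ in range(100_000):
    p = rng.uniform(0, 1)                       # draw a bias from the prior
    simulated_heads = rng.binomial(n_flips, p)  # simulate a dataset
    if simulated_heads == n_heads:              # keep draws that match the data
        accepted.append(p)

# The accepted draws approximate the posterior over the coin bias.
print(f"posterior mean ~ {np.mean(accepted):.2f} from {len(accepted)} kept samples")
```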
A lot of people are comfortable with the scikit-learn models of today's world but feel uneasy about the whole MCMC approach to training. Why are these algorithms different? Why do they use a sampler instead of a gradient method? It can feel a bit mysterious if you have never properly been introduced to this other way of thinking.
Along the way the audience will also get a proper introduction to PyMC3. In particular, I will discuss the following:
Parts of this talk are readily available on my blog:
Let me know if there are any questions. I am submitting multiple talks that I think are interesting and relevant to the PyData crowd, and I will gladly leave it to the committee to decide which of them (if any) are relevant to the local community.