Friday November 12, 15:10 – 15:45 in Auditorium

Dealing with the versioning of production-ready models

Corné Vriends

Prior knowledge:
No previous knowledge expected

Summary

Having a working model in production is a feat by itself. But what happens when the environment surrounding the model is continuously improved? At Eneco, we had to solve the interesting problem of keeping up with the blistering pace of development at Databricks, since we are not thrilled to let our environment (and models) slide into the realm of legacy. Interested to know how we solved it?

Description

In this talk we will start with what we do at Eneco and how we operate at scale with our smart thermostat Toon®. We will then discuss how Databricks and Python allow us to serve predictions from our models at scale, the good and the bad of this approach, and how maintaining this setup brings its own set of obstacles.
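The talk itself will walk through our actual setup; purely as an illustration of what "serving predictions at scale" with Databricks and Python can look like, here is a minimal sketch using MLflow's Spark UDF support. The model URI, table, and column names are hypothetical placeholders, not our pipeline:

```python
# Minimal sketch: batch-scoring an MLflow-logged model on a Spark DataFrame.
# The model URI, feature table, and column names are hypothetical.
import mlflow.pyfunc
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Wrap the logged model as a Spark UDF so predictions run across the cluster.
predict = mlflow.pyfunc.spark_udf(spark, model_uri="models:/toon_model/Production")

features = spark.table("thermostat_features")  # hypothetical feature table
scored = features.withColumn(
    "prediction",
    predict("temperature", "setpoint", "hour_of_day"),  # hypothetical columns
)
scored.write.mode("overwrite").saveAsTable("thermostat_predictions")
```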

In particular, we will delve into the specific problem we encounter in maintaining our models in production. Instead of focusing on the model-related aspects (e.g. model drift), we focus on the operational aspect: making sure the environment does not stay frozen at the version it had when the first iteration of the model was built. This ensures that the latest innovations from Databricks (and Spark) flow through to the environment that our Data Scientists and Data Engineers rely upon.
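One concrete knob behind "keeping the environment current" is the Databricks Runtime version pinned in every cluster or job definition. As a hedged sketch of bumping that pin through the public Clusters API (the workspace URL, token, cluster id, and versions below are hypothetical placeholders):

```python
# Sketch: bumping the pinned Databricks Runtime on an existing cluster via
# the Clusters API. Host, token, cluster_id, and versions are hypothetical.
import requests

HOST = "https://example.cloud.databricks.com"  # hypothetical workspace
TOKEN = "dapi..."                              # personal access token

resp = requests.post(
    f"{HOST}/api/2.0/clusters/edit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "cluster_id": "0123-456789-abcde",    # hypothetical cluster
        "cluster_name": "model-serving",
        "spark_version": "9.1.x-scala2.12",   # the runtime pin being bumped
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 4,
    },
)
resp.raise_for_status()
```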

We will cover several possible solutions to this problem, from a portable model format such as ONNX to complete containerization of each model in production using MLflow Projects. In the end, we will showcase what is, in our specific case and to our knowledge, the most pragmatic solution. This talk is addressed to anyone interested in the more operational aspects of ML at scale.
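To make the first option concrete: a portable format such as ONNX decouples the model artifact from the runtime that trained it, so the environment can move on independently. A minimal sketch, assuming a scikit-learn model and the skl2onnx and onnxruntime packages (the model and feature count are illustrative, not the models discussed in the talk):

```python
# Sketch: exporting a scikit-learn model to ONNX so it no longer depends on
# the exact training environment. Model and feature count are hypothetical.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

X, y = make_regression(n_samples=200, n_features=3, random_state=0)
model = RandomForestRegressor(n_estimators=10).fit(X, y)

# Declare the input signature; ONNX requires explicit tensor types.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 3]))]
)
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# The artifact can now be scored with onnxruntime, independent of the
# Databricks Runtime the model was trained on.
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
preds = session.run(None, {"input": X[:5].astype("float32")})[0]
```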