The recent advances in machine learning and artificial intelligence are amazing! Yet, to deliver real value within a company, data scientists must be able to get their models off of their laptops and deployed within the company's data pipelines and infrastructure. Those models must also scale to production-size data.

In this talk, I'll show how one-off experiments can be transformed into scalable ML pipelines with minimal effort. We will implement an end-to-end machine learning pipeline using Kubeflow and TensorFlow, then deploy both its training and inference in a scalable manner to a production cluster with Pachyderm, an open source framework for data versioning and processing. We will also learn how to update the production model online, track changes in our model and data, and explore our results.
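To give a flavor of what a pipeline step like this can look like, here is a minimal sketch of a training script in the style a Pachyderm pipeline might run. Pachyderm conventionally mounts input repos under /pfs/<repo> and versions anything written to /pfs/out; the synthetic data, model architecture, and hyperparameters below are illustrative assumptions, not the exact code from the talk.

```python
# train.py -- illustrative sketch only, not the talk's actual pipeline code.
# In a Pachyderm pipeline, input data appears under /pfs/<repo> and anything
# written to /pfs/out becomes a versioned output commit.
import os
import numpy as np
import tensorflow as tf

# Hypothetical mount points; OUTPUT_DIR falls back to an env var so the
# sketch can also run outside a Pachyderm container.
INPUT_DIR = os.environ.get("INPUT_DIR", "/pfs/data")
OUTPUT_DIR = os.environ.get("OUTPUT_DIR", "/pfs/out")

# Stand-in data: a real pipeline would read training files from INPUT_DIR.
X = np.random.rand(1000, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=2)

# Saving the model to OUTPUT_DIR makes it a versioned artifact that a
# downstream inference pipeline can pick up.
os.makedirs(OUTPUT_DIR, exist_ok=True)
model.save(os.path.join(OUTPUT_DIR, "model.h5"))
```

The reason this shape works for online updates is Pachyderm's data-driven execution: committing new data to the input repo triggers the pipeline to rerun, so retrained models show up as new versioned commits rather than manual redeployments.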