This tutorial will discuss the use of internal machine learning tooling in real self-driving car projects. We'll download a human-annotated image dataset from the Udacity Self-Driving Car Project and discuss how stakeholders would approach this data in a real company. Finally, we'll live-code a Streamlit app to semantically search and visualize the dataset and to run models against it.
We've all seen poor tooling slow down data science and machine learning projects. Many projects end up growing their own ecosystem of bug-ridden, unmaintainable internal tools for analyzing data, often a patchwork of Jupyter notebooks and Flask apps.
In this workshop, we'll discover a new workflow to write ML tools as Python scripts using Streamlit, the first app framework for ML engineers.
Part one will be a whirlwind tour of Streamlit, creating apps, UIs, and data caches. Then we'll download a human-annotated image dataset from the Udacity Self-Driving Car Project and explore it.
Part two will be about product management. We'll discuss how stakeholders would approach this data in a real self-driving car project. How would a machine learning engineer or product manager want to understand this data? We'll then live-code a Streamlit app to meet their needs.
Part three will get nerdier! We'll integrate an object detection model (YOLO v3) into our app and explore the tooling benefits of interactive inference.
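One way interactive inference pays off is letting a reviewer sweep a confidence threshold over a model's detections and watch the boxes appear and disappear. A minimal sketch of that filtering step, using hypothetical precomputed detections rather than a live YOLO v3 call:

```python
# Hypothetical YOLO-style detections: (label, confidence, (x, y, w, h)).
# In the real app these would come from running the model on a frame.
DETECTIONS = [
    ("car", 0.92, (10, 20, 50, 60)),
    ("pedestrian", 0.48, (80, 20, 30, 70)),
    ("truck", 0.31, (120, 40, 40, 50)),
]


def filter_detections(detections, threshold):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in detections if d[1] >= threshold]
```

In the Streamlit version, `threshold` would come from a slider (e.g. `st.slider("Confidence", 0.0, 1.0, 0.5)`); because Streamlit reruns the script on every interaction, re-filtering and redrawing the boxes is just a function call away.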
At the end of the workshop you will have (1) a beautiful demo to show off to friends, and (2) a new weapon to tackle tooling problems in your own projects.
See the GitHub repo: https://github.com/streamlit/demo-self-driving