Deploying a deep model on a mobile device for real-time detection is still far from trivial. Defining your Deep Learning architecture, gathering the right data, designing your training process, evaluating your models, and turning all of this into a pipeline that keeps everyone on the team (somewhat) sane - each of these steps has its pitfalls.
Deep Learning has gone through the hype phase where it seemed like a skeleton key, followed by a phase of despair for many who found the building blocks too esoteric and the training code and process too unreliable. Deploying on a device with tight hardware limitations adds that extra spice to the mix.
This talk addresses a very specific use case: preparing a Deep Neural Network for real-time detection in a mobile phone app. It is meant for hands-on engineers and data scientists who live in that area where writing scalable and testable code is every bit as important (and troublesome!) as understanding your loss function.
We will cover the different steps of the process, such as:

Defining a good model for you:
* Your device: it is what it is.
* Cargo cult, or: what is this layer doing and do we need it?

Gathering the right training data: who will be using your app, where, and how?

Training:
* Transfer learning as a small company's best friend (see the sketch after this list).
* Integrating different sources into your data pipeline.

Evaluating earlier rather than later.
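To give a flavour of the transfer-learning point, here is a minimal sketch of the idea, assuming a Keras/TensorFlow setup with an ImageNet-pretrained MobileNetV2 backbone; the class count and head layers are illustrative assumptions, not what the talk prescribes.

```python
# Minimal transfer-learning sketch: reuse a pretrained backbone,
# train only a small task-specific head on your own data.
import tensorflow as tf

NUM_CLASSES = 5  # hypothetical number of detection classes

# ImageNet-pretrained feature extractor, frozen so its weights are reused as-is.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

# Small trainable head on top of the frozen backbone.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the backbone keeps the number of trainable parameters small, which is exactly why transfer learning is so attractive when you do not have a large labelled dataset of your own.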
Depending on time and interest, we may also go over data augmentation and/or model persistence.
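On the model-persistence side, a minimal sketch of what "getting it onto the device" can look like: saving a trained Keras model and converting it for on-device inference with TensorFlow Lite. The `model` variable is assumed to be a trained Keras model (for example the one from the transfer-learning sketch above), the file names are placeholders, and the quantization flag is just one possible optimization.

```python
# Persist the trained model, then convert it for on-device inference.
import tensorflow as tf

model.save("detector.h5")  # keep the full model around for later retraining

# Convert to TensorFlow Lite, with default optimizations (e.g. weight quantization)
# to shrink the model for a hardware-constrained phone.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()

with open("detector.tflite", "wb") as f:
    f.write(tflite_bytes)
```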