Friday November 12, 14:05–14:40 in Auditorium

Applied AI on the edge

Maurits Kaptein

Prior knowledge:
No previous knowledge expected

Summary

While the potential of Machine Learning (ML) and Artificial Intelligence (AI) is widely recognized in various sectors (health, industry, commerce, etc.), regrettably many ML/AI projects do not make it past the Proof of Concept (PoC) stage. In this talk I will share a number of my own experiences with “failed” AI projects and examine their root causes.

Description

While the potential of Machine Learning (ML) and Artificial Intelligence (AI) is widely recognized in various sectors (health, industry, commerce, etc.), regrettably many ML/AI projects do not make it past the Proof of Concept (PoC) stage. In this talk I will share a number of my own experiences with “failed” AI projects (i.e., projects that easily passed the PoC stage but never made it into production), and I will examine the root causes of these failures. To do so, I will provide some background on the various types of AI models/projects that exist, explain in some detail how AI works, and discuss the common production/deployment patterns that companies use in their attempts to scale their ML/AI activities. Effectively, I will describe the AI deployment process from data collection, to AI model development, to model evaluation, and finally to large-scale model deployment. At each of these stages I will highlight the challenges involved and the common points of failure.
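To make these stages concrete, below is a minimal Python sketch of a typical PoC-stage workflow. The dataset, model, and metric are illustrative stand-ins (scikit-learn assumed), not taken from the talk itself:

    # Minimal PoC-stage workflow: data -> model development -> evaluation.
    # All choices (dataset, model, metric) are illustrative.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # 1. Data collection (stand-in: a bundled dataset).
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    # 2. Model development.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # 3. Model evaluation on held-out data.
    print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")

    # 4. Large-scale deployment is where many projects stall: the trained
    #    model now has to run reliably outside this notebook environment.

Note that steps 1–3 are exactly what a PoC demonstrates; the failures the talk discusses occur at step 4, which the sketch deliberately leaves as a comment.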

Next, I will turn to potential solutions. Although no single recipe scales every possible ML/AI application, efficient and effective deployment methods have recently been developed for a large class of applied AI/ML models. I will explain how deploying AI models on the edge (i.e., on the device itself rather than in the cloud) solves a number of common AI deployment problems. Furthermore, I will explain how modern technological advances enable the effective deployment of trained ML/AI models on edge devices despite the diversity in device types (e.g., different hardware, different computational constraints). Finally, I will argue that deployment on the edge makes applied AI more scalable, reduces the energy footprint of AI, improves user privacy, and reduces the operational costs of many AI applications.
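One common pattern for this kind of edge deployment is exporting a trained model to a portable format and running it locally with a small runtime. The talk does not name a specific toolchain; the sketch below assumes ONNX via skl2onnx and onnxruntime as one possible realization:

    # Hypothetical sketch of edge deployment: export a trained model to a
    # portable format (ONNX here; one of several possible toolchains) and
    # run it locally on-device instead of calling a cloud API.
    import numpy as np
    import onnxruntime as ort
    from skl2onnx import to_onnx
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Train a stand-in model (any trained model would do).
    X, y = load_breast_cancer(return_X_y=True)
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # Export once, centrally: the ONNX graph is independent of the training
    # framework, so heterogeneous edge hardware can execute the same model.
    onnx_bytes = to_onnx(model, X[:1].astype(np.float32)).SerializeToString()
    with open("model.onnx", "wb") as f:
        f.write(onnx_bytes)

    # On the edge device: inference stays local, so raw data never leaves
    # the device (privacy) and there are no per-request cloud costs.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name
    prediction = session.run(None, {input_name: X[:1].astype(np.float32)})[0]
    print("On-device prediction:", prediction)

The design choice this illustrates is the separation the talk builds towards: models are developed and exported centrally, while inference runs on the device itself, which is what yields the privacy, cost, and energy benefits claimed above.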