Cycling in central London is an extreme sport! To give myself the competitive edge, I've built a machine vision model, running on an Edge TPU, that keeps me informed in real time about dangerous vehicles behind me. I'll show you the end-to-end process of developing and deploying the model to the recently released Edge TPU hardware from Google.
Back in 2017 I gave a talk about doing real-time machine vision on a Raspberry Pi for recording rugby; however, it focused on squeezing the model's complexity down just so that it could run two predictions a second... barely.
Come along and see how I used new TPU hardware (specifically the Coral Edge TPU) so that I didn't have to compromise on model quality or prediction speed when building a bike gadget that performs low-latency object detection for better danger awareness whilst cycling. All on a device the same size as a Raspberry Pi!
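To give a flavour of what inference on the Coral looks like, here's a minimal sketch using the PyCoral API; the model and image file names are hypothetical placeholders, and the real pipeline on the bike differs in detail.

```python
# A minimal sketch of Edge TPU object detection with the PyCoral API.
# The model and frame file names here are hypothetical placeholders.
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

# Load a detection model that has been compiled for the Edge TPU.
interpreter = make_interpreter('ssd_mobilenet_v2_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the camera frame to the model's expected input size.
frame = Image.open('frame.jpg').convert('RGB')
frame = frame.resize(common.input_size(interpreter))
common.set_input(interpreter, frame)

# Inference itself runs on the TPU, not the host CPU.
interpreter.invoke()

# Each detection carries a class id, confidence score, and bounding box.
for obj in detect.get_objects(interpreter, score_threshold=0.5):
    print(obj.id, obj.score, obj.bbox)
```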
I'll walk through the process and pitfalls of the complete project, which includes:
After all is said and done, you should have a solid grounding in the strengths and weaknesses of Edge TPUs and the end-to-end process of using them in your own projects. No doubt you'll walk away with some quirky new ML project ideas :)