Deep learning is hot and happening, in the medical field too, yet implementation there is slow. Why is that? How do you approach such problems, and how do you make them a success? Where do you get the privacy-sensitive data from? Do pre-trained networks work? Can we make it scalable in the cloud? I will address these issues with a case study: segmentation of 4D heart MRI for heart function analysis.
Deep learning is hot and happening, in the medical field too, yet implementation there is slow. Why is that? How do you approach such problems, and how do you make them a success? Where do you get the privacy-sensitive data from? Do pre-trained networks work? Can we make it scalable in the cloud? I will address these issues with a segmentation problem on 4D heart MRI: a project by the UMCG, Siemens, GoDataDriven, Binx.io and Google Cloud. Currently, doctors manually contour structures of the heart in the images to assess heart function and to aid diagnosis and treatment planning. We have built a model and application in Python (Keras), running on the cloud, to automate this segmentation. I'll discuss where to start with your network architecture and explain the deep learning technique of transposed convolutions, also known as deconvolutions (and why they should not be called that). I will also share our experiences, good and bad, with building this on Google Cloud. Let's (semi-)automate cardiac image analysis, and soon we will be able to predict a heart attack!
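As background for the transposed convolutions mentioned above: where a regular convolution gathers a neighbourhood of inputs into one output value, a transposed convolution scatters each input value across the output through the kernel, which is why it upsamples and is used in segmentation decoders. A minimal pure-Python 1D sketch (illustrative only; the function name and parameters are my own, not from the project, which would use a Keras layer such as `Conv2DTranspose`):

```python
def transposed_conv1d(x, kernel, stride=2):
    """1D transposed convolution: each input value 'stamps' a scaled
    copy of the kernel into the output; overlapping copies are summed.
    Output length is (len(x) - 1) * stride + len(kernel)."""
    out = [0.0] * ((len(x) - 1) * stride + len(kernel))
    for i, v in enumerate(x):
        for j, w in enumerate(kernel):
            out[i * stride + j] += v * w
    return out

# A length-2 input is upsampled to length 5; the two stamped kernel
# copies overlap at index 2, where their contributions sum (1 + 2 = 3).
print(transposed_conv1d([1.0, 2.0], [1.0, 1.0, 1.0], stride=2))
# → [1.0, 1.0, 3.0, 2.0, 2.0]
```

This also hints at why "deconvolution" is a misnomer: the operation is not the mathematical inverse of a convolution, it is simply a convolution with the input-output connectivity reversed.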
- Diagnosis
- Medication
- Pacemaker indication
- Prognosis
- Future: heart attack prediction
- Scalability
- Hassle to get it to work