I would like to show how I have been using Python and its data science tools to interpret large datasets retrieved from modern equipment used in neonatal intensive care: mechanical ventilators and patient monitors. I would also like to talk about my journey to Python: why it is important for professionals without a computational background to learn coding, and how I think it is easiest to do so.
Intensive data collection is a cornerstone of modern intensive care. State-of-the-art computerised life support equipment (e.g. mechanical ventilators, patient monitors) displays multiple parameters obtained at a high sampling rate. However, busy clinicians can attend to only a fraction of these data in real time; moreover, the data are very rarely stored or analysed systematically even later.
Around three years ago I started to download anonymised data from the mechanical ventilators and patient monitors of the critically ill babies we have been looking after on the Neonatal Intensive Care Unit. At a sampling rate of 100 Hz, I have by now collected well over 1 billion data points. To process, analyse and visualise these large datasets I have employed Python and its data science tools: Jupyter Notebook, numpy, pandas, matplotlib, seaborn, scipy and scikit-learn. In my talk I would like to show how I have used these tools to gain insight into these data. I would also like to demonstrate why it is useful for people with specialist knowledge and interest to learn how to code, why Python is a good choice for them, and how I think this is best achieved.
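To give a flavour of this kind of workflow, the sketch below downsamples one minute of a simulated 100 Hz waveform to per-second summary statistics with pandas. The signal name, units and values are purely illustrative (synthetic data standing in for a ventilator's airway-pressure export), not the actual recordings or analysis from the talk.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for one minute of a 100 Hz airway-pressure waveform
# (illustrative only; real ventilator exports differ in format and naming).
rng = np.random.default_rng(0)
index = pd.date_range("2020-01-01", periods=100 * 60, freq="10ms")  # 100 Hz
signal = 15 + 5 * np.sin(np.linspace(0, 60 * 2 * np.pi, len(index)))
pressure = pd.Series(signal + rng.normal(0, 0.5, len(index)),
                     index=index, name="paw_cmH2O")

# Downsample the high-frequency waveform to 1-second mean/min/max,
# a common first step before plotting or further analysis.
summary = pressure.resample("1s").agg(["mean", "min", "max"])
print(summary.head())
```

Resampling like this reduces millions of raw samples to a table small enough to inspect, plot or feed into further statistical analysis.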
I hope that my talk will be of interest both to people who have recently started using these data tools for their own purposes and to those who are interested in medical big data, its analysis and its future challenges.