There are many reasons to favor cloud services, but how do we ensure consistency between development and deployment? With Docker, we can "ship" a project to different platforms in an identical environment. Going one step further, we can also define the cloud infrastructure as code with Terraform. As an example, we will deploy a Jupyter Notebook on Google Cloud Platform using both tools.
In this talk, we will use a concrete task, renting a GPU on Google Cloud Platform to train a neural network, as an example of how an application can be deployed on a cloud platform with Docker and Terraform. The goal is to have a Jupyter Notebook running on a Google Compute Engine instance in an environment with TensorFlow (GPU version) and other libraries installed.
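A minimal Dockerfile for such an environment might look like the sketch below; the base image tag and the extra libraries are illustrative choices, not necessarily the exact ones used in the talk:

```dockerfile
# Start from TensorFlow's official GPU image with Jupyter preinstalled
FROM tensorflow/tensorflow:latest-gpu-jupyter

# Add any extra data-science libraries the project needs (illustrative)
RUN pip install --no-cache-dir pandas scikit-learn matplotlib

# Jupyter's default port
EXPOSE 8888
```

Building once and pushing this image to a registry is what lets the same environment run both on a laptop and on the cloud instance.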
First we will briefly explain what Docker and Terraform are, for audience members with no experience in either tool, and cover some basic concepts of both. After that, we will walk through each step of the workflow: designing and building a Docker image, setting up a pipeline on GitHub and Docker Hub, writing the Terraform code and the startup script, and launching an instance. From this, the audience will get an idea of how the two tools can be used together to deploy an app onto a cloud platform and what advantages each brings to the process.
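The Terraform side of that workflow can be sketched roughly as follows. This is a hedged outline, not the talk's exact configuration: the project ID, zone, machine type, accelerator type, and startup-script filename are all placeholders.

```terraform
# Sketch: one GCE instance with a GPU attached; a startup script
# pulls the Docker image and launches Jupyter. Values are placeholders.
provider "google" {
  project = "my-ml-project"
  zone    = "us-central1-a"
}

resource "google_compute_instance" "jupyter" {
  name         = "jupyter-gpu"
  machine_type = "n1-standard-4"

  boot_disk {
    initialize_params {
      image = "cos-cloud/cos-stable" # Container-Optimized OS
    }
  }

  guest_accelerator {
    type  = "nvidia-tesla-t4"
    count = 1
  }

  # GPU instances cannot live-migrate during host maintenance
  scheduling {
    on_host_maintenance = "TERMINATE"
  }

  network_interface {
    network = "default"
    access_config {} # ephemeral external IP
  }

  # Runs at boot: pull the image and start the Jupyter container
  metadata_startup_script = file("startup.sh")
}
```

With this in place, `terraform apply` creates the instance and `terraform destroy` tears it down, so the GPU is only paid for while training runs.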
This talk is for people with no experience deploying applications on cloud services who would benefit from computational reproducibility and cloud computing: potentially data scientists, analysts, or tech practitioners who do not have a software development background. We will use an example that is simple but useful in data science to demonstrate basic usage of Docker and Terraform, which should benefit beginners who would like to simplify their workflow with these tools.