Problem statement: building a scalable Neo4j-based API.
Getting down to business:
- Pack your stuff: Dockerize the API + Neo4j
- Data loading (Py2neo) and persistence
- Container registry on ECR, Load Balancer, Auto Scaling Group, ECS
Scenario guide:
- API: enable caching + API keys
- Maintenance: automated deployment of code improvements and changes
- Cost: automated scale-in / scale-out
- Load testing and unit testing
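The data-loading step can be sketched as batched Cypher UNWIND statements. A minimal sketch, assuming a `people` list of dicts; the label, property names, and batch size are illustrative, and in practice each (query, batch) pair would be sent to Neo4j via py2neo's `Graph.run`:

```python
# Sketch of batched data loading for Neo4j.
# In practice each batch would be executed with py2neo, e.g.:
#   from py2neo import Graph
#   graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))
#   graph.run(LOAD_QUERY, rows=batch)
# The URL and credentials above are illustrative assumptions.

LOAD_QUERY = """
UNWIND $rows AS row
MERGE (p:Person {id: row.id})
SET p.name = row.name
"""

def batches(rows, size=1000):
    """Yield fixed-size chunks so a single transaction never grows unbounded."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

people = [{"id": i, "name": f"person-{i}"} for i in range(2500)]
chunks = list(batches(people, size=1000))
print(len(chunks))      # 3 batches: 1000 + 1000 + 500
print(len(chunks[-1]))  # 500
```

Batching with UNWIND keeps each transaction bounded, which matters once the dataset no longer fits comfortably in one commit.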
How do you build a scalable API that requires minimal maintenance afterward? In real-world scenarios the incoming load on an API changes dramatically, so the API needs to be flexible enough to handle it. We will start with what a graph database is and work our way up to deploying one on the cloud in a scalable and secure fashion. Initially the number of running instances is limited to one: if more requests start hitting the server, it should scale out automatically, and as requests diminish, the number of running instances should scale in to save cost. The discussion will show how this process can be completely automated, so that all change maintenance requires from us is a docker push; the rest is taken care of on the cloud.
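The scale-out/scale-in behaviour described above is commonly expressed as a target-tracking policy on the ECS service. A minimal sketch that builds the policy configuration as a plain dict; the target value, cooldowns, cluster and service names are illustrative assumptions, and in practice the dict would be passed to boto3's Application Auto Scaling client via `put_scaling_policy`:

```python
# Sketch of a target-tracking auto-scaling policy for an ECS service.
# All names and numbers below are illustrative assumptions.

def cpu_tracking_policy(target_cpu=60.0, cooldown_out=60, cooldown_in=300):
    """Scale out when average CPU exceeds the target; scale in slowly."""
    return {
        "TargetValue": target_cpu,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization",
        },
        "ScaleOutCooldown": cooldown_out,  # seconds before another scale-out
        "ScaleInCooldown": cooldown_in,    # longer, to avoid flapping
    }

policy = cpu_tracking_policy()

# In practice (hedged -- requires AWS credentials and an existing service):
#   import boto3
#   client = boto3.client("application-autoscaling")
#   client.put_scaling_policy(
#       PolicyName="api-cpu-tracking",
#       ServiceNamespace="ecs",
#       ResourceId="service/my-cluster/my-api",   # assumed names
#       ScalableDimension="ecs:service:DesiredCount",
#       PolicyType="TargetTrackingScaling",
#       TargetTrackingScalingPolicyConfiguration=policy,
#   )
print(policy["TargetValue"])
```

Making the scale-in cooldown longer than the scale-out cooldown is the usual design choice: the service reacts quickly to load spikes but releases capacity conservatively.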
This talk will also cover scenarios such as enabling caching and security at the API level, and the different ways a load balancer can be set up (Classic vs. Application Load Balancer). There are plenty of hassle-free resources for load testing; one such example will be discussed here. We will also dive into unit testing for our API, to support automated Travis builds.
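API-level caching and API-key checks can both be expressed as small decorators around a request handler. A minimal stdlib-only sketch; the key value, TTL, and handler shape are illustrative assumptions, not a specific framework's API:

```python
import time
from functools import wraps

VALID_KEYS = {"demo-key-123"}  # assumed key store; a real API would use a DB or secret store

def require_api_key(handler):
    """Reject requests whose api_key is not registered."""
    @wraps(handler)
    def wrapper(*args, api_key=None, **kwargs):
        if api_key not in VALID_KEYS:
            return {"status": 401, "body": "invalid API key"}
        return handler(*args, **kwargs)
    return wrapper

def ttl_cache(seconds=30):
    """Cache responses per argument tuple for a short window."""
    def deco(handler):
        store = {}
        @wraps(handler)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and now - hit[0] < seconds:
                return hit[1]          # served from cache
            result = handler(*args)
            store[args] = (now, result)
            return result
        return wrapper
    return deco

calls = {"n": 0}

@require_api_key
@ttl_cache(seconds=30)
def get_person(person_id):
    calls["n"] += 1  # counts real (uncached) lookups
    return {"status": 200, "body": f"person-{person_id}"}

print(get_person(1, api_key="demo-key-123")["status"])  # 200
print(get_person(1, api_key="demo-key-123")["status"])  # 200, from cache
print(calls["n"])                                       # 1
print(get_person(1, api_key="wrong")["status"])         # 401
```

Decorators like these are also easy to unit-test in isolation (for example, asserting that a repeated call does not increment the lookup counter), which is exactly the kind of test an automated Travis build would run.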