Angular JS and Spring Boot BFF — Step 3
Story 3 in “Initial Commit till Running on Cloud” Series
Lord be merciful to this sinner. I was saving the cloud for dessert, but now I am cornered. I have to explain the cloud before we can move any step further, because otherwise it will be tough for readers to visualize the architecture and where it is all going.
We will be creating an order-creation app. There will be an option for user registration and login. A logged-in user can create an order and check the status of his/her orders. To accomplish this, we will need a Virtual Private Cloud (VPC) in AWS with multiple subnets and multiple clusters of EC2 instances on which to deploy our dockerized applications.
Before we quickly learn how to set up a VPC in AWS, you should understand that AWS has different regions where its hardware resources are deployed. We will choose the one closest to our clients, for low latency and to meet taxation and legal requirements. Inside each region, for example Frankfurt (eu-central-1), there are multiple availability zones (eu-central-1a, eu-central-1b) to support highly available cloud resources. Imagine that an AWS region has more than one building, and each of these buildings hosts an availability zone (AZ). So even if one building collapses, we still have another AZ running. If the whole region is consumed by a tsunami, “Then we are shit outta luck”.
Virtual Private Cloud — VPC
A VPC is your own virtually isolated network infrastructure created in the AWS cloud. You have full control over it, from IP address ranges to the security inside this little cloud of yours. We can create multiple subnets inside the VPC, each with public or private visibility and its own IP range; a subnet is simply a smaller subset of the VPC’s IP addresses. Since we are building a highly available cloud, we will create two copies of each subnet in two different availability zones.
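As a sketch, the VPC with one public/private subnet pair could look like this in CloudFormation (the CIDR ranges are my own illustrative assumptions, not necessarily the values used in the template of this series):

```yaml
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16        # the whole VPC address space
      EnableDnsSupport: true
      EnableDnsHostnames: true

  PublicSubnetOne:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: eu-central-1a
      CidrBlock: 10.0.0.0/24        # subset of the VPC range
      MapPublicIpOnLaunch: true     # instances here get public IPs

  PrivateSubnetOne:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: eu-central-1a
      CidrBlock: 10.0.2.0/24        # no public IPs in this subnet
```

PublicSubnetTwo and PrivateSubnetTwo would be the same resources repeated in eu-central-1b with different CIDR blocks.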
A public subnet is attached to the internet directly. It can be reached from the big bad internet if you allow it. Any application that needs to be open to the public should run with one of the IPs allocated by this subnet.
The reason we have a private subnet is that we want certain applications to be insulated from direct attacks from the internet. This is also advantageous for AWS, because they don’t have to assign a public IP to these instances; public IPv4 addresses are a limited resource. The only way to access the resources in the private subnet is through the public subnet. We will deploy our whole microservice fleet in the private subnet.
Unlike instances in the public subnet, private subnet instances do not have internet connectivity by default. So we need NAT instances to let private subnet instances make outbound connections to the internet. This is needed, for example, to connect to an external database or to deploy applications downloaded from the internet. To avoid the “Great NAT robbery” of AWS, I have included a way to use t3a.nano instances as NAT gateways. This will reduce the operational cost.
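A hedged sketch of how the private subnet’s route table could send its default route through such a NAT instance (the resource names here are illustrative; note that the NAT instance itself must have its source/destination check disabled):

```yaml
  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC

  PrivateDefaultRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0   # all non-local traffic...
      InstanceId: !Ref NatInstance      # ...goes via the t3a.nano NAT instance

  PrivateSubnetOneRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      SubnetId: !Ref PrivateSubnetOne
```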
Internal Elastic Load Balancer
As we are deploying a High Availability (HA) environment, we will need a load balancer in both the private and public subnets. Imagine you are running two instances of user-microservice, one in eu-central-1a and one in eu-central-1b. If a request to user-microservice is made from bff-service, we need a load balancer to route the request across both user-microservice instances in a round-robin fashion.
Internet Facing Elastic Load Balancer
The same applies to our UI and API layer: we have multiple instances of the UI and API services, so we need the internet traffic to be balanced by an internet-facing Elastic Load Balancer. It also makes it easy to create an A record in our domain that routes all traffic to this load balancer, which then does the dynamic routing.
Elastic Container Service Clusters
We will dockerize our applications and deploy them to Elastic Container Service (ECS). We will create three clusters, which are nothing but groups of auto-scalable instances on which you can deploy your dockerized containers. ECS is very useful for automatic container-based service management.
We will create two t3a.nano instances in each of these clusters. All these clusters should be part of our VPC.
- API public cluster (In PublicSubnetOne and PublicSubnetTwo)
- Web public cluster (In PublicSubnetOne and PublicSubnetTwo)
- Microservice private cluster (In PrivateSubnetOne and PrivateSubnetTwo)
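The cluster resources themselves are simple; as a sketch (the cluster names are my assumptions, the EC2 instances join a cluster via the ECS agent’s `ECS_CLUSTER` setting in their user data):

```yaml
  ApiPublicCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: api-public-cluster

  WebPublicCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: web-public-cluster

  MicroservicePrivateCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: microservice-private-cluster
```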
We will route any request with host api.* on port 80 to the API public cluster instances. We will route all other port 80 requests to the Web public cluster, and requests on ports 8081 and 8082 to the Microservice private cluster. These settings are done in target groups, which we will see during the load balancer configuration.
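The api.* host rule could be expressed as a listener rule like the following sketch (the listener and target group names are placeholders of my own, not necessarily those in the template):

```yaml
  ApiHostRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref PublicHttpListener   # the port 80 listener (assumed name)
      Priority: 1
      Conditions:
        - Field: host-header
          Values:
            - api.*                          # e.g. api.example.com
      Actions:
        - Type: forward
          TargetGroupArn: !Ref ApiTargetGroup  # target group of the API public cluster
```

Requests that match no rule fall through to the listener’s default action, which would forward to the Web public cluster’s target group.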
To create this cloud in your AWS account, run the CloudFormation YAML template below.
You can also open the template in the CloudFormation Designer to understand all the resources visually.
Be ready to delete your stack immediately after checking it out, because the next few parts of this tutorial will take some time for me to publish. You wouldn’t want a bill piling up before you actually run something in it :)
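For reference, creating and tearing down the stack from the command line looks roughly like this (the stack and file names are placeholders):

```shell
# Create the stack from the template; --capabilities CAPABILITY_IAM is
# required if the template creates IAM roles, which an ECS setup typically does
aws cloudformation create-stack \
  --stack-name order-app-cloud \
  --template-body file://cloud.yaml \
  --capabilities CAPABILITY_IAM

# Delete the stack once you are done checking, so the billing stops
aws cloudformation delete-stack --stack-name order-app-cloud
```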
So now that we know the architecture, let’s create those microservices.
Going well? Please read the next part of this series, and kindly leave your feedback :)