CLOUD DEPLOYMENT WORKFLOW

Serikiayodele
7 min read · Apr 21, 2021

I’ll try to break things down as simply as possible. This is a workflow that covers the complete process of automating and deploying projects, and it walks through the technical details of the important concepts in a way that should make it easier to learn and understand them from other sources, since these are not things I often see explained this way. We’ll start with:

Terraform and infrastructure as code

Continuous Integration and Continuous Deployment

Networks and Subnets

Bastion Host

TERRAFORM

Terraform allows you to manage your infrastructure as code. Infrastructure as code is the practice of managing and provisioning computing infrastructure through machine-readable files, instead of physical hardware configuration or interactive configuration tools. It uses text files to describe and deploy whatever we need to run our code: IP addresses, clusters, networks and so on.

If your infrastructure is in code form, you don’t have to create and delete resources manually. For example, say we create 10 servers today, the next day we decide we don’t need 4 of them anymore and delete them, and then later we realize those 4 servers could actually be helpful and start creating them manually again. Instead of that tedious process, we can just write Terraform scripts that create and delete infrastructure whenever we want, in a much more efficient way. Terraform also helps us move from one cloud provider to another with minimal changes: you can move your whole infrastructure, and create and delete it too.
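As a rough sketch of what that day-to-day workflow looks like (assuming Terraform is installed and there is already a main.tf file describing the servers; the file name here is just the conventional one):

```
terraform init       # download the provider plugins for your cloud provider
terraform plan       # preview exactly what will be created, changed or destroyed
terraform apply      # create the 10 servers described in main.tf

# the next day: delete the 4 unwanted server blocks from main.tf, then
terraform apply      # Terraform removes exactly those 4 servers

# changed your mind? restore the blocks (or git revert) and apply again
terraform apply      # the 4 servers come back
```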

An additional feature is that we can manage changes. Say we create a server manually, configure it in a particular subnet, set up rules and occasionally make edits; it would be pretty difficult to keep track of all those changes. But if we are using Terraform, after editing our scripts and deploying we can simply do git add, git commit, git push and everything ends up in GitHub, so we can check the Git history for any changes we might have made. That is what Terraform allows us to do.
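That change-tracking loop is nothing more than the usual Git workflow applied to the Terraform files (the file name and commit message below are just placeholders):

```
git add main.tf
git commit -m "move the web servers into the private subnet"
git push
git log -p main.tf    # review every change ever made to this piece of infrastructure
```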

CI/CD (CONTINUOUS INTEGRATION AND CONTINUOUS DEPLOYMENT)

CONTINUOUS INTEGRATION

Continuous integration allows us to automatically integrate our code. Say we have code and we want to make some changes available to end users; before that happens, we first want to lint the code, that is, check that its style and structure are in order, then maybe build a Docker container and test that container.

Linting our code is process A and testing our Docker container is process B. With continuous integration we can apply logic, for example deploying only when both process A and process B pass, or only when process A passes. Workflows like this around testing, building and deploying can be automated with continuous integration.

So we have pipelines, these pipelines have stages, and we can configure the different stages using workflows.
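The “deploy only if both A and B pass” logic can be sketched as a plain shell script; real CI systems express the same idea as pipeline stages. The tool names, image tag and test command below are assumptions for the sake of the example, not something prescribed by any particular CI system:

```
#!/bin/sh
set -e                            # stop at the first step that fails

# process A: lint the code (flake8 is just an example linter)
flake8 app/

# process B: build the Docker container and run the tests inside it
docker build -t myapp:ci .
docker run --rm myapp:ci pytest   # assumes pytest is installed in the image

# this line is only reached if A and B both passed
echo "all checks passed, safe to hand over to deployment"
```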

CONTINUOUS DEPLOYMENT

Continuous deployment is the other part of the equation. It typically involves taking the new output we have (e.g. Docker containers) and making it available to end users. If we are using Kubernetes, for example: we have built our Docker containers and pushed them to Docker Hub, and we want to make Kubernetes aware of the changes that have taken place, so the new containers are pulled from Docker Hub. Once they are pulled, there are different deployment strategies we can use.

Say there is an update to container A. Kubernetes keeps track of updates, so it knows it has to replace the previous version of container A with the new version from Docker Hub. How do we go about this? We could delete all the currently running instances of the service, but that means there is going to be downtime, because people using the service won’t be able to carry out any task until the new instances start up.

There are better deployment strategies. For example, if we have 10 instances of the service running, we can configure things so that we kill 2 and start 2 new ones; if the new ones work properly we kill another 2 and repeat the process until the old instances are completely replaced by new ones. Another configuration could be to kill 30% of the old instances, replace them with 30% new ones and leave them running for a while; if everything is okay we then kill and replace the rest. One last configuration could be to replace the old instances with new ones all at once and, if anything goes wrong, switch back to the old ones and troubleshoot what happened. The idea of CD is that we want to be able to automate deployment without downtime, because the naive delete-everything approach is definitely going to give us downtime.
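A rough sketch of what that looks like with kubectl (the deployment name “myapp” and the image tag are placeholders I’m assuming for the example):

```
# point the existing deployment at the new container version
kubectl set image deployment/myapp myapp=myrepo/myapp:v2

# watch Kubernetes replace old pods a few at a time instead of all at once
kubectl rollout status deployment/myapp

# if anything goes wrong, switch back to the previous version
kubectl rollout undo deployment/myapp
```

How many old instances get replaced at a time (the “2 out of 10” or “30%” above) is controlled by the deployment’s rolling update settings, so the strategy lives in configuration rather than in manual steps.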

So basically, CI and CD work together: CI manages building, testing and making the end product available, and CD continues from there by deploying it in an intelligent manner.

CI/CD is for the actual application, i.e. the frontend, backend, services etc., while infrastructure as code covers the databases, servers, clusters, file storage etc.

ABOUT NETWORKS AND SUBNETS

In a network we can create subdivisions called subnets. Subnetting works by splitting up the IP addresses: say we have 200 IP addresses in our network, we can allocate a fraction of them to one subnet and another fraction to another subnet.
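As a sketch of what that split looks like in practice (the CIDR ranges and the VPC ID below are made-up examples): a /24 network gives you 256 addresses, and you can carve it into two /25 subnets of 128 addresses each:

```
# a VPC with 256 addresses (10.0.0.0 - 10.0.0.255)
aws ec2 create-vpc --cidr-block 10.0.0.0/24

# the first half of those addresses becomes one subnet...
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.0.0/25

# ...and the second half becomes another
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.0.128/25
```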

Every device in your network has an IP address, but the issue is that these addresses can’t all be public, because public IP addresses are reachable from anywhere on the internet and are expensive. There is a protocol called NAT (network address translation): you keep a few public addresses, and NAT provides a gateway. We can configure this gateway so that the devices in the network access the internet through it. So if you visit a website from one server and visit another website from another server, they both go out through the same NAT gateway; anyone outside the network sees the same IP address and might think it’s just one computer, but in reality it’s multiple computers sharing the same IP address.

The idea is that when building cloud infrastructure, it’s common practice to have a public subnet that contains the things that are not so sensitive and a private subnet that contains the sensitive ones. The public subnet is routed directly to the internet, while the private subnet is internal and has no direct route out. We can still give the private subnet access to the internet by configuring routing tables. (We have routers that contain rules on how to reach different places on the internet; these routers have routing tables that store information about different network addresses and know the best path to get there.) If any computer in the private subnet wants to access a resource that is not within the private subnet, its routing table sends that traffic to the NAT gateway, which makes the request over the internet on its behalf, but no one on the internet can reach resources in the private subnet.
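A rough sketch of that wiring with the AWS CLI (all the IDs below are placeholders): the NAT gateway itself sits in the public subnet, and the private subnet’s route table sends all non-local traffic to it:

```
# the NAT gateway lives in the public subnet and uses a public (Elastic) IP
aws ec2 create-nat-gateway --subnet-id subnet-public123 --allocation-id eipalloc-abc123

# in the private subnet's route table, anything outside the VPC goes to the NAT gateway
aws ec2 create-route \
  --route-table-id rtb-private123 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0abc123

# attach that route table to the private subnet
aws ec2 associate-route-table --route-table-id rtb-private123 --subnet-id subnet-private123
```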

BASTION HOST

Let’s say you want to access a computer in a private subnet. If you set up SSH you would be able to access the computer, but you would have to use the terminal, because SSH is terminal controlled. There is another protocol similar to SSH, VNC, which provides video capability, meaning you can operate the computer through its GUI like a normal computer. It’s typically used with Raspberry Pis, for example. Say we have a Raspberry Pi and connect it to the network; since it doesn’t have a screen we could attach a monitor to it and use it like a normal computer, but we wouldn’t want to get a dedicated monitor just for the Raspberry Pi, so we set up VNC on it and then VNC in from our normal computer. We can then operate the Raspberry Pi as if it were a normal desktop. SSH is just terminal access, which is just as powerful as the GUI.

If we have a Kubernetes node in the private subnet, we cannot SSH into it directly, because it is not in the public subnet and hence is not reachable from outside. So here’s the trick: we have computers in the public subnet, and since both the private and public subnets are in the same VPC they can communicate with each other. We can SSH into a computer in the public subnet, and then, inside that SSH connection, open another SSH connection to the computer in the private subnet. The computer in the public subnet that we jump through is called the bastion host, so it’s like a double SSH.
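A sketch of that double SSH (the user names and IP addresses are placeholders): modern OpenSSH can do both hops in a single command with the -J (jump host) option:

```
# one command, two hops: go through the bastion first, then to the private node
ssh -J ubuntu@203.0.113.10 ubuntu@10.0.1.25

# or the manual version:
ssh ubuntu@203.0.113.10    # first SSH into the bastion in the public subnet
ssh ubuntu@10.0.1.25       # then, from inside the bastion, SSH to the private node
```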

This makes the bastion host a weak point that people could use to access what should not be accessible, which means we have to protect the bastion host very carefully.
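One common way to do that (a sketch; the security group ID and IP address are placeholders) is to only allow SSH to the bastion from a known address, so the rest of the internet cannot even attempt to connect:

```
# allow SSH (port 22) to the bastion's security group only from one trusted address
aws ec2 authorize-security-group-ingress \
  --group-id sg-0bastion123 \
  --protocol tcp --port 22 \
  --cidr 198.51.100.7/32
```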

There are no references cause I wrote this originally :).
