Replacing monolithic apps — or building greenfield ones — with microservices is a growing consideration for development teams that want to increase their agility, iterate faster and move at the speed of the market. By giving different teams greater autonomy and allowing them to work in parallel, microservices help organizations accomplish more in less time, and they yield code that is less brittle and therefore easier to change, test and update.
Docker containers are a natural fit for microservices as they inherently provide autonomy, automation and portability. Specifically, Docker is known for its ability to encapsulate a particular application component and all of its dependencies, enabling teams to work independently without requiring the underlying infrastructure to support every single component they are using.
In addition, Docker makes it easy to create lightweight, isolated containers that work with one another. Because the application is decoupled from the underlying substrate, it is highly portable. Finally, it is easy to create a new set of containers: orchestration solutions such as Docker Swarm, Kubernetes or AWS ECS make it simple to spin up new services composed of multiple containers, all in a fully automated way. Together, these properties make Docker a natural substrate on which to run microservices.
All that said, there are several process and technology design points to consider when architecting a Docker-based microservices solution. Doing so will help avoid costly rework and other headaches down the road.
Process considerations:
1. How will an existing microservice be updated?
The fundamental reason developers adopt microservices is to speed up development, which in turn increases how often each microservice is updated. To leverage microservices fully, it is critical that this update process be optimized.
However, this process is made up of several components, and decisions come with each step. Let us explain with the help of three examples. First, there is the question of whether to set up continuous deployment or a dashboard where a person presses a button to deploy a new version. The tradeoff is higher agility with continuous deployment versus tighter governance with manually triggered deployment. Automation can embed security controls directly into the pipeline, allowing both benefits to coexist. Developers need to decide on their workflows, what automation they require and where, as in the sketch below.
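As a minimal illustration of how small the difference between these two models can be, here is a pipeline sketch in GitLab CI syntax (any CI system works; the image name and deploy script are hypothetical placeholders). A single `when: manual` line is all that separates a manually gated deployment from a fully continuous one:

```yaml
stages:
  - build
  - deploy

build_image:
  stage: build
  script:
    - docker build -t registry.example.com/myservice:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/myservice:$CI_COMMIT_SHORT_SHA

deploy_production:
  stage: deploy
  script:
    - ./deploy.sh registry.example.com/myservice:$CI_COMMIT_SHORT_SHA
  # Keep this line and a person presses a button to deploy;
  # delete it and every pipeline run deploys automatically.
  when: manual
```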
Second, it is important for businesses to consider where the actual container image will be built. Will it be built locally, then pushed and promoted through the pipeline? Or will the code first be compiled into artifacts and only then packaged into a Docker image that travels all the way to production? If you go with a solution where the container is built in the pipeline, it is important to consider where it will be built and what tooling will surround it.
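One common pattern that addresses this question is a multi-stage Dockerfile, which compiles the artifact and packages the runtime image in a single pipeline step. This is a sketch only; the base images, paths and service name are illustrative:

```dockerfile
# Stage 1: compile the code into an artifact inside the pipeline
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/myservice ./cmd/myservice

# Stage 2: copy only the artifact into a slim runtime image;
# this image is what travels all the way to production
FROM gcr.io/distroless/base-debian12
COPY --from=build /out/myservice /usr/local/bin/myservice
ENTRYPOINT ["/usr/local/bin/myservice"]
```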
Third, the actual deployment strategy must also be thought through. Specifically, you can update a microservices architecture through a blue-green deployment, where a new set of containers is spun up and then the old ones are taken down. Or, you can opt for a rolling update, replacing the service's containers one at a time: a new container is created and put in service while one of the old ones is taken out. These decisions are multifaceted and require consideration of several factors, including current workflows, the skill levels of operators and any technology preferences.
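For example, a rolling update in Docker Swarm mode can be expressed in a single command (the service and image names here are hypothetical):

```sh
# Replace one container at a time, waiting 10s between replacements;
# the remaining replicas keep serving traffic throughout.
docker service update \
  --image registry.example.com/myservice:2.0 \
  --update-parallelism 1 \
  --update-delay 10s \
  myservice
```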
2. How will developers start a brand new service?
Starting a new service is a fundamental requirement of microservices. As a result, the process for starting a brand new service should be made as easy as possible. In this vein, an important question to ask is, ‘How will you enable developers to start a new service in a self-service fashion without compromising security and governance?’ Will it require going through an approval process such as filing an IT request? Or, will it be a fully automated process?
While I recommend erring on the side of using as much automation as possible, this is definitely a process point development teams will want to think through in advance to ensure they correctly balance the need for security, governance and self-service.
3. How will services get a URL assigned?
This question really goes hand-in-hand with starting a brand new service. A new URL or subcontext (e.g., myurl.com/myservice) needs to be assigned each time a service is created, and the process for assigning it should ideally be automated. Options include a self-service portal for assigning URLs manually, or a process whereby the URL is derived automatically from the name of the Docker container and any tags applied to it, as sketched below.
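As one concrete illustration of the fully automated option, reverse proxies such as Traefik can derive a service's URL from container labels at creation time. A minimal Docker Compose sketch with illustrative names, assuming a Traefik instance is already running against the Docker socket:

```yaml
services:
  myservice:
    image: registry.example.com/myservice:1.0
    labels:
      - "traefik.enable=true"
      # Routes myurl.com/myservice to this container automatically,
      # with no manual URL assignment step
      - "traefik.http.routers.myservice.rule=PathPrefix(`/myservice`)"
```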
Again, just as with starting a new service, I recommend erring on the side of using as much automation as possible — and therefore spending ample time thinking through this important design point well in advance.
4. How will container failure be detected and dealt with?
A key requirement of modern infrastructure is that it doesn't require 'babysitting': it can self-heal and self-recover when something goes down. As a result, it is paramount to have a process for detecting failure and a plan for handling it when it occurs. For example, there should be a pre-defined way to detect a container whose application is no longer running, whether through a networking check or log parsing, and a defined process, ideally automated, for replacing the failed container with a new one. While there are many approaches, the design point is to make sure these requirements are met.
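In Docker itself, a networking check of this kind can be declared directly on the service. A Docker Compose sketch, assuming the application exposes a /health endpoint on port 8080 and that curl is available inside the image:

```yaml
services:
  myservice:
    image: registry.example.com/myservice:1.0
    # Docker restarts the container if the process exits
    restart: unless-stopped
    # Docker marks the container unhealthy when this check fails;
    # orchestrators such as Swarm then replace the unhealthy task
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```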
5. How will the code for each microservice be structured?
We want a fully automated process for building and deploying new services. Yet when the number of services grows large, managing that automation can quickly become cumbersome, since multiple versions of the process — one for each service — will need to exist. In these cases, it is imperative that each process be kept homogeneous.
A very important decision here is how each microservice's code will be structured. For example, the Dockerfile should always appear in the exact same place, and whatever is specific to the service should be contained within the Dockerfile. In this way, the build process can be made microservice-agnostic. Similarly, other files such as a Docker Compose file or a task definition for AWS ECS should consistently be put in the same place across all services, so that processes can run in a homogeneous fashion, as the sketch below shows.
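For instance, with such a convention in place, one service-agnostic pipeline step can build and push any service; the variable names and registry URL in this sketch are hypothetical:

```sh
# Works unchanged for every service, because the Dockerfile always
# sits at the repository root and carries everything service-specific.
docker build -t registry.example.com/$SERVICE_NAME:$GIT_SHA .
docker push registry.example.com/$SERVICE_NAME:$GIT_SHA
```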
Technology considerations:
6. What tool will be used to schedule containers on compute nodes?
Schedulers and orchestrators are important tools: schedulers allocate the resources needed to execute a job and assign work to those resources, while orchestrators ensure that the resources necessary to perform the work are available when needed. There are many tool choices for container orchestration. Those typically considered are ECS for customers on AWS, and Docker Swarm or Kubernetes for those who would like a vendor-agnostic solution. There are several angles for organizations to weigh in making this decision, including portability, compatibility, ease of setup, ease of maintenance, the ability to plug-and-play and having a holistic solution.
7. What tool will be used to load balance requests between the containers of the same service?
High availability and scalability make it critical to run more than one container per microservice. For services that are not natively clustered, for example web-based microservices developed in house, an external load balancer is needed to spread incoming traffic across the different containers of the same service. There are several options, from taking advantage of AWS ELB in Amazon environments to open source tools such as NGINX or HAProxy that can act as load balancers.
This is an important technology decision that should be thoroughly evaluated. Some salient design points to consider in your evaluation: requirements for session stickiness; the number of services you plan to have; the number of containers per service; and any load-balancing algorithms you would like to use. The NGINX sketch below illustrates the last two points.
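As an illustration, NGINX expresses these choices in a few lines. The addresses are illustrative; in practice they would be kept in sync with the scheduler via service discovery or configuration templating:

```nginx
# Balance requests across the containers of one service.
# The default algorithm is round-robin; least_conn is one alternative.
upstream myservice {
    least_conn;
    # ip_hash;             # swap in instead if you need session stickiness
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
}
```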
8. What tool will be used to route traffic to the correct service?
This design point goes hand-in-hand with load balancing, as it directly addresses application-level routing. As pointed out earlier, individual URLs or subcontexts are assigned per service. When traffic hits the microservices cluster, it must be routed to the right microservice based on the URL it is addressed to. Here, tools such as HAProxy, NGINX or the AWS Application Load Balancer (ALB) apply.
AWS ALB was introduced in August 2016, and in the short time it has been available, a debate has emerged as to which tool is best for application load balancing. Two key questions to ask in making the right decision: How many microservices do you plan to have, and how complex do you want your routing mechanism to be? The sketch below shows the NGINX flavor of such routing.
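Continuing the NGINX example from the previous question, subcontext routing adds one location block per service, each pointing at that service's upstream (the names here are illustrative):

```nginx
server {
    listen 80;

    # Route each subcontext to the matching service's upstream
    location /myservice/ {
        proxy_pass http://myservice;
    }
    location /orders/ {
        proxy_pass http://orders;
    }
}
```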
9. What tool will be used for secrets?
With the number of microservices in a given application expected to increase over time, and with modern applications relying more and more on external SaaS solutions, security becomes both more important and more difficult to manage. For microservices to communicate with each other, they typically rely on certificates and API keys to authenticate themselves to the target service. These API keys, also known as secrets, need to be managed securely and carefully. As they proliferate, traditional solutions, such as manually injecting them at deployment time, no longer work: there are simply too many secrets to manage, and microservices require automation.
Organizations need to settle on an automated way to get secrets to containers that need them. A few potential solutions include:
An in-house solution built for saving secrets in encrypted storage, decrypting them on the fly and injecting them into the containers using environment variables.
AWS IAM roles that can inject Amazon API credentials. However, this solution is limited to Amazon credentials and can only be used to access secrets stored in other Amazon services.
HashiCorp Vault, which uses automation to handle both dynamic and static secrets effectively. Vault is a very extensive solution with several features unavailable in other solutions, and we are finding it an increasingly popular choice (a minimal sketch follows this list).
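As a minimal sketch of the Vault approach, a wrapper script could fetch a secret at container start and pass it in as an environment variable. The secret path and key names here are hypothetical; vault kv get is the standard Vault CLI read command:

```sh
# Fetch the secret from Vault's key-value store at deploy time...
export API_KEY=$(vault kv get -field=api_key secret/myservice/prod)

# ...and inject it into the container as an environment variable
docker run -e API_KEY registry.example.com/myservice:1.0
```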
Your answer to this technology question depends on how many secrets you have; how you expect that number to grow; your security and compliance needs; and how willing you are to change your application code to facilitate secret handling.
10. Where will SSL be terminated?
One question that arises frequently, especially for microservices that serve web traffic, is: Where should SSL be terminated? Typical design factors to consider include your security and compliance requirements. One option is to terminate at the application or network load balancer, for example at AWS ELB or ALB. A second option is to terminate SSL at an intermediate layer such as NGINX, or at the application container itself.
Certain compliance initiatives, like HIPAA, require that all traffic be encrypted. Thus, even if you decrypt at the load balancer, traffic must be re-encrypted before it is sent to the containers running the application. On the flip side, the advantage of terminating at the load balancer is that you have a central place for handling SSL certificates, and fewer things have to be touched when an SSL certificate expires or needs to be rotated.
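To make the intermediate-layer option concrete, here is an NGINX sketch that terminates SSL and re-encrypts traffic on its way to the application containers, as HIPAA-style requirements demand. The certificate paths are illustrative, and the upstream name assumes a block like the earlier sketches:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/myservice.crt;
    ssl_certificate_key /etc/nginx/certs/myservice.key;

    location / {
        # https here re-encrypts traffic before it reaches the containers
        proxy_pass https://myservice;
    }
}
```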
Elements to consider as you make a design decision include your specific compliance and security requirements; the ability of your applications to encrypt and decrypt data; and your container orchestration platform, as some can encrypt data seamlessly. The combination of all of the above should be the basis for your SSL termination decision.
While all these design and technology points may feel overwhelming, making the right choices will have long-term implications for your organization's success with its microservices architecture. As with painting a house, more than half the work is in the preparation: just as choosing the right primer and properly taping the walls set up a good paint job, setting the right foundations and process boundaries sets up a successful Docker-based microservices effort. Don't short-change your preparation, and you'll end up with an end product that delivers on your organization's most critical microservices goals.