
Docker Tutorial Series Part 2: Microservice Architecture

by Aditya Dixit, August 21st, 2018

Before taking a deep dive into Docker, I feel it is important to understand Microservice architecture and why it is a better alternative to traditional Monolithic applications. After that we will discuss how Docker can be leveraged to create loosely coupled microservices.

Here’s a link to the previous article in this series:


Docker Tutorial Series Part 1: Images, Layers and Dockerfile (medium.com)
_This article is first in a series of articles on Docker. Today, we are going to introduce the core concepts of Docker…_

Monolithic applications consist of an inner core that houses the business logic and an outer layer of adapters that interface with the real world. The business logic is segregated into modules that handle services, events and objects, while the adapters provide access to databases, payment systems, messaging engines, web APIs and user interfaces. Despite this logically modular design, the application is packaged and deployed as a single monolith.

During the initial stages, such applications are easy to develop, test, deploy and scale. But as the application’s complexity grows, each incremental change becomes more expensive. Developers, both old and new, face an uphill task trying to grasp the impact any code change will have on the application. Anyone who has worked, as I have, with enterprise apps comprising millions of lines of code that take forever to boot up and are nearly incomprehensible understands the sheer dread they invoke. Equally terrifying is trying to explain what you have learned to new hires. Adopting new technologies becomes almost impossible, as it would involve rebuilding the application from scratch.

To overcome these problems, organisations have started splitting their applications into several smaller, interconnected services. At this point in time, partitioning a system is more an art than a science, although several strategies exist to make the task easier. An application can be decomposed on the basis of service domain, business capability or operational responsibility. The goal is to minimize changes that span more than one microservice at a time.

A microservice is a mini-application with its own business logic and adapters. It implements a distinct set of features or functionality to serve its goal. It is developed and maintained by a small cross-functional team and is only loosely coupled with other microservices. Together, these properties mean that the team responsible for a microservice is free to develop new features at its own pace. Moreover, the team is free to choose its own technology stack, ensuring that it can adopt new technologies as it sees fit and can hire from a deeper pool of applicants.

Now, we come to the question of how Docker can help us implement ‘Microservice Architecture’. Docker packages each service into a self-contained entity that is independent of the underlying operating system environment. Each service can be deployed on its own at any physical or virtual location, independent of other services. Multiple instances of the same service can be spawned, and functionally linked services can be placed next to each other on the same machine. This freedom is achieved by building a Docker container and putting everything that a service needs to run inside it.

The first step in building a container is to create a configuration file called a Dockerfile. A Dockerfile contains a list of instructions to install application dependencies such as compilers, interpreters, libraries and third-party modules, copy source code files and run system commands. It also specifies environment variables and the ports open for transferring data and communicating with the outside world. This Dockerfile decouples a microservice from the rest of the application and allows the dev team to choose its technology stack independently of the rest of the organisation.
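As a concrete illustration, here is a minimal Dockerfile sketch for a hypothetical Python-based registration service; the file names, base image and port are assumptions for the example, not taken from a real project:

```dockerfile
# Base image: layers that rarely change go first so they stay cached.
FROM python:3.10-slim

WORKDIR /app

# Install third-party modules; this layer is rebuilt only when
# requirements.txt itself changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the source code last, since it changes most often.
COPY . .

# Environment variables and the port exposed to the outside world.
ENV APP_ENV=production
EXPOSE 8000

CMD ["python", "app.py"]
```

Each instruction here produces one layer of the eventual image, which is why the ordering of the lines matters.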

The second step is to create a Docker image, which is a blueprint of our microservice, by executing the steps listed in the Dockerfile. A Docker image is a collection of layers built on top of each other, one per instruction in the Dockerfile. What makes these layers special is that on each rebuild the Docker engine calculates a checksum for each layer and reuses the previously cached version if nothing in that layer has changed. Armed with this knowledge, we write our Dockerfile so that the layers most likely to remain unchanged across rebuilds come first, and the layers that change often, such as the one containing our source code, come later. This makes rebuilding an image far quicker and speeds up the deployment of new code changes.
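The build step is a single command; a sketch, assuming a Dockerfile sits in the current directory and a Docker daemon is available (the image name and tags are arbitrary choices for the example):

```shell
# Build the image from the Dockerfile in the current directory.
docker build -t registration-service:1.0 .

# After editing only the source code, rebuild: Docker reuses the cached
# dependency layers and re-executes only the layers that changed.
docker build -t registration-service:1.1 .
```

The second build is typically much faster than the first, because only the layers below the changed one are rebuilt.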

Lastly, we can create a Docker container, a runnable instance of our Docker image, by issuing the docker run command. A Docker container encapsulates our microservice and makes it available on all platforms, irrespective of the underlying environment, underlining Docker’s ‘build once, run anywhere’ motto. We can then run multiple parallel copies of this Docker image on a single system or on a cluster of machines.
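Running parallel copies can be sketched as follows, assuming an image built locally under the hypothetical name `registration-service` whose process listens on port 8000 inside the container:

```shell
# Start one container in the background, mapping host port 8000
# to the port the service listens on inside the container.
docker run -d --name registration-1 -p 8000:8000 registration-service:1.0

# A second parallel instance of the same image on another host port.
docker run -d --name registration-2 -p 8001:8000 registration-service:1.0

# List the running containers.
docker ps
```

Each container gets its own isolated filesystem and network namespace, so the two instances do not interfere with each other.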

Containerization makes it easy to scale our application: we can dynamically increase or decrease the number of running instances of our microservice based on demand. What makes this scaling so efficient and powerful, compared to monolithic applications, is that we can scale each microservice independently. Suppose there is a sudden spike in traffic to the user registration module of our monolithic application. To handle the new load, we must spawn new instances of the entire application, each with a large footprint in terms of both memory and CPU usage. With an application built from microservices, we only have to scale up instances of the user registration service, without touching other services. This ability to fine-tune our production environment is a huge blessing for everyone, especially the DevOps team.
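With Docker Compose, for instance, this per-service scaling is a one-liner; a sketch assuming a hypothetical compose file that defines a `registration` service alongside others:

```shell
# Scale only the registration service to four instances,
# leaving every other service in the compose file untouched.
docker compose up -d --scale registration=4
```

At larger scale the same idea is handled by an orchestrator, but the principle is identical: each service scales on its own.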

This brings us to the end of this primer on microservice architecture. We discussed the limitations of traditional monolithic applications, which have dominated enterprise development culture for a long time. Then we introduced microservice architecture as a valid alternative for managing the complexities of software design and development. Finally, we elaborated on how Docker can help us build, deploy and scale microservices using a container-based approach.

Originally published at blog.adityadixit.me.