An application built as a set of multiple smaller service components is said to use Microservices. Compared to the traditional monolithic approach, a Microservice Architecture treats each microservice as a standalone entity/module, which makes its code and related infrastructure easier to maintain. Each microservice of an application can be written in a different technology stack, and can further be deployed, optimized and managed independently.
In theory, a Microservice Architecture specifically benefits the build of complex, large-scale applications; however, it is also widely used for small-scale applications (for example, a simple shopping cart), often with an eye to scaling further.
A modern cloud-native application running on a Microservice Architecture relies on the following critical components -
The above three are the most important components of a Microservice Architecture; they allow applications in a cloud-native stack to scale under load and to keep performing even during partial failures of the cloud environment.
A large application, when broken down into multiple microservices, each using a different technology stack (language, database, etc.) and requiring its own environment, forms a complex architecture to manage. Docker containerization helps to manage and deploy individual microservices by breaking the application into processes that run in separate Docker containers, but inter-service communication remains critically complicated: you still have to deal with overall system health, fault tolerance and multiple points of failure.
Let us understand this by looking at how a shopping cart works on a Microservice Architecture. The microservices here would relate to the inventory database, the payment gateway service, the product suggestion algorithm based on the customer's access history, and so on. While each of these services is theoretically a stand-alone mini-module, they do need to interact with each other. It is this service-to-service communication that makes microservices work.
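To make the interaction concrete, here is a minimal, hypothetical sketch of one such call: the cart service asking an inventory service whether a product is in stock, over plain HTTP. The service name, port and URL path are assumptions made purely for illustration.

```python
# Hypothetical example: the cart service asks the inventory service whether
# an item is in stock. Service name, port and path are illustrative only.
import json
import urllib.request

INVENTORY_URL = "http://inventory-service:8080/stock"  # assumed address

def is_in_stock(product_id: str) -> bool:
    """Call the inventory service directly and parse its JSON response."""
    with urllib.request.urlopen(f"{INVENTORY_URL}/{product_id}", timeout=2) as resp:
        payload = json.loads(resp.read())
    return payload.get("available", False)
```

Note that the calling service now owns concerns like timeouts, retries and failure handling; multiply that by every pair of services that talk to each other, and the communication logic quickly becomes the hardest part of the system to keep healthy.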
Now that you know the importance of service-to-service communication in a Microservice Architecture, it becomes apparent that the communication channel must remain fault-tolerant, secure, highly available and robust. This is where a Service Mesh comes in as an infrastructure component: it ensures controlled service-to-service communication by deploying multiple service proxies. A Service Mesh is responsible for fine-tuning communication among different services rather than adding new functionality.
In a Service Mesh, deploying a proxy alongside each individual service to enable inter-service communication is widely known as the Sidecar Pattern. The sidecars (proxies) can be designed to handle any functionality critical to inter-service communication, such as load balancing, circuit breaking and service discovery; a minimal sketch of the idea follows below.
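The sketch below is purely conceptual (it is not how a production proxy such as Envoy is implemented): a tiny sidecar-style proxy listens next to the application and forwards each request to an upstream service, which is exactly the point where load balancing, retries or circuit breaking can be layered in. The ports and upstream addresses are assumptions.

```python
# Conceptual sidecar proxy sketch: listen locally, forward to an upstream
# service instance, and retry once on failure. Real sidecars (e.g. Envoy)
# are far more capable; ports and addresses here are illustrative only.
import random
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM_INSTANCES = ["http://inventory-1:8080", "http://inventory-2:8080"]  # assumed

class SidecarHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        upstream = random.choice(UPSTREAM_INSTANCES)  # naive load balancing
        for attempt in range(2):                      # one retry on failure
            try:
                with urllib.request.urlopen(upstream + self.path, timeout=2) as resp:
                    body = resp.read()
                self.send_response(resp.status)
                self.end_headers()
                self.wfile.write(body)
                return
            except OSError:
                upstream = random.choice(UPSTREAM_INSTANCES)
        self.send_response(503)  # all attempts failed
        self.end_headers()

if __name__ == "__main__":
    # The application talks to this proxy on localhost instead of the upstream.
    HTTPServer(("127.0.0.1", 15001), SidecarHandler).serve_forever()
```

Because every request passes through the sidecar, this is where mesh-wide behavior can be enforced without touching the application code itself.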
Through a Service Mesh, you can -
Business Logic
This contains the core application logic and the underlying code of a microservice. The business logic also holds the application's computation as well as the service-to-service integration logic. Thanks to the Microservice Architecture, the business logic can be written on any platform and remains completely independent of the other services.
Primitive Network Functions
This includes the basic network functions a microservice uses to initiate a network call and connect with the service mesh sidecar proxy. Though the major network functions among microservices are handled by the Service Mesh, a given service must still contain the basic network functions needed to reach its sidecar proxy, as sketched below.
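In practice this usually means the service addresses its outbound calls to the sidecar running next to it (typically on localhost) rather than to the remote service directly. A tiny hypothetical illustration, where the local proxy address is an assumption:

```python
# The only "network function" the service itself needs: a plain HTTP call to
# its local sidecar proxy. The proxy decides where the request actually goes.
import urllib.request

SIDECAR = "http://127.0.0.1:15001"  # assumed local sidecar address

def call_inventory(product_id: str) -> bytes:
    with urllib.request.urlopen(f"{SIDECAR}/stock/{product_id}", timeout=2) as resp:
        return resp.read()
```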
Application Network Functions
Unlike the primitive network functions, this component maintains and manages critical network functions through the service proxy, including circuit breaking, load balancing, service discovery, etc.
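As an illustration of one such function, here is a minimal circuit-breaker sketch of the kind a service proxy applies on behalf of the application. The thresholds and timings are arbitrary assumptions, not Istio defaults.

```python
# Minimal circuit-breaker sketch: after too many consecutive failures the
# circuit "opens" and calls fail fast until a cool-down period has passed.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_after: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # cool-down elapsed, try the upstream again
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success resets the failure count
        return result
```

Failing fast like this protects the rest of the mesh from piling requests onto a service that is already struggling.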
Service Mesh Control Plane
All service mesh proxies are centrally managed and controlled by the Control Plane. Through the Control Plane, you can specify authentication policies, enable metrics generation, and configure service proxies across the mesh.
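Conceptually, the control plane holds the desired configuration and pushes it out to every sidecar, which then enforces it on the data path. The toy sketch below shows only that relationship; the field names are invented for illustration and real control planes use their own APIs.

```python
# Toy control-plane sketch: one central configuration, pushed to every
# sidecar proxy in the mesh. Field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class MeshPolicy:
    mutual_tls: bool = True            # require mTLS between services
    request_timeout_s: float = 2.0     # per-request timeout
    metrics_enabled: bool = True       # emit telemetry from each proxy

@dataclass
class SidecarProxy:
    service_name: str
    policy: MeshPolicy = field(default_factory=MeshPolicy)

    def apply(self, policy: MeshPolicy) -> None:
        self.policy = policy           # the proxy reconfigures itself

class ControlPlane:
    def __init__(self):
        self.proxies = []

    def register(self, proxy: SidecarProxy) -> None:
        self.proxies.append(proxy)

    def push(self, policy: MeshPolicy) -> None:
        for proxy in self.proxies:     # distribute config across the mesh
            proxy.apply(policy)
```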
While there are several other service mesh implementations, Istio is the most popular, so we will explore how it can be used to implement a Service Mesh architecture for a cloud-native application.
As explained in the sections above, Istio implements a Service Mesh for a Microservice Architecture by forming an infrastructure layer that connects, secures and controls communication among distributed services. Istio deploys a proxy (called an Istio sidecar) next to each service, with few or no code changes to the service itself. All inter-service traffic is directed through the Istio proxy, which uses policies to control inter-service communication and to implement essentials such as deployment strategies, fault injection and circuit breaking.
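As one example of such a policy, fault injection deliberately delays or aborts a fraction of requests so you can test how the rest of the system copes. The sketch below shows the idea in plain Python; it is not Istio's configuration format, and the percentages are arbitrary assumptions.

```python
# Conceptual fault-injection sketch: the proxy delays or aborts a configured
# fraction of requests before forwarding them. This mimics the idea behind
# Istio's fault injection but is not its actual configuration format.
import random
import time

FAULT_POLICY = {"delay_percent": 10, "delay_seconds": 2.0, "abort_percent": 5}  # assumed

def forward_with_faults(forward, request):
    """Apply the fault policy, then hand the request to the real forwarder."""
    roll = random.uniform(0, 100)
    if roll < FAULT_POLICY["abort_percent"]:
        return 503, b"injected fault: aborted"            # simulate an upstream failure
    if roll < FAULT_POLICY["abort_percent"] + FAULT_POLICY["delay_percent"]:
        time.sleep(FAULT_POLICY["delay_seconds"])          # simulate a slow upstream
    return forward(request)                                # normal path
```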
Istio, being platform-independent, can run in a variety of environments, including cloud, on-premise, Kubernetes, and more. Istio currently supports:
Core Istio Components (Image Source - istio.io)
An Istio service mesh consists of a data plane and a control plane.
The control plane manages and maintains the components of the data plane, and therefore forms the most important layer of the Istio Service Mesh.
In this article, we got an understanding of how a Service Mesh is critical to the implementation of a Microservice Architecture, and how Istio serves that purpose.
Taking a step further, in the next article we will go through the steps involved in installing Istio on different platforms, including Kubernetes.