
Understanding Microservices communication and Service Mesh

Microservices architecture (aka MSA) is reaching a point where handling the complexity of the system has become vital to reap the benefits that the very same architecture brings to the table. Implementing microservices is not a big deal given the comprehensive set of frameworks available (like Spring Boot, Dropwizard, Node.js, MSF4J, etc.). The deployment of microservices is also well covered by containerized deployment tools like Docker, Kubernetes, Cloud Foundry, OpenShift, etc.

The real challenge with a microservices-style implementation is inter-microservice communication. We have now gone back to the age of the spaghetti deployment architecture, where service-to-service communication happens in a point-to-point manner. Enterprises have come a long way from that position through ESBs, API gateways, edge proxies, etc., and it is not practical to go back to that point in history. This requirement of inter-microservice communication was not an afterthought. The method proposed at the beginning of the microservices wave was to use

  • Smart endpoints and
  • Dumb pipes

Developers have used message brokers as the communication channel (the dumb pipe) while making the microservice itself a smart endpoint. Even though this concept worked well for early microservices implementations, as implementations grew in scale and complexity it was no longer enough to handle inter-microservice communication.

This is where the concept of the service mesh came into the picture, with a set of features that are required for pragmatic microservices implementations. A service mesh can be considered the evolution of concepts like the ESB, the API gateway and the edge proxy from the monolithic SOA world into the microservices world.


The service mesh follows the sidecar design pattern, where each service instance has its own sidecar proxy which handles the communication with other services. Service A does not need to be aware of the network or its interconnections with other services. It only needs to know about the existence of the sidecar proxy and do all its communication through it.
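To make the pattern concrete, here is a minimal sketch in Go of how Service A might call Service B through its local sidecar. The sidecar port (15001) and the convention of carrying the logical service name in the Host header are assumptions made for illustration only, not the behaviour of any particular mesh implementation.

```go
// Hypothetical sketch: Service A calls Service B through its local sidecar.
// The port (15001) and the Host-header convention are illustrative assumptions.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The application never resolves Service B's address itself; it only talks
	// to the sidecar on localhost, which handles discovery, routing and retries.
	req, err := http.NewRequest("GET", "http://localhost:15001/orders/42", nil)
	if err != nil {
		panic(err)
	}
	// The logical target service is conveyed via the Host header; the sidecar
	// maps it to a healthy instance of service-b.
	req.Host = "service-b"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("sidecar call failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```

The important point is that the application code only ever talks to localhost; everything network-related is the sidecar's problem.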

At a high level, a service mesh can be considered a dedicated software infrastructure for handling inter-microservice communication. The main responsibility of the service mesh is to deliver requests from service X to service Y in a reliable, secure and timely manner. Functionality-wise, this is somewhat similar to the ESB, which interconnects heterogeneous systems for message communication. The difference here is that there is no centralized component, but rather a distributed network of sidecar proxies.

A service mesh is analogous to the TCP/IP network stack at a functional level. In TCP/IP, bytes (network packets) are delivered from one computer to another via the underlying physical layer, which consists of routers, switches and cables, and the stack has the ability to absorb failures and make sure that messages are delivered properly. Similarly, a service mesh delivers requests from one microservice to another on top of the mesh network, which itself runs on top of an unreliable underlying network.
Even though there are similarities between TCP/IP and the service mesh, the latter demands much more functionality within a real enterprise deployment. Given below is a list of functionalities expected from a good service mesh implementation.

  • Eventually consistent service discovery
  • Latency aware load balancing
  • Circuit breaking/ Retry/ Timeout (deadlines)
  • Routing
  • Authentication and Authorization (Security)
  • Observability

There can be more features than this list, but any framework that offers the above can be considered a good one. These functionalities are executed at the sidecar proxy, which sits directly alongside the microservice. The sidecar proxy can live inside the same container or within a separate container.
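As an illustration of how a sidecar might enforce the resilience features above, here is a simplified, self-contained Go sketch combining a deadline, bounded retries and a naive circuit breaker. The thresholds, port and timings are made-up values for demonstration; real proxies such as Envoy implement far more sophisticated versions of these policies.

```go
// Illustrative sketch (not any specific proxy's code) of how a sidecar might
// apply a deadline, retries and a simple circuit breaker around an upstream call.
package main

import (
	"context"
	"errors"
	"fmt"
	"net/http"
	"time"
)

type circuitBreaker struct {
	failures  int
	threshold int
	openUntil time.Time
}

func (cb *circuitBreaker) allow() bool { return time.Now().After(cb.openUntil) }

func (cb *circuitBreaker) record(err error) {
	if err == nil {
		cb.failures = 0
		return
	}
	cb.failures++
	if cb.failures >= cb.threshold {
		// Trip the breaker: fail fast for a cool-down period.
		cb.openUntil = time.Now().Add(5 * time.Second)
		cb.failures = 0
	}
}

// callWithPolicies wraps an upstream request with a deadline, bounded retries
// and circuit breaking, roughly the policies a sidecar proxy enforces.
func callWithPolicies(cb *circuitBreaker, url string) error {
	if !cb.allow() {
		return errors.New("circuit open: failing fast")
	}
	// The deadline bounds the total time spent across all retries.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
		resp, err := http.DefaultClient.Do(req)
		if err == nil && resp.StatusCode < 500 {
			resp.Body.Close()
			cb.record(nil)
			return nil
		}
		if resp != nil {
			resp.Body.Close()
		}
		lastErr = fmt.Errorf("attempt %d failed (err=%v)", attempt, err)
		time.Sleep(100 * time.Millisecond)
	}
	cb.record(lastErr)
	return lastErr
}

func main() {
	cb := &circuitBreaker{threshold: 3}
	fmt.Println(callWithPolicies(cb, "http://localhost:15001/inventory"))
}
```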

With more and more features being added to the service mesh architecture, it became evident that there should be a mechanism to configure these capabilities through a centralized or common control plane. This is where the concepts of the “data plane” and the “control plane” come into the picture.

Data plane


At a high level, the responsibility of the “data plane” is to make sure that requests are delivered from microservice X to microservice Y in a reliable, secure and timely manner. So the functionalities like

  • Service discovery
  • Health checking
  • Routing
  • Load balancing
  • Security
  • Monitoring

are all parts of the data plane functionality.
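The following toy example shows, in Go, the kind of work a data-plane sidecar does on every request: it looks up the target service (service discovery, here a hard-coded table standing in for data a control plane would supply), picks an instance (round-robin load balancing) and forwards the call (routing). The addresses and the listener port are invented for this sketch.

```go
// Toy data-plane sketch: a sidecar-style reverse proxy doing service discovery
// (a static table here), round-robin load balancing and routing.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

// A real sidecar would learn these endpoints from the control plane; they are
// hard-coded here to keep the example self-contained.
var endpoints = map[string][]string{
	"service-b": {"http://10.0.0.5:8080", "http://10.0.0.6:8080"},
}

var rr uint64

func pickUpstream(service string) (*url.URL, bool) {
	instances, ok := endpoints[service]
	if !ok || len(instances) == 0 {
		return nil, false
	}
	// Round-robin load balancing across the known instances.
	n := atomic.AddUint64(&rr, 1)
	u, err := url.Parse(instances[n%uint64(len(instances))])
	return u, err == nil
}

func main() {
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Route on the logical service name carried in the Host header.
			upstream, ok := pickUpstream(req.Host)
			if !ok {
				return // a real proxy would return 503 here
			}
			req.URL.Scheme = upstream.Scheme
			req.URL.Host = upstream.Host
		},
	}
	// The application talks only to this local listener.
	log.Fatal(http.ListenAndServe("127.0.0.1:15001", proxy))
}
```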

Control plane

Even though the above-mentioned functionalities are provided within the data plane on the sidecar proxy, the actual configuration of these functionalities is done within the control plane. The control plane takes all the stateless sidecar proxies and turns them into a distributed system. If we carry the TCP/IP analogy forward, the control plane is similar to configuring the switches and routers so that TCP/IP works properly on top of them. In a service mesh, the control plane is responsible for configuring the network of sidecar proxies. Control plane functionalities include configuring the following (a small sketch of pushing such configuration to the proxies appears after the list)

  • Routes
  • Load Balancing
  • Circuit Breaker / Retry / Timeout
  • Deployments
  • Service Discovery
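As a purely conceptual sketch, the Go snippet below shows a control plane distributing one policy document (routes, load-balancing strategy, retries, timeouts) to its sidecar proxies. The configuration shape and the proxies' admin endpoint are assumptions invented for this example and do not reflect the API of Istio, Nelson or any other control plane.

```go
// Conceptual sketch of a control plane pushing policy to its sidecar proxies.
// The config shape and the proxies' admin endpoint are illustrative assumptions.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// ProxyConfig is the policy the control plane wants every sidecar to enforce.
type ProxyConfig struct {
	Routes       map[string][]string `json:"routes"`        // service -> endpoints (discovery)
	LoadBalancer string              `json:"load_balancer"` // e.g. "round_robin"
	Retries      int                 `json:"retries"`
	TimeoutMs    int                 `json:"timeout_ms"`
}

func main() {
	cfg := ProxyConfig{
		Routes: map[string][]string{
			"service-b": {"10.0.0.5:8080", "10.0.0.6:8080"},
		},
		LoadBalancer: "round_robin",
		Retries:      3,
		TimeoutMs:    2000,
	}
	payload, _ := json.Marshal(cfg)

	// Push the same policy to every known sidecar; the sidecars themselves stay
	// stateless and simply apply whatever configuration they were last given.
	sidecars := []string{"http://10.0.0.5:15000/config", "http://10.0.0.6:15000/config"}
	for _, addr := range sidecars {
		resp, err := http.Post(addr, "application/json", bytes.NewReader(payload))
		if err != nil {
			fmt.Println("push failed:", addr, err)
			continue
		}
		resp.Body.Close()
		fmt.Println("pushed config to", addr)
	}
}
```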

The following figure explains the functionality of the data plane and the control plane well.

To summarize what we have discussed above,

  • The data plane touches every request passing through the system and executes functionalities like discovery, routing, load balancing, security and observability.
  • The control plane provides the policies and configuration for all the data planes and turns them into a distributed network.

Here are some of the projects which have implemented these concepts.

Data planes - Linkerd, NGINX, Envoy, HAProxy

Control planes - Istio, Nelson

Even though these are categorized into two sections, some frameworks have functionality related to both the data plane and the control plane.
