ServiceMesh Microservice Design

ServiceMesh

ServiceMesh is usually translated literally as “service grid”, or service mesh. Often described as the most popular “dynamic linker” of microservices in a distributed system architecture, it is a dedicated infrastructure layer for service-to-service communication. This layer is independent of the applications and services themselves and provides lightweight, reliable delivery of requests. Put simply, it can be compared to TCP/IP between applications or microservices: it is responsible for network calls, rate limiting, circuit breaking, and monitoring between services. With a ServiceMesh, services no longer need to handle these concerns themselves; what used to be implemented inside the application or by frameworks such as Spring Cloud can now be handed over to the ServiceMesh. The rise of ServiceMesh is largely driven by application virtualization technologies such as Kubernetes and Kato, which greatly reduce the complexity of deploying, operating, and maintaining applications.

(Figure: Microservice architecture comparison)

Why Use ServiceMesh

ServiceMesh does not bring us fundamentally new features; it solves problems that other tools have already solved. What it changes is that, in a Cloud Native environment, the complex manual operation and maintenance work of the past is managed in an organic, automated way.

Under the traditional MVC three-tier web application architecture, communication between services is not complicated and can be managed within the application itself. In today’s large and complex websites, however, the monolithic application is decomposed into numerous microservices, and the dependencies and communication between services become very complicated. “Fat client” libraries emerged to handle this, such as Finagle developed by Twitter, Hystrix developed by Netflix, and Stubby from Google. These can be regarded as early forms of ServiceMesh, but they are all tied to specific environments and specific development languages and cannot serve as platform-level ServiceMesh support.
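
To make the contrast concrete, below is a minimal, hand-rolled sketch in Go of what this “fat client” style looks like inside an application: a tiny circuit breaker wrapped around an HTTP call. The breaker type, thresholds, and the user-service URL are illustrative assumptions, not the API of Hystrix, Finagle, or Stubby; with a ServiceMesh, this kind of code moves out of the application and into the mesh layer.

```go
// A hedged sketch of the "fat client" approach: before ServiceMesh, each
// application embedded library code like this to get timeouts and circuit
// breaking. The names and thresholds are illustrative only.
package main

import (
	"errors"
	"net/http"
	"sync"
	"time"
)

// breaker is a minimal circuit breaker: after maxFails consecutive failures
// it rejects calls for a cooldown period, then allows traffic again.
type breaker struct {
	mu        sync.Mutex
	fails     int
	maxFails  int
	openUntil time.Time
	cooldown  time.Duration
}

func (b *breaker) call(url string) (*http.Response, error) {
	b.mu.Lock()
	if time.Now().Before(b.openUntil) {
		b.mu.Unlock()
		return nil, errors.New("circuit open: upstream considered unhealthy")
	}
	b.mu.Unlock()

	client := &http.Client{Timeout: 2 * time.Second} // per-call timeout in app code
	resp, err := client.Get(url)

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.fails++
		if b.fails >= b.maxFails {
			b.openUntil = time.Now().Add(b.cooldown)
			b.fails = 0
		}
		return nil, err
	}
	b.fails = 0
	return resp, nil
}

func main() {
	b := &breaker{maxFails: 3, cooldown: 10 * time.Second}
	// Hypothetical upstream; the call simply fails if it does not exist.
	if resp, err := b.call("http://user-service/api/profile"); err == nil {
		resp.Body.Close()
	}
}
```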

Under the Cloud Native architecture, containers make heterogeneous applications far more feasible, and the horizontal scaling capability of Kubernetes strengthens applications, letting users quickly assemble applications with complex environments and complex dependencies. At the same time, developers no longer need to worry about the tedious work of application monitoring, scalability, service discovery, load balancing, and distributed tracing; they can focus on program development, which leaves more room for creativity. If you face the following scenarios, the ServiceMesh architecture is recommended:

  1. A large legacy system is gradually transitioning to a microservice architecture
  2. The business system is developed in multiple programming languages

ServiceMesh’s Advantages Over Other Microservice Architectures

Maximum Transparency

ServiceMesh controls the calling relationships between services and the service governance policies through a global control layer. The service itself only needs to consider communication with its upstream dependencies from a single-instance point of view, using simple communication protocols such as HTTP or gRPC. The Mesh layer transparently handles upstream target discovery, retries and timeouts, monitoring, and tracing, giving distributed-system capabilities to stand-alone services.
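
As a minimal sketch of this transparency, the following Go snippet shows all the application needs to do once a mesh is in place: issue a plain HTTP request to a logical service name. The service name order-service and the URL path are assumptions for illustration; discovery, load balancing, retries, and tracing are assumed to happen in the sidecar, not in this code.

```go
// Minimal sketch: with a ServiceMesh, the application calls its upstream
// dependency with a plain HTTP request. Service discovery, retries,
// timeouts, and tracing are handled transparently by the mesh layer.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// A short client timeout is still good practice; everything else
	// (upstream discovery, retry policy, circuit breaking) lives in the mesh.
	client := &http.Client{Timeout: 3 * time.Second}

	// "order-service" is a hypothetical logical service name resolved and
	// load-balanced by the mesh; the app never sees concrete instances.
	resp, err := client.Get("http://order-service/api/v1/orders/42")
	if err != nil {
		log.Fatalf("call upstream: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
}
```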

Low Learning Cost

In the past, we had to design and build a complete microservice architecture, for example with SpringCloud or Dubbo, which inevitably meant changing traditional programming habits and learning a complex architecture framework. SpringCloud, for instance, contains more than ten components, most of which are not directly related to the business itself; this is a high threshold for most business developers. With the ServiceMesh architecture, thanks to its maximum transparency, developers hardly need to learn any additional framework technology unrelated to the business, which greatly reduces the learning cost.

Flexible Structure

Teams differ in composition: some members may work in different development languages, and there may already be mature business systems running in production. If such a system needs to move to a microservice architecture to support a larger number of users, adopting SpringCloud would inevitably mean refactoring or even rewriting it. Facing both present reality and the future, we need to implement the microservice architecture step by step, develop each service in the language best suited to it, and even combine multiple lightweight architecture models, such as Dubbo, SpringBoot, and the LNMP architecture.

ServiceMesh Architecture Performance

Some people have suggested that adding two proxy hops between services will have a significant impact on performance. Performance should be looked at as a whole and analyzed from the following aspects:

  1. The response overhead added by a proxy layer exists in every microservice architecture, not only in ServiceMesh.
  2. The network proxy layer of a ServiceMesh is generally implemented with lightweight, efficient proxies, so its performance is usually good (a toy illustration of such a proxy follows after this list).
  3. To provide richer management features, a ServiceMesh generally works at the application layer, parsing protocols such as HTTP, gRPC, MongoDB, and MySQL. If performance requirements are high, the layer-4 network model can be used directly instead.
  4. ServiceMesh is generally aimed at medium and large distributed systems, where communication itself is a major cost. The Mesh layer can actually optimize service communication, for example by carrying traffic that services speak as HTTP/1.1 over a more efficient protocol such as HTTP/2.
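
As a rough illustration of points 2 and 3, the following Go snippet is a toy layer-7 sidecar-style proxy built on the standard library’s httputil.ReverseProxy. It is not how Envoy or istio-proxy are implemented; the listen address 127.0.0.1:15001 and the upstream address are assumptions. The point is only that an application-layer proxy can observe each request (method, path, status) while adding a single local hop.

```go
// Toy layer-7 sidecar proxy sketch (not how Envoy works): it receives local
// traffic from the application and forwards it to an assumed upstream.
// Real mesh proxies add retries, metrics, and protocol upgrades at this hop.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical upstream address, as if resolved by a control plane.
	upstream, err := url.Parse("http://10.0.0.12:8080")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(upstream)

	// Because the proxy works at layer 7, it can inspect each request and
	// export metrics or traces here before forwarding it.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("proxying %s %s", r.Method, r.URL.Path)
		proxy.ServeHTTP(w, r)
	})

	// The application talks to localhost instead of the upstream directly.
	log.Fatal(http.ListenAndServe("127.0.0.1:15001", handler))
}
```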

Does ServiceMesh Only Manage the Network?

The ServiceMesh architecture framework provides a series of service management functions at the network communication level, including:

  • Service discovery and load balancing
  • Advanced routing
  • Communication monitoring and analysis
  • Communication security

Kato’s architecture design also adds the following functions through plug-in extensions:

  • Log processing
  • Data backup and recovery
  • Service operation and monitoring
  • Service operating environment guarantee

Kato and ServiceMesh

Kato natively provides a full set of ServiceMesh governance functions and also offers a plug-in extension strategy, so users can implement custom plug-ins beyond the default solutions. The implementations of Kato and Istio have things in common, and they also have natural differences.

What they have in common is that both implement a global control layer based on the xDS specification and support envoy and istio-proxy.

The difference is that Istio needs to run on a platform such as Kubernetes, and support for the microservice architecture has to be considered all the way from underlying storage and communication up to application-layer configuration; a large-scale microservice architecture cannot do without a PaaS platform that manages applications automatically. Kato implements its control logic separately at the hardware layer, the communication layer, and the platform layer; it is compatible with existing microservice architectures and provides a complete ServiceMesh microservice architecture practice. This inclusive form of architecture turns existing applications into services.

The experience Kato provides to users is maximum transparency: services running on Kato already form a microservice architecture, so users do not have to first learn microservice architecture theory, then figure out how to transform their own services, and only then go to production.

As shown in the figure below, Kato’s network governance plug-in uses the Sidecar approach to dynamically start third-party plug-in services, such as the envoy service, in the same network namespace, the same storage space, and the same environment-variable space as the application. Through the Kato application runtime, data is exchanged between the application space and the platform space, which allows the platform to control application communication.
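
For readers familiar with Kubernetes, the following Go sketch illustrates the general Sidecar idea described above, using Kubernetes API types: an application container and a proxy container (such as envoy) declared in the same Pod share one network namespace and can exchange traffic over localhost. This is only an illustration of the pattern, not Kato’s internal implementation; the image names and ports are assumptions.

```go
// Illustrative Sidecar sketch using Kubernetes API types (not Kato's
// internal mechanism): two containers in one Pod share the network
// namespace, so the proxy can intercept and forward the app's traffic.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-service"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					// The business application, listening on its own port.
					Name:  "app",
					Image: "example/demo-service:latest", // hypothetical image
					Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
				},
				{
					// The mesh proxy injected as a sidecar; it shares the
					// Pod's network namespace with the app container.
					Name:  "mesh-proxy",
					Image: "envoyproxy/envoy:v1.28-latest", // assumed tag
					Ports: []corev1.ContainerPort{{ContainerPort: 15001}},
				},
			},
		},
	}

	for _, c := range pod.Spec.Containers {
		fmt.Printf("container %s -> image %s\n", c.Name, c.Image)
	}
}
```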