This article is intended for application developers and operations personnel.
Service registration and service discovery are core concepts of microservice governance: communication between microservices relies on the combination of these two mechanisms. We begin this series on communication between Kato components with service registration and service discovery because every component on the Kato platform is governed as a microservice — a component is a service, so communication between components is communication between microservices. If you are not yet familiar with microservices, service registration and discovery may sound like complicated concepts. In Kato, we hide all of that complexity and provide you with the simplest possible communication model between components.
Next, we will explore how Kato components communicate through a hands-on task:
- Deploy component A from the Java demo source code (see the component creation documentation).
- Deploy a MySQL database component B from the cloud application market.
After deploying components A and B, visit component A and switch to the MySQL page. You will find that the page reports a failed database connection. At this point you may wonder: how does component A connect to database component B? Only two steps are required:
- Edit the dependency: go to the application view / application topology page, click "Switch to edit mode" to put the topology map into edit mode, and drag a connection from the anchor point of component A to component B. A prompt box will pop up reminding you to update component A.
- Update the component: confirm the update, wait for it to complete, and revisit the MySQL page of component A.
This time you will see the database connection information printed on the page, including the address and credentials (our demo program displays the database password for demonstration purposes only — do not do this in a real scenario). If the database contains tables, their information is displayed as well, indicating that components A and B are communicating successfully.
Understanding the Principle
Do you have questions about the procedure above? Why can the components communicate once they are connected? Where does the connection information come from? How is this implemented in code? Next we will analyze the mechanism behind the whole process.
Understand the Current Situation
In the traditional deployment model, whether on physical machines or virtual machines, a component that needs to communicate with another must know the fixed address of its communication target and write it into a configuration file or into the code. For example, a web service that connects to a database needs to know the database's host address and port. In a container-based environment, however, a service's address generally changes with every deployment, so we can no longer hard-code component addresses the way we used to.
In a native Kubernetes environment, the Service resource type is defined to solve the problem of service access. We access components through the Service's name or virtual IP address. Behind the scenes, the kube-proxy system component establishes a proxy layer on each node, implemented with either iptables or ipvs. Because a Service's name can be decided in advance, it can be pre-defined in code or configuration files. Kubernetes' approach, however, requires users to understand Kubernetes internals well enough to follow this process and create the corresponding resources — clearly too complicated for users unfamiliar with Kubernetes.
Kato runs on Kubernetes, so its implementation model builds heavily on Kubernetes technologies, but it differs in important ways. We reduce inter-component communication to its essence: telling the initiator the address of the target it needs to talk to. We therefore introduce the concept of dependency: the user explicitly declares the communication relationship between components — if A needs to request B, then A depends on B. Described in other words, this is service discovery: the dependency informs the platform that A needs to communicate with B; component A must be given the ability to find component B's address, and that address must be delivered back to component A. All of this is incidental to the business logic, and its complexity must not be pushed onto the developer. Kato proposed the Sidecar proxy four years ago to solve basic network governance concerns such as service discovery and load balancing between components.
A component with upstream dependencies automatically has the default communication management plug-in (an Envoy implementation) injected at startup, using the default configuration. The plug-in discovers the relevant configuration from the control plane API (the standard Envoy service discovery API), listens on the corresponding ports, and makes them available for the component's business code to call. The Sidecar plug-in and the business component share the same network namespace (the same Pod), so the communication address is the local loopback address 127.0.0.1, and the port matches the port configured on the target component. Therefore, for component A in the example above, the address of the dependent service is deterministic, for example 127.0.0.1:8080. This model is also very convenient in development scenarios: most code already talks to its dependencies at 127.0.0.1, so no changes are needed at deployment time. If the dependent upstream service has multiple instances, the Sidecar plug-in performs load balancing or dynamic routing according to its configuration.
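The effect of the Sidecar model can be sketched as follows: the business code always dials the local loopback address, and only the target component's port varies. This is a minimal illustration, assuming the MySQL demo component exposes its default port 3306; the variable name `MYSQL_PORT` is an assumption for the example, not a platform guarantee.

```python
import os

# With the Sidecar proxy injected, every dependent service is reachable
# on the local loopback address; the port matches the port configured on
# the target component (3306 is assumed for the MySQL demo component).
SIDECAR_HOST = "127.0.0.1"
MYSQL_PORT = int(os.environ.get("MYSQL_PORT", "3306"))  # assumed variable name

def mysql_endpoint():
    """Return the (host, port) pair the business code should dial."""
    return (SIDECAR_HOST, MYSQL_PORT)

# The same endpoint works unchanged in local development and on the
# platform, because the Sidecar forwards 127.0.0.1:3306 to component B.
print(mysql_endpoint())
```

Because the endpoint is always local, the code needs no environment-specific branching between development and production.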
In addition, service registration is a prerequisite for service discovery to take effect. In Kato, a service must be registered explicitly: in the component's port management settings, set the port's open scope. Opening a port for internal service enables communication between components; opening it for external service makes the component accessible through the gateway.
In summary, to enable communication between components, users only need to establish a dependency between the two parties in the direction of the communication; the platform takes care of everything else. From the business code's point of view, the communication target simply exists locally (127.0.0.1).
Why can't I find the component I want to depend on when establishing a dependency?
If the target component has been deployed normally on the platform (or the third-party component has been created normally), the most likely reason you cannot find it is that its port has not been opened for _internal service_. Opening a component's port for internal service is in fact the *service registration* step — registration must happen before discovery.
Do ports conflict when depending on multiple components?
According to the principle described above, if you depend on multiple components that use the same port, the ports will conflict in the current component's network namespace. There are two ways to solve this:
- Have every component read the environment variable PORT to decide which port to listen on, so the platform can change each component's listening port; then assign a different port to each component on the platform.
- If they are all HTTP services, you can enable the network management plug-in in place of the default plug-in and use domain names to distinguish the components, enabling port reuse. See Dynamic routing of network management between components.
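The first option above can be sketched as follows: the component binds its listener to whatever port the platform injects via the PORT environment variable. The fallback of 5000 is an arbitrary default for local development, not a platform convention.

```python
import os
import socket

# Minimal sketch: read the listening port from the PORT environment
# variable so the platform can assign each component a non-conflicting
# port. 5000 is only a local-development fallback.
port = int(os.environ.get("PORT", "5000"))

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", port))
server.listen()
print(f"listening on port {server.getsockname()[1]}")
server.close()
```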
Are there restrictions on the communication protocols between components?
Communication between components currently supports the TCP and UDP protocols, which cover 99.99% of component types. Advanced application-layer governance currently supports the HTTP protocol; support for gRPC, MySQL, MongoDB, Redis, Dubbo, and other protocols is planned.
Can dependencies between components pass configuration?
When a component provides a service to others, it can automatically inject its own connection information (for example, the username, password, and database name of a database service) into the environment of the components that depend on it, so the relying party automatically receives the information it needs. See the document in the next section, Communication Variable Injection.
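On the relying side, consuming the injected information looks like ordinary environment-variable reads. This is a sketch only: the variable names `MYSQL_USER`, `MYSQL_PASSWORD`, and `MYSQL_DATABASE` are assumptions for illustration — the actual names depend on the connection information the providing component defines.

```python
import os

# Sketch of consuming injected connection variables on the relying side.
# The variable names below are assumed for illustration; check what the
# providing component actually injects.
db_config = {
    "host": "127.0.0.1",  # dependencies are always reachable via the Sidecar
    "user": os.environ.get("MYSQL_USER", "admin"),
    "password": os.environ.get("MYSQL_PASSWORD", ""),
    "database": os.environ.get("MYSQL_DATABASE", "demo"),
}
print(db_config["host"])
```

Because the credentials are injected rather than hard-coded, the same image can be promoted across environments without rebuilding.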
The ports of my microservice components are all the same — what should I do?
If you want to use Kato's microservice communication governance mechanism:
- Modify the service's base image so it reads the PORT variable to establish its listener.
- Assign a port to each service component on the platform and set a port alias, for example USER_SERVER or PAY_SERVER.
- Make the code's configuration support variable substitution, and use the variables defined in step 2 as the communication addresses between services.
- Sort out the communication relationships between the components and establish the dependencies.
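The steps above can be sketched as follows: each service resolves its peers' addresses from environment variables derived from the port aliases set in step 2. We assume here that an alias such as USER_SERVER expands to variables like `USER_SERVER_HOST` and `USER_SERVER_PORT`; verify the exact names your platform version injects.

```python
import os

# Sketch: resolve inter-service addresses from port aliases.
# The "{alias}_HOST" / "{alias}_PORT" naming is an assumption for this
# example; the loopback fallback reflects the Sidecar model described above.
def service_url(alias: str, default_port: str) -> str:
    host = os.environ.get(f"{alias}_HOST", "127.0.0.1")
    port = os.environ.get(f"{alias}_PORT", default_port)
    return f"http://{host}:{port}"

user_service = service_url("USER_SERVER", "8081")
pay_service = service_url("PAY_SERVER", "8082")
print(user_service, pay_service)
```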
If you use Dubbo, Spring Cloud, or another microservice architecture with a third-party service registry, you can perform service registration and discovery through that registry instead of Kato's dependency-based communication model and communicate directly. In that case there is no port conflict problem.