Plug-in Design and Development

Performance Analysis Plug-in Support System

A performance analysis plug-in either analyzes a service's performance indicators in bypass mode, or the service itself exposes its performance indicators for the plug-in to collect.

Kato provides a statsd service on each compute node to receive, store, and display the results produced by performance analysis plug-ins; this is a self-defined monitoring system. In the future, performance analysis will evolve into business monitoring plug-ins, which will expose business monitoring data according to the Prometheus metrics specification. The Kato monitoring system will automatically discover these endpoints and collect the monitoring data, which can then be used for visualization, alerting, and automatic scaling.
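Since future business monitoring plug-ins are expected to expose Prometheus-format metrics, a plug-in could publish such an endpoint roughly as follows. This is a minimal sketch assuming the Go Prometheus client library; the metric name, label, and port are illustrative, not part of the Kato specification.

// metrics.go - minimal sketch of a plug-in exposing Prometheus-format metrics.
// The metric name, label, and port below are illustrative assumptions.
package main

import (
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var requestTotal = prometheus.NewCounterVec(
    prometheus.CounterOpts{
        Name: "plugin_http_requests_total",
        Help: "Total requests observed by the plug-in.",
    },
    []string{"code"},
)

func main() {
    prometheus.MustRegister(requestTotal)

    // Expose /metrics so a Prometheus-compatible collector can scrape it.
    http.Handle("/metrics", promhttp.Handler())
    http.ListenAndServe(":9090", nil)
}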

Ingress Network Plug-in Support System

The ingress network plug-in is mainly used for ServiceMesh networking or firewall-style security control. For example, when we deploy a web application, we do not want illegal requests (such as SQL injection) to reach it. In that case, we can install a security plug-in for the web application to inspect and control all requests that access it, much like an ingress controller; for this reason we call this type of plug-in an ingress network plug-in.

Working Principle

When an ingress network plug-in is installed for an application, the plug-in is placed in front of the application. It must listen on a new port allocated by Kato in order to intercept all requests to the application, such as port 8080 in the figure below. Inside the plug-in we can then perform the necessary processing on each received request and forward the processed request to the port the application listens on, such as port 80 in the figure below. The relationship between the ingress network plug-in and the application is shown in the following figure:

The plug-in needs to dynamically discover the configuration from the Kato application runtime. The discovery method is as follows:

Configuration discovery address (composed of environment variables): ${XDS_HOST_IP}:${API_HOST_PORT}${DISCOVER_URL_NOHOST}

# Can be executed in the plugin container
curl ${XDS_HOST_IP}:${API_HOST_PORT}${DISCOVER_URL_NOHOST}

The plug-in configuration structure is as follows:

{
    "base_ports":[
        {
            "service_alias":"gr23cb0c",
            "service_id":"a55f140efae66c46219ccc1e8d23cb0c",
            "port":5000,
            "listen_port":65530,
            "protocol":"http",
            "options":{
                "OPEN":"YES"
            }
        }
    ]
}

To intercept traffic, the plug-in establishes a listener on port 65530 and, after applying its business logic, forwards requests to 127.0.0.1:5000.
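As a concrete illustration, a minimal ingress plug-in could read the discovered configuration and proxy TCP traffic from the allocated listen port to the application's port. The sketch below assumes a Go plug-in, uses the environment variables and JSON structure described above, and abbreviates error handling; it is not the implementation of any existing Kato plug-in.

// ingress_proxy.go - minimal sketch: discover base_ports, then proxy
// listen_port -> 127.0.0.1:port. Illustrative only.
package main

import (
    "encoding/json"
    "fmt"
    "io"
    "net"
    "net/http"
    "os"
)

// basePort mirrors one entry of "base_ports" in the discovered configuration.
type basePort struct {
    ServiceAlias string            `json:"service_alias"`
    ServiceID    string            `json:"service_id"`
    Port         int               `json:"port"`
    ListenPort   int               `json:"listen_port"`
    Protocol     string            `json:"protocol"`
    Options      map[string]string `json:"options"`
}

type discoverConfig struct {
    BasePorts []basePort `json:"base_ports"`
}

func main() {
    // Fetch the configuration from the Kato application runtime discovery address.
    url := fmt.Sprintf("http://%s:%s%s",
        os.Getenv("XDS_HOST_IP"), os.Getenv("API_HOST_PORT"), os.Getenv("DISCOVER_URL_NOHOST"))
    resp, err := http.Get(url)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var cfg discoverConfig
    if err := json.NewDecoder(resp.Body).Decode(&cfg); err != nil {
        panic(err)
    }

    // For each discovered port, listen on listen_port and forward to 127.0.0.1:port.
    for _, p := range cfg.BasePorts {
        go proxy(p.ListenPort, p.Port)
    }
    select {} // block forever
}

// proxy forwards raw TCP traffic; a real plug-in would inspect or filter requests here.
func proxy(listenPort, targetPort int) {
    ln, err := net.Listen("tcp", fmt.Sprintf(":%d", listenPort))
    if err != nil {
        panic(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            continue
        }
        go func(c net.Conn) {
            defer c.Close()
            upstream, err := net.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", targetPort))
            if err != nil {
                return
            }
            defer upstream.Close()
            go io.Copy(upstream, c)
            io.Copy(c, upstream)
        }(conn)
    }
}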

Egress Network Plug-in Support System

Egress network plug-ins are among the most commonly used plug-ins. By default, Kato automatically injects this type of plug-in for services that depend on other services, and it also provides a default network governance plug-in based on Envoy. The egress network plug-in mainly addresses governance requirements when the current service accesses upstream services.

Working Principle

The Kato application runtime provides xDS-compliant services and configuration discovery services, supporting Envoy as well as other plug-in types that implement this specification. A plug-in can also generate its own configuration from the standard configuration information exposed for the native plug-in. As needed, the egress network plug-in can implement dynamic routing, circuit breaking, access logging, distributed tracing, and so on.

Developers designing egress network governance plug-ins need to dynamically discover the configuration from the Kato application runtime. Plug-ins that support the xDS specification can use the xDS API directly for configuration discovery:

Configuration discovery address (composed of environment variables): ${XDS_HOST_IP}:${XDS_HOST_PORT}

For plug-ins that do not support the xDS specification, configuration discovery works as follows. Configuration discovery address (composed of environment variables): ${XDS_HOST_IP}:${API_HOST_PORT}${DISCOVER_URL_NOHOST}

# Can be executed in the plugin container
curl ${XDS_HOST_IP}:${API_HOST_PORT}${DISCOVER_URL_NOHOST}

The plug-in configuration structure is as follows:

{
    "base_services":[
        {
            "service_alias":"gr23cb0c",
            "service_id":"a55f140efae66c46219ccc1e8d23cb0c",
            "port":5000,
            "protocol":"http",
            "options":{
                "OPEN":"YES"
            }
        }
    ]
}
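For a plug-in that does not speak xDS, the discovered base_services list can be consumed in a similar way. The sketch below, again assuming a Go plug-in and the environment variables above, simply fetches and prints the upstream services; a real egress plug-in would use these entries to build its routing, circuit-breaking, or tracing configuration.

// egress_discover.go - minimal sketch: fetch and print base_services. Illustrative only.
package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "os"
)

// baseService mirrors one entry of "base_services" in the discovered configuration.
type baseService struct {
    ServiceAlias string            `json:"service_alias"`
    ServiceID    string            `json:"service_id"`
    Port         int               `json:"port"`
    Protocol     string            `json:"protocol"`
    Options      map[string]string `json:"options"`
}

func main() {
    url := fmt.Sprintf("http://%s:%s%s",
        os.Getenv("XDS_HOST_IP"), os.Getenv("API_HOST_PORT"), os.Getenv("DISCOVER_URL_NOHOST"))
    resp, err := http.Get(url)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var cfg struct {
        BaseServices []baseService `json:"base_services"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&cfg); err != nil {
        panic(err)
    }

    // Each entry describes an upstream service the current service depends on.
    for _, s := range cfg.BaseServices {
        fmt.Printf("upstream %s (%s) port %d protocol %s options %v\n",
            s.ServiceAlias, s.ServiceID, s.Port, s.Protocol, s.Options)
    }
}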

Initialization Plug-in Support System

The initialization plug-in is based on the Kubernetes init container. It typically performs data initialization work and, by nature, must exit normally within a limited time; the service container starts only after the initialization plug-in exits successfully. Kato uses this type of plug-in to control the startup order when multiple services are started in batches; see the Kato service component rbd-init-probe.
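For orientation, an initialization plug-in that gates startup on an upstream dependency might look roughly like the following. This is a minimal sketch, not the implementation of rbd-init-probe; the DEPENDENCY_ADDR environment variable and the timeout are illustrative assumptions.

// init_probe.go - minimal sketch of an init-container style probe. Illustrative only.
package main

import (
    "fmt"
    "net"
    "os"
    "time"
)

func main() {
    // Hypothetical environment variable naming the dependency to wait for.
    addr := os.Getenv("DEPENDENCY_ADDR") // e.g. "127.0.0.1:5000"
    deadline := time.Now().Add(2 * time.Minute)

    // Poll until the dependency accepts connections, then exit 0 so the
    // service container is allowed to start; exit non-zero on timeout.
    for time.Now().Before(deadline) {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err == nil {
            conn.Close()
            fmt.Println("dependency ready:", addr)
            os.Exit(0)
        }
        time.Sleep(2 * time.Second)
    }
    fmt.Fprintln(os.Stderr, "timed out waiting for", addr)
    os.Exit(1)
}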