Kato Service Log Management

1. Kato's Built-in Log Management Mechanism

Node Service

The node service monitors the Docker daemon and observes the creation and destruction of containers. It obtains the path of each container's log in the file system, watches the container's standard output and standard error, and forwards the log content to the rbd-eventlog component over UDP.
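To make the flow concrete, here is a minimal Go sketch of the same idea: follow a container's log file and forward each appended line to rbd-eventlog as a UDP datagram. It is illustrative only, not Kato's actual node-service code; the log path comes from the command line, and the UDP address 127.0.0.1:6166 is a placeholder.

package main

import (
	"bufio"
	"io"
	"log"
	"net"
	"os"
	"time"
)

func main() {
	// Container log file to follow, e.g. /var/lib/docker/containers/<id>/<id>-json.log
	logPath := os.Args[1]

	// Hypothetical UDP endpoint of rbd-eventlog; not Kato's real address.
	conn, err := net.Dial("udp", "127.0.0.1:6166")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	f, err := os.Open(logPath)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Follow the file: forward each appended line as one UDP datagram.
	// (Simplified tailing; partial lines at EOF may be split.)
	r := bufio.NewReader(f)
	for {
		line, err := r.ReadBytes('\n')
		if len(line) > 0 {
			if _, werr := conn.Write(line); werr != nil {
				log.Println("send failed:", werr)
			}
		}
		if err == io.EOF {
			time.Sleep(500 * time.Millisecond) // wait for the container to write more
			continue
		}
		if err != nil {
			log.Fatal(err)
		}
	}
}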

rbd-eventlog Component

The rbd-eventlog component receives the log content pushed by the node service and uses the websocket protocol to push it to the application console the user is viewing.
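A minimal sketch of the receiving side follows, under the same caveat: this is not the real rbd-eventlog code. It listens for UDP datagrams from the node service and fans each line out to connected websocket clients, using the golang.org/x/net/websocket package. The UDP port 6166 is a placeholder; 6060 is the websocket port this documentation mentions.

package main

import (
	"log"
	"net"
	"net/http"
	"sync"

	"golang.org/x/net/websocket"
)

func main() {
	var (
		mu   sync.Mutex
		subs = map[*websocket.Conn]bool{}
	)

	// Websocket endpoint the console connects to (port 6060 per this document).
	http.Handle("/logs", websocket.Handler(func(ws *websocket.Conn) {
		mu.Lock()
		subs[ws] = true
		mu.Unlock()
		// Block until the client disconnects.
		var discard string
		for websocket.Message.Receive(ws, &discard) == nil {
		}
		mu.Lock()
		delete(subs, ws)
		mu.Unlock()
	}))
	go func() { log.Fatal(http.ListenAndServe(":6060", nil)) }()

	// UDP listener that receives log lines from the node service (placeholder port).
	pc, err := net.ListenPacket("udp", ":6166")
	if err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 64*1024)
	for {
		n, _, err := pc.ReadFrom(buf)
		if err != nil {
			log.Fatal(err)
		}
		// Fan the line out to every connected console session.
		mu.Lock()
		for ws := range subs {
			websocket.Message.Send(ws, string(buf[:n]))
		}
		mu.Unlock()
	}
}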

Troubleshooting when logs are not pushed normally:

First check the status of the node service, then check whether the rbd-eventlog service is running and has received the logs pushed by the node service. If both are normal, check whether the security group for the websocket port (port 6060) is open. The push address is stored in the region_info table of the console database.
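A quick way to rule out the security-group problem is to test whether the port accepts TCP connections from wherever the console runs. The Go sketch below does just that; the default host name is a placeholder and should be replaced with the push address from the region_info table.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Placeholder address; pass the real push address (host:6060) as an argument.
	addr := "eventlog.example.com:6060"
	if len(os.Args) > 1 {
		addr = os.Args[1]
	}
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// A timeout or refusal here usually points to a security-group/firewall issue.
		fmt.Println("port unreachable:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("port reachable:", addr)
}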

Kato Log Management Interface

Pause Push:

Logs are generated continuously while the service is running. When you are particularly interested in a certain section of the log, you can pause the push so that you can examine that section carefully.

Historical log download:

Logs are available for download by date, split into one file per day.

Last 1000 logs:

Starting from the current log entry, the most recent 1000 entries are fetched for static log analysis.
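One common way to implement such a feature is a fixed-size ring buffer that always holds the most recent lines; the Go sketch below shows the idea (it is an illustration, not Kato's implementation):

package main

import "fmt"

// ring keeps the most recent 1000 log lines.
type ring struct {
	lines [1000]string
	next  int
	full  bool
}

func (r *ring) add(line string) {
	r.lines[r.next] = line
	r.next = (r.next + 1) % len(r.lines)
	if r.next == 0 {
		r.full = true
	}
}

// snapshot returns the buffered lines in arrival order.
func (r *ring) snapshot() []string {
	if !r.full {
		return append([]string(nil), r.lines[:r.next]...)
	}
	return append(append([]string(nil), r.lines[r.next:]...), r.lines[:r.next]...)
}

func main() {
	var r ring
	for i := 0; i < 2500; i++ {
		r.add(fmt.Sprintf("log line %d", i))
	}
	s := r.snapshot()
	fmt.Println(len(s), s[0], s[len(s)-1]) // 1000 "log line 1500" "log line 2499"
}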

2. Integrating with Elasticsearch

Filebeat runs together with the application in the form of a plugin. File storage is mounted at the application's log path, so Filebeat can collect the specified log files and report them to Elasticsearch.

2.1 Build the Filebeat plugin

Filebeat GitHub project address: Filebeat

Understanding the Filebeat configuration file

filebeat.config:
  # Paths to reloadable prospector and module configuration
  prospectors:
    path: ${path.config}/prospectors.d/*.yml
    reload.enabled: true
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true

# Log input configuration
filebeat.inputs:
- type: log
  enabled: true
  # Log file paths that Filebeat will watch, configurable via environment variable
  paths:
    - ${INPUT_PATH:/volume/*.log}

processors:
- add_cloud_metadata:

# Elasticsearch output configuration
output.elasticsearch:
  # Host and port configured through environment variables
  hosts: ['${ES_HOST:127.0.0.1}:${ES_PORT:9200}']
  # Username configured through an environment variable
  username: ${ES_USERNAME:elastic}
  # Password configured through an environment variable
  password: ${ES_PASS:changeme}
  # index: "${INDEX_NAME:filebeat}-%{[beat.version]}-%{+yyyy.MM.dd}"
  timeout: 180
  backoff.max: 120

# Kibana setup
setup.kibana:
  # Host and port configured through environment variables
  host: "${KIBANA_HOST:127.0.0.1}:${KIBANA_PORT:5601}"
  # Username configured through an environment variable
  username: "${ES_USERNAME:elastic}"
  # Password configured through an environment variable
  password: "${ES_PASS:changeme}"

setup.template.enabled: true

# If you change output.elasticsearch, you must also change these two settings
# setup.template.name: "${INDEX_NAME:filebeat}"
# setup.template.pattern: "${INDEX_NAME:filebeat}-*"

# Enable dashboards
setup.dashboards.enabled: true
# setup.dashboards.index: "${INDEX_NAME:filebeat}-*"
setup.dashboards.retry.enabled: true
setup.dashboards.retry.interval: 3
setup.dashboards.retry.maximum: 20

# Available Filebeat modules
filebeat.modules:
- module: nginx
- module: mysql
- module: apache2
- module: mongodb

Building the Filebeat Plugin on Kato in Practice

(1) Create a new plugin

Plugin types:

Ingress network: takes effect at the front of the traffic entering the application; intercepts traffic and can be used for rate limiting and black/white lists.
Egress network: the opposite of the ingress network; works at the egress and makes it easier to manage traffic to downstream services.
Combined ingress and egress network: combines the characteristics of the two above and works on both the ingress and the egress network.
Performance analysis: the default performance analysis plugin; supports the HTTP and MySQL protocols.
Initialization type: initializes the database.
General type: simply starts together with the main container in the same Pod.

(2) Add environment variable configuration

(3) Configure the environment variable options according to the code

(4) The Filebeat plugin was successfully built

(5) View the configured items

2.2 Connecting Nginx Logs to Elasticsearch

Install the Elasticsearch and Kibana applications from the application market with one click.

Here we take an Nginx application as an example.

Create the Nginx application from a Docker image; the image address is nginx:1.11.

After detection succeeds and before the component is created, click Advanced Settings –> Open External Service, and persist the target application's log path.

The Pod now contains two containers, nginx and filebeat, whose resources are isolated from each other. The purpose of this step is to mount the Nginx log directory onto shared storage so that Filebeat can collect the Nginx logs.
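For readers who want to see the shape of such a Pod, here is a Go sketch that builds an equivalent two-container spec with the Kubernetes client types (k8s.io/api). The names, image tag, and mount paths are illustrative, not what Kato generates internally; note how both containers mount the same volume, with the Filebeat side matching the INPUT_PATH default of /volume from the plugin configuration.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Shared volume that carries the Nginx logs to the Filebeat sidecar.
	logs := corev1.Volume{
		Name:         "nginx-logs",
		VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-with-filebeat"}, // illustrative name
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{logs},
			Containers: []corev1.Container{
				{
					Name:  "nginx",
					Image: "nginx:1.11",
					VolumeMounts: []corev1.VolumeMount{
						// Nginx writes its access/error logs here.
						{Name: "nginx-logs", MountPath: "/var/log/nginx"},
					},
				},
				{
					Name:  "filebeat",
					Image: "docker.elastic.co/beats/filebeat:6.4.2", // illustrative tag
					VolumeMounts: []corev1.VolumeMount{
						// Matches the INPUT_PATH default of /volume/*.log.
						{Name: "nginx-logs", MountPath: "/volume"},
					},
				},
			},
		},
	}
	fmt.Println(pod.Name, "containers:", len(pod.Spec.Containers))
}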

Establish the dependency of Nginx on Elasticsearch and Kibana; see Component Dependency.

The final topology is as follows

Add the Filebeat plugin built earlier to Nginx.

View the configuration information, correct any wrong configuration, and update the configuration.

From here you can view the Kibana login information.

Visit the Kibana web page; you can see that the Nginx access logs have been synchronized to Kibana.