Kato v5.3.0 Features

If you have not installed a previous version of Kato, please refer to Quick Installation

If you still have questions about the Kato project, please refer to What is Kato?

New Features

Cloud Native Application Governance Mode Switch

Kato is committed to a non-intrusive, loosely coupled application management concept. Loose coupling is reflected in many aspects:

  • Loose coupling between services

    The core concept of microservices is that each service in the system can be independently developed, deployed, and upgraded, with the services loosely coupled from one another. Cloud native application architecture further emphasizes loose coupling at the architecture level and reduces interdependence between services. Kato’s out-of-the-box service governance approach enables applications deployed to the platform to naturally form a microservice architecture.

  • Loosely coupled application and operating environment

    Application development and packaging are independent and standardized, and delivered to any operating environment through a standardized platform. Kato provides full link support for application model development, release, sharing, and installation, serving application delivery scenarios.

  • Decoupling service governance capabilities from business logic

    This is the focus of the new version. We have introduced application-level governance mode switching, which allows service governance capabilities to be switched dynamically without changing business logic, providing different governance capabilities for the business. The current version supports direct switching between the built-in ServiceMesh governance mode and the Kubernetes native mode. On top of this system, user-defined governance modes will be implemented in future versions, and mature ServiceMesh frameworks such as Istio and Linkerd will be introduced.

For detailed instructions, refer to the document App governance mode switch

Component Custom Business Monitoring and Visualization

Kato aims to provide developers with comprehensive application monitoring capabilities. Past versions mainly covered dimensions such as resource monitoring, performance analysis, and status detection. This release adds the ability to customize monitoring and visualization in the business dimension. Prometheus has become the de facto standard in the field of cloud native monitoring, and Kato supports developers in defining monitoring metrics according to the Prometheus specification. After the monitoring points are configured, Kato automatically discovers and collects the monitoring data and makes it available to users for query and visualization. Users can also install existing community Exporters as plug-ins to easily extend their business monitoring capabilities.
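For orientation, this is what the Prometheus text exposition format looks like on a business metrics endpoint (the metric name, labels, and values below are illustrative, not defined by Kato):

```shell
# Write a minimal sample of the Prometheus exposition format to a file and
# print it; a real component would serve this on an HTTP /metrics path.
cat > /tmp/sample-metrics.txt <<'EOF'
# HELP orders_processed_total Total number of orders processed.
# TYPE orders_processed_total counter
orders_processed_total{status="success"} 1027
orders_processed_total{status="failed"} 3
EOF
cat /tmp/sample-metrics.txt
```

Any endpoint returning data in this shape can be scraped once its address and path are configured as a monitoring point.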

For detailed instructions, refer to the document Business Custom Monitoring

Brand New Console and Cluster Installation

To further lower the barrier to using Kato, version 5.3 decouples the installation and operation of the console from the installation and operation of the cluster. Users can run the Kato console in any Docker environment with a single docker run command. On the cluster installation side, new options such as Alibaba Cloud ACK, connecting to existing Kubernetes clusters, and convenient installation of a cluster from hosts have been added to help users quickly complete resource pooling.

For detailed instructions, refer to the document Quick Installation

Application Configuration Group

Cloud native applications are encouraged to use environment variables for configuration management, so we often need to add the same configuration to multiple components of the same application. For example, when multiple components of an application use the same Oracle database and its connection information is configured through environment variables, managing that configuration involves a lot of repetitive work. With application configuration groups, configuration information can be managed uniformly at the application level and applied in batches, greatly reducing the number of operations for developers.
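To make the repetition concrete, here is the kind of connection information every component sharing one Oracle database would otherwise need to carry individually (variable names and values are hypothetical):

```shell
# The same Oracle connection variables, which a configuration group lets you
# define once at the application level instead of once per component.
export ORACLE_HOST=oracle.example.internal
export ORACLE_PORT=1521
export ORACLE_SERVICE=appdb
echo "jdbc:oracle:thin:@${ORACLE_HOST}:${ORACLE_PORT}/${ORACLE_SERVICE}"
```

With a configuration group, these variables are declared once and applied in batch to all selected components of the application.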

For detailed instructions, refer to the document Application Configuration Group

Other New Features and Changes

  • The application component library supports version management and detailed settings for application models
  • Offline export of application models is improved, and the export file size is significantly reduced (backward incompatible).
  • Offline import of application templates is improved, supporting parallel import of multiple application models.
  • Support for console data backup and migration
  • Improved OAuth 2.0 support; third-party authentication via GitHub, GitLab, Gitee, DingTalk, and Alibaba Cloud is now supported.
  • The gateway supports a session persistence (sticky session) load balancing algorithm
  • Improved sorting of the app list in the team view, based on how recently apps have been operated
  • Added statistics on resource usage at the application level
  • The application release process is improved, supporting flexible editing of the number of released components during release.
  • Application upgrades now support plugins
  • Support for Java Maven configuration management
  • Removed the rbd-repo component to reduce resource consumption
  • The Kato project switched to go modules (gomod) for dependency management
  • The Kato console Python version was upgraded to 3.6

Console Upgrade

To upgrade from v5.2.2-release to v5.3.0-release, you need to upgrade the console and cluster separately. The current chapter describes the console upgrade steps. Video tutorials are provided throughout the process, see the end of the article for details.

Database Backup

Before upgrading the console, you need to back up the database used by the console and then upgrade it. By default, this is done inside the rbd-db-0 database container. If your Kato is connected to an external database, perform the backup and apply the upgrade SQL as appropriate.

  • Backup
# Log in to rbd-db-0 for data backup; the targets are the console and region libraries
kubectl exec -ti rbd-db-0 -n rbd-system -- bash

mysqldump -uroot -p${MYSQL_ROOT_PASSWORD} --databases console > /var/lib/mysql/console.sql
mysqldump -uroot -p${MYSQL_ROOT_PASSWORD} --databases region > /var/lib/mysql/region.sql

# By default, the backup files are located at the following path on the master node. If your cluster does not use the default nfs shared storage, you need to find the persistent path of rbd-db manually
ls /opt/kato/data/db*/mysql/*.sql
  • Upgrade the Console Library
# Log in to rbd-db-0 to perform the upgrade
kubectl exec -ti rbd-db-0 -n rbd-system -- bash

curl https://gitee.com/kato/kato-console/raw/V5.3/sql/5.2.2-5.3.0.sql -o /5.2.2-5.3.0.sql

mysql -uroot -p${MYSQL_ROOT_PASSWORD} -Dconsole -e 'source /5.2.2-5.3.0.sql'
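Before moving on, it can be worth sanity-checking the dump files produced above: by default, a complete mysqldump ends with a "Dump completed" comment line. A self-contained sketch of that check, using a stand-in file rather than the real backup:

```shell
# Create a stand-in for a finished dump file, then check its last line.
printf -- '-- Dump completed on 2021-01-01  0:00:00\n' > /tmp/console-dump-check.sql
tail -n 1 /tmp/console-dump-check.sql | grep -q 'Dump completed' && echo 'dump looks complete'
```

Run the same tail/grep against /var/lib/mysql/console.sql and /var/lib/mysql/region.sql inside rbd-db-0 to confirm the real backups finished cleanly.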

Install a New Version of the Console

  • Install the Kato-open source application from the open source app store

  • Register the console database originally used by the cluster as a third-party component in the deployed Kato-open source application. Fill in the static registration address rbd-db-rw.rbd-system:3306, add the environment variables required for the connection information, and use this component to replace the MySQL 5.7 database in Kato-open source. If your Kato is connected to an external database, directly add the corresponding environment variables to the Kato-Console and Cluster Installation Driver services that depend on the database. After replacing the database, update these two service components. The environment variables to add are shown in the following table:

Connection information environment variable name | Environment variable value
MYSQL_PASS | <query the value of ${MYSQL_ROOT_PASSWORD} in the rbd-db-0 container>
  • Change the cluster API address to the gateway IP. If your cluster has multiple gateways, please fill in the VIP or load balancing address.

Process the Original Console

The original console refers to the rbd-app-ui-xxxxxxxxx-xxxx pods created in the rbd-system namespace by default during installation, together with the rbdcomponent, deployment, and other resources that maintain these pods. Because of how the new and old resources replace each other, it is strongly recommended to process the original console only after the cluster-side upgrade described below.

By editing Kato’s custom rbdcomponent resource, you can apply the following configuration to the original console to complete the upgrade:

  • The image address uses registry.gitlab.com/gridworkz/kato:v5.3.0-release-allinone
  • Add environment variable DB_TYPE=mysql
  • Add new pvc mount

All changes are made in the spec section:

kubectl edit rbdcomponents.kato.io rbd-app-ui -n rbd-system
    - name: DB_TYPE
      value: mysql
    - mountPath: /root/.ssh
      name: app
      subPath: ssh
    image: registry.gitlab.com/gridworkz/kato:v5.3.0-release-allinone
    imagePullPolicy: IfNotPresent
    priorityComponent: false
    replicas: 1
    resources: {}

After the Kato-Open Source console is installed in the cluster, the original console can be kept as an O&M fallback and only turned on when Kato-Open Source cannot be accessed. Normally, you can set replicas to 0 to shut down the pod and save resources.
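Shutting the legacy console down can also be scripted as a merge patch on the rbdcomponent; this is a sketch that assumes replicas sits directly under spec, as in the snippet above (the kubectl line is left in a comment because it requires cluster access):

```shell
# Build the merge patch that sets the legacy console to zero replicas.
PATCH='{"spec":{"replicas":0}}'
echo "$PATCH"
# With cluster access, you would apply it via:
#   kubectl patch rbdcomponents.kato.io rbd-app-ui -n rbd-system --type merge -p "$PATCH"
```

Setting replicas back to 1 with the same mechanism re-opens the fallback console when needed.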

Cluster-side Upgrade

To upgrade from v5.2.2-release to v5.3.0-release, you need to upgrade the console and cluster separately. The current chapter describes the upgrade steps on the cluster side. Video tutorials are provided throughout the process, see the end of the article for details.

Update Kato CRD Resources

# Load CRD configuration file
kubectl apply -f https://raw.githubusercontent.com/gridworkz/cloud-adaptor/main/chart/crds/monitoring.coreos.com_servicemonitors.yaml

Update Kato-operator Component

# Delete the old kato-operator statefulset
kubectl delete sts kato-operator -n rbd-system 

# Create a new kato-operator deployment
kubectl apply -f https://raw.githubusercontent.com/gridworkz/cloud-adaptor/main/hack/deployment.yaml

Update and Build Private Server Components

# Delete the rbdcomponent of rbd-repo
kubectl delete rbdcomponents.kato.io rbd-repo -n rbd-system

# Create a new component rbd-resource-proxy
kubectl apply -f https://raw.githubusercontent.com/gridworkz/cloud-adaptor/main/hack/rbd-resource-proxy.yaml

Upgrade the Cluster Images

# Export the rbdcomponent resources to a file
kubectl get rbdcomponents.kato.io -n rbd-system -o yaml > rbdcomponent.yaml

# Modify the image tags in this file
sed -i "s/v5.2.2-release/v5.3.0-release/g" rbdcomponent.yaml

# Load the configuration
kubectl apply -f rbdcomponent.yaml
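The sed rewrite above can be tried on a throwaway file first to confirm it does what you expect; a self-contained demonstration:

```shell
# Run the same substitution against a sample line instead of the real export.
printf 'image: registry.gitlab.com/gridworkz/builder:v5.2.2-release\n' > /tmp/tag-check.yaml
sed -i "s/v5.2.2-release/v5.3.0-release/g" /tmp/tag-check.yaml
cat /tmp/tag-check.yaml
```

After running the real substitution, a quick `grep v5.2.2-release` on the exported file should return nothing.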

Update Other Functional Images

# Log in to the internal image registry (credentials are read from the katocluster resource)
function Login_registry(){
  inner_registry=$(kubectl get katocluster -n rbd-system -o yaml | grep domain | awk '{print $2}')
  registry_user=$(kubectl get katocluster -n rbd-system -o yaml | grep username | awk '{print $2}')
  registry_pass=$(kubectl get katocluster -n rbd-system -o yaml | grep password | awk '{print $2}')
  docker login --username ${registry_user} --password ${registry_pass} ${inner_registry}
}

# Pull each image, retag it, and push it to the internal registry
function Other_images(){
  from_registry=registry.gitlab.com/gridworkz   # public source registry used elsewhere in this guide
  to_registry=${inner_registry}                 # internal registry resolved during login
  for image in builder runner rbd-init-probe rbd-mesh-data-panel; do
      docker pull ${from_registry}/${image}:v5.3.0-release
      docker tag ${from_registry}/${image}:v5.3.0-release ${to_registry}/${image}
      docker push ${to_registry}/${image}
  done
}

# Start the update

Login_registry && Other_images

Update grctl Command

# Download command and replace the old version command

docker run -it --rm -v /:/rootfs registry.gitlab.com/gridworkz/rbd-grctl:v5.3.0-release copy

mv /usr/local/bin/kato-grctl /usr/local/bin/grctl && grctl install

Create Maven Source Code Build Default Configuration (optional)

This step creates the default settings.xml configuration required for Java Maven source builds. In most scenarios, it is used to declare the enterprise's internal private repository address, user name, password, and other information. If you do not need customization, Kato generates a configuration pointing to Alibaba Cloud's repository mirrors by default, so this step is optional.

grctl build maven-setting add --file <absolute path of user-defined settings.xml file>
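For reference, a minimal custom settings.xml declaring a single private mirror might look like the following (the id and repository URL are placeholders; substitute your own private server details):

```shell
# Write a minimal settings.xml with one private mirror entry, then print it.
cat > /tmp/settings.xml <<'EOF'
<settings>
  <mirrors>
    <mirror>
      <id>internal</id>
      <mirrorOf>central</mirrorOf>
      <url>http://nexus.example.internal/repository/maven-public/</url>
    </mirror>
  </mirrors>
</settings>
EOF
cat /tmp/settings.xml
```

You would then register it with the command above, e.g. `grctl build maven-setting add --file /tmp/settings.xml`.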

Rebuild the Plugin

Enter the plug-in management page and click Build on the management pages of the Export Network Management plug-in and the Service Integrated Network Management plug-in to update them. For service components that already have these plug-ins installed, re-install the plug-ins after the build for the update to take effect.

Back up the Latest Console

Upgrade Verification

  • Observe whether the platform version is v5.3.0-release in the corporate information column of the overview page
  • Observe whether the platform version of the connected cluster is v5.3.0-release on the cluster page
  • Check whether the rbd-api, rbd-chaos, rbd-eventlog, rbd-gateway, rbd-monitor, rbd-mq, rbd-node, rbd-webcli, and rbd-worker services running in the cluster use images tagged v5.3.0-release
  • Try to build a java maven project from source code and verify whether the rbd-resource-proxy service is working properly
  • Follow the New Features chapter, try new features one by one, and verify that they work properly

Offline Upgrade

Currently, Kato v5.3.0-release does not provide an offline upgrade package. However, all resources in the upgrade process, including configuration files and images, can be handled offline. Users can download and localize these resources themselves and import them into the offline Kato v5.2.2-release environment to perform the upgrade.


  • In an offline environment it is not necessary to install the Kato-open source console; just refer to the Process the Original Console section of this document
  • Add the environment variable DISABLE_DEFAULT_APP_MARKET=true to the console component. This variable prevents the console from repeatedly requesting the open source application store in an offline environment
  • When preparing offline images, do not miss the images for kato-operator and rbd-resource-proxy. Their image addresses are defined in the deployment.yaml configuration files that start them