Continuous Build and Delivery

Purpose

Use Kato to implement a continuous delivery system spanning enterprise development, testing, and production, and quickly build a DevOps system across multiple environments.

Significance

Kato delivers value across all aspects of enterprise development, testing, and production. This document helps users quickly build a complete DevOps system on top of Kato.

Prerequisites

  • Follow the installation documentation to set up the Kato environment for development and testing. If there is a separate production environment, Kato must be installed there as well.

  • The source code of the business application.

  • An existing Jenkins CI/CD system. (Optional)

  • An existing image registry, such as Harbor. (Optional)

  • An existing code hosting platform, such as GitLab. (Optional)

Environment Segmentation and Connection

Most IT companies divide their environments into a development environment, a test environment, and a production environment. In Kato, environments can be separated by team (tenant-level isolation) or by cluster (deployment in different clusters). How the environments interact with each other depends on the segmentation method. Note that multi-cluster segmentation requires the Kato Enterprise edition or the public cloud edition.

Recommended way:

  • Separate the development and test environments by creating teams; see Team Management and Multi-tenancy.

  • Use clusters to separate the development/test cluster from the production cluster.

  • Enterprise users can connect the development/test cluster with the production cluster through Kato's multi-cluster management function.

  • Open source users can complete the final delivery offline through the shared library.

Integration

The actual work starts with setting up the development environment. In this process, we provide solutions for quickly integrating various development tools to implement common scenarios such as code hosting, automated CI/CD, and static code analysis. Even if the enterprise has none of these tools, they can be quickly synchronized from the cloud to the local shared library and installed from there.

Integrate GitLab

Refer to Integrate Git Repository Quick Deployment Components to integrate an existing GitLab instance. If you do not have your own code repository, it is recommended to synchronize GitLab from the cloud to the local shared library and install it from there.

Integrate the Jenkins CI/CD System

Refer to Docking Jenkins CI/CD System to connect to the enterprise's existing Jenkins CI/CD system. If you do not have your own Jenkins yet and want to get started quickly, it is recommended to synchronize it from the cloud to the local shared library and install it from there.

Integrate SonarQube

Refer to Docking with SonarQube to set up static code analysis. If you do not have your own SonarQube, it is recommended to synchronize it from the cloud to the local shared library and install it from there.

Continuous Integration/Continuous Build

This section uses a worked example to demonstrate how to perform continuous integration and continuous build.

Demo Application Preparation

The entire DevOps workflow will be demonstrated end to end with a practical use case: the open source GVP project NiceFish, a blog-building system. Thanks to its author, Big Desert Qiuqiu, for the open source spirit.

The entire business system can be subdivided into the following four service components:

  • Front-end project: NiceFish-UI. Deployed from an image. The CI/CD flow: after code is pushed, Jenkins dispatches the remote build server to pull the latest code and build an image, which is pushed to the Harbor image registry in the development environment once the build completes. The push in turn triggers the NiceFish-UI service component in the test environment to automatically pull the new image and rebuild (see the sketch after this list).

  • Back-end project: NiceFish-CMS. Deployed from source code. The CI/CD flow: after code is pushed, Jenkins dispatches SonarQube to run static code quality analysis; once the analysis passes, the NiceFish-CMS service component in the test environment is triggered to automatically pull the latest source code and rebuild.

  • MySQL database: NiceFish-DB. Deployed from a Dockerfile.

  • Redis: One-click installation and deployment directly from the shared library.
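As a concrete illustration of the NiceFish-UI flow described above, here is a minimal sketch of the build-and-push script a Jenkins job might run on the remote build server. The registry address, image name, and branch are placeholders, not values from this guide:

```python
import subprocess

# Placeholders: replace with your own Harbor address and image name.
REGISTRY = "harbor.example.com"
IMAGE = f"{REGISTRY}/nicefish/nicefish-ui:latest"

def run(cmd):
    """Run a command, failing the job on a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Pull the latest code (assumes the workspace already holds a clone).
run(["git", "pull", "origin", "master"])

# Build the image and push it to the Harbor registry in the dev environment;
# 'docker login' against the registry is assumed to have been done beforehand.
run(["docker", "build", "-t", IMAGE, "."])
run(["docker", "push", IMAGE])
```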

Following the instructions above, deploy these services in the test environment.

For the NiceFish-UI, NiceFish-CMS, and NiceFish-DB projects, enable automatic builds triggered via the API:

Configure CI/CD

The NiceFish-CMS project is used as an example to demonstrate the entire continuous integration/continuous build (CI/CD) process.

The previous sections covered how to pull the code from GitLab, deploy it to the test environment on Kato, and enable the automatic-build OpenAPI.

Next, we walk through the main flow of a freestyle task created in Jenkins. This task implements the entire continuous integration/continuous build (CI/CD) process.

Task Details

This example is a Jenkins freestyle task:

  1. A code push event triggers the Jenkins task

  2. Jenkins pulls the latest code and analyzes it with SonarQube

  3. After the analysis passes, the Kato API is called to start the build

Define the Code Repository Address and Trigger the Task Through GitLab Webhooks

This step configures the task to start whenever a Push event is received from the NiceFish-CMS code repository.

Specify code repository address

Define WebHooks

Settings in Jenkins:

Settings in GitLab:
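The GitLab side can also be scripted instead of configured through the UI. Below is a minimal sketch using GitLab's REST API to register the webhook; the GitLab host, project ID, access token, and Jenkins webhook URL are placeholders for your own environment:

```python
import json
import urllib.request

# Placeholders: your GitLab host, project ID, access token, and the
# Jenkins webhook URL configured in the task above.
GITLAB = "https://gitlab.example.com"
PROJECT_ID = 42
TOKEN = "your-gitlab-access-token"
JENKINS_HOOK = "https://jenkins.example.com/project/nicefish-cms"

# Register a webhook on the project that fires on every push event.
req = urllib.request.Request(
    f"{GITLAB}/api/v4/projects/{PROJECT_ID}/hooks",
    data=json.dumps({"url": JENKINS_HOOK, "push_events": True}).encode(),
    headers={"PRIVATE-TOKEN": TOKEN, "Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, json.load(resp))
```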

Static Code Analysis

This step defines static code analysis of the current project as one stage of the continuous integration process. Other continuous integration stages can be chained in Jenkins in the same way.

Add build step: Invoke top-level Maven targets (typically the sonar:sonar goal provided by the SonarQube Scanner for Maven)
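If the pipeline should proceed only when the SonarQube quality gate passes, one option is a small script step after the analysis that queries the SonarQube web API and fails the task otherwise. A minimal sketch, assuming a SonarQube user token; the host, token, and project key are placeholders:

```python
import base64
import json
import sys
import urllib.request

# Placeholders: your SonarQube host, a user token, and the project key.
SONAR = "https://sonar.example.com"
TOKEN = "your-sonarqube-token"
PROJECT_KEY = "nicefish-cms"

# SonarQube accepts a token as the Basic-auth username with an empty password.
auth = base64.b64encode(f"{TOKEN}:".encode()).decode()
req = urllib.request.Request(
    f"{SONAR}/api/qualitygates/project_status?projectKey={PROJECT_KEY}",
    headers={"Authorization": f"Basic {auth}"},
)
with urllib.request.urlopen(req) as resp:
    status = json.load(resp)["projectStatus"]["status"]

print("Quality gate:", status)
sys.exit(0 if status == "OK" else 1)  # a non-zero exit fails the Jenkins task
```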

This concludes the continuous integration phase; continuous build follows.

Continuous Build

Once all the preceding steps have passed, this step completes the continuous build setup. It triggers the deployed NiceFish-CMS component to build automatically: the service component running on Kato pulls the latest code from its build source (source code in this example), rebuilds, and brings the latest changes online.

Add build step: Execute shell
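The shell step itself reduces to a single HTTP call to the automatic-build API enabled earlier. A minimal sketch follows; the trigger URL is a placeholder to be copied from the component's build-source settings in the Kato console, since the exact endpoint format depends on the Kato version:

```python
import urllib.request

# Placeholder: copy the automatic-build trigger URL (it embeds a secret key)
# from the NiceFish-CMS component's build-source settings in the Kato console.
TRIGGER_URL = "https://kato.example.com/console/custom/deploy/<secret_key>"

# An empty POST asks Kato to pull the latest source code and rebuild.
req = urllib.request.Request(TRIGGER_URL, data=b"", method="POST")
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```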

Continuous Delivery

To implement a continuous delivery system based on Kato, the shared library is the recommended mechanism.

After the continuous integration/continuous build process above, the resulting application can be continuously published to the shared library as successive versions. Repeatedly execute [Upgrade of application template](/docs/get-started/upgrade-from-market/) to complete continuous delivery of the application.

If the final production environment is offline, please refer to Offline Delivery via Shared Library.

Production Environment Operation and Maintenance

For applications deployed on Kato, we provide a full range of automated operation and maintenance capabilities:

  • Application topology: graphical display of service component relationships, with drag-and-drop assembly of dependencies.

  • Application-level backup/recovery: back up the entire application's data with one click, and freely migrate and restore it across clusters and teams.

  • Gateway strategy: powerful custom gateway policies and certificate management; implement rolling upgrades, A/B testing, gray release, and blue-green release.

  • Life cycle management: manage the full life cycle of applications and service components, including starting, stopping, restarting, updating, and building.

  • Version management: Kato's built-in version management system enables one-click rollout and rollback by version number.

  • Operation record audit: identify the executor and execution time of each key operation on every service component.

  • Operation log: every operation on a service component, such as building or upgrading, produces a corresponding operation log.

  • Monitoring: real-time performance analysis for web services and MySQL, with visualized average response time, throughput, number of online users, and other metrics, plus real-time monitoring of service status.

  • Log: real-time streaming of service component logs, with log splitting and download, and support for ELK log collection via plug-ins.

  • Scaling: real-time scaling of memory and instance count, with configurable automatic scaling and automatic load balancing, so that online business responds flexibly to traffic.

  • Environment configuration: custom environment variables, configuration file mounting, and sharing of configuration among multiple service components.

  • Dependency: native Service Mesh microservice support, quick assembly of components based on dependencies, and flexible configuration of connection information through environment variables.

  • Storage: multiple storage types (block devices, shared file storage, memory storage, etc.) and storage sharing among components.

  • Plug-in system: plug-in extensions for service components; a variety of native plug-ins quickly provide real-time performance analysis, microservice network governance, and MySQL data backup and recovery.

  • Build source: graphical, flexible configuration of build environments for various development languages; automatic build configuration provides the entry point for CI/CD.

  • Deployment type configuration: multiple deployment types to handle different kinds of service components.

  • Health detection: effectively monitor the current health of service components, with automated handling strategies for unhealthy components.

  • Authority management: different users have different operation permissions for service components.