Deploy MySQL Cluster App

MySQL Master-Slave Synchronization Principle

  1. Execute the start slave command on the slave server to turn on the replication switch and start master-slave replication.

  2. The IO thread of the slave server then connects to the master server using the replication user authorized on the master, and requests binlog content starting from a specified position in a specified binlog file (the file name and position were specified by the change master command when master-slave replication was configured).

  3. When the master server receives the request from the slave's IO thread, the thread responsible for replication on the master reads, in batches, the binlog entries after the specified position in the specified binlog file and returns them to the slave's IO thread. Besides the binlog content itself, the returned information also includes the name of the binlog file and the next position in it from which the slave should continue reading.

  4. When the IO thread of the slave server receives the log content, log file name, and position sent from the master, it writes the binlog content to the end of its relay log file and records the new binlog file name and position in the master-info file, so that the next request can tell the master exactly which file and position to start reading new binlog content from.

  5. The SQL thread of the slave server detects, in real time, the new content the IO thread has appended to the local relay log, promptly parses the relay log content into SQL statements, applies those statements on the slave in order, and records the file name and position of the relay log it is currently applying.
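The binlog file name and position mentioned in step 2 are the ones supplied when replication is configured. As a minimal sketch (the host, user, password, and coordinates below are placeholder values, not ones from this document), a helper that builds that change master statement so it can be piped to a mysql client:

```shell
#!/bin/sh
# Build the CHANGE MASTER TO statement referenced in step 2.
# Every argument here is an illustrative placeholder.
build_change_master() {
    host="$1"; user="$2"; pass="$3"; file="$4"; pos="$5"
    printf "CHANGE MASTER TO MASTER_HOST='%s', MASTER_USER='%s', MASTER_PASSWORD='%s', MASTER_LOG_FILE='%s', MASTER_LOG_POS=%s; START SLAVE;\n" \
        "$host" "$user" "$pass" "$file" "$pos"
}

# e.g. build_change_master mysql-master repl secret mysql-bin.000001 154 | mysql -uroot -p...
build_change_master mysql-master repl secret mysql-bin.000001 154
```

Once START SLAVE runs, the IO and SQL threads described in steps 2-5 take over.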

MySQL Master-Slave Synchronization Notes

  • The uuid of the master node and the slave nodes must be different
  • The server_id of the master node and the slave nodes must be different
  • Slave nodes must automatically register themselves with the master node

Build the MySQL Container Image

Giving Containers from the Same Image Different uuids

When creating a MySQL master-slave cluster from the same MySQL image, we found that every MySQL service had the same uuid. The uuid is written to the /var/lib/mysql/auto.cnf file during data initialization, so every container created from the image inherits the same value.

To give each container a different uuid, the startup script must randomly generate a uuid and write it to /var/lib/mysql/auto.cnf before mysqld starts. This ensures that containers generated from the same image all have different uuids.

To achieve this, we modified the startup script /usr/local/bin/ that ships with the MySQL image:

if [ ! -d "$DATADIR/mysql" ]; then
    if [ -z "$MYSQL_ROOT_PASSWORD" ] && [ -z "$MYSQL_ALLOW_EMPTY_PASSWORD" ] && [ -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
        echo >&2 'error: database is uninitialized and password option is not specified'
        exit 1
    fi

    mkdir -p "$DATADIR"

    echo 'Initializing database'
    "$@" --initialize-insecure
    echo 'Database initialized'

    # Located around line 90 of the mysql startup script, add the following:
    # generate a random uuid so that each container gets a unique server-uuid
    uuid=$(cat /proc/sys/kernel/random/uuid)
    echo "[auto]" > /var/lib/mysql/auto.cnf
    echo "server-uuid=$uuid" >> /var/lib/mysql/auto.cnf
fi

server_id Handling for Different Instances of the Same Service

When creating a MySQL master-slave cluster from the same MySQL image, how do we ensure that each MySQL service has a different server_id?

When k8s creates containers for a stateful application, it assigns each one a hostname (such as gr78648d-0) whose numeric suffix increases sequentially as more containers are created. This feature can be used to generate distinct server_id values: take the numeric part of the hostname and add a fixed offset, using different offsets for the master and the slave.
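The idea can be sketched as follows (the hostname and role values are placeholders; the +1/+2 offsets match the master/slave split used by the startup script):

```shell
#!/bin/sh
# Derive a unique server_id from a stateful hostname such as "gr78648d-0".
HOSTNAME="gr78648d-0"   # placeholder; set automatically for each pod in a real cluster
MYSQL_ROLE="master"     # placeholder; "master" or "slave"

ordinal="${HOSTNAME##*-}"          # numeric suffix: 0, 1, 2, ...
if [ "$MYSQL_ROLE" = "master" ]; then
    server_id=$((ordinal + 1))     # master IDs start at 1
else
    server_id=$((ordinal + 2))     # slave IDs start at 2
fi
echo "server_id=${server_id}"
```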

Automatic Initialization of the Slave Library

When creating the slave database, we want every instance of the slave application to automatically register with the master database after scaling out.

This requires automatically executing a specified SQL script when an instance of the slave application initializes, which relies on a feature of the official MySQL image: during database initialization, it automatically reads and executes the *.sql files in /docker-entrypoint-initdb.d/.
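For illustration, such a template could look like the following. This is a guess at the shape of init-slave.sql, not the project's actual file; the placeholder tokens (MYSQL_HOST, MYSQL_USER, MYSQL_PASSWORD) are later replaced with the instance's environment variables, and the path is made relative here so the sketch runs anywhere:

```shell
#!/bin/sh
# Hypothetical slave-registration template; in the image it would live at
# /docker-entrypoint-initdb.d/init-slave.sql and run once at initialization.
mkdir -p ./docker-entrypoint-initdb.d
cat > ./docker-entrypoint-initdb.d/init-slave.sql <<'EOF'
CHANGE MASTER TO
    MASTER_HOST='MYSQL_HOST',
    MASTER_USER='MYSQL_USER',
    MASTER_PASSWORD='MYSQL_PASSWORD';
START SLAVE;
EOF
```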

To achieve both goals, we specify the following in the image's custom startup script /run/:

# define server_id and other cluster configuration
# Use an environment variable to decide whether this image starts a master or a slave
# With the help of the stateful application's hostname, extract the numeric suffix
server_id=${HOSTNAME##*-}
if [ "${MYSQL_ROLE}" == "master" ]; then
   # Master instance IDs start from 1
   MYSQLC_MYSQLD_SERVER_ID=`expr $server_id + 1`
   # Master-specific configuration
   export MYSQLC_MYSQLD_binlog_ignore_db=mysql
   export MYSQLC_MYSQLD_log_bin=mysql-bin
else
   # Slave instance IDs start from 2
   MYSQLC_MYSQLD_SERVER_ID=`expr $server_id + 2`
   # Slave-specific configuration
   export MYSQLC_MYSQLD_replicate_ignore_db=mysql
   export MYSQLC_MYSQLD_log_bin=mysql-bin
   # Copy the slave initialization script template to the directory scanned at init
   cp -a /tmp/init-slave.sql /docker-entrypoint-initdb.d/
   # Substitute the instance's environment variables into the template
   sed -i -e "s/MYSQL_USER/${MYSQL_USER}/g" /docker-entrypoint-initdb.d/init-slave.sql
fi

For generating the specified configuration from environment variables in the script, refer to the env2file project.
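env2file's own documentation is authoritative for its exact behavior; the following is only a rough illustration of the idea (the MYSQLC_MYSQLD_ prefix convention appears in the startup script, but this renderer is an assumption, not env2file itself):

```shell
#!/bin/sh
# Illustrative only: render MYSQLC_MYSQLD_* environment variables as
# entries of a [mysqld] configuration section, the way env2file-style
# tools turn environment variables into config files.
render_mysqld_section() {
    echo "[mysqld]"
    env | grep '^MYSQLC_MYSQLD_' | while IFS='=' read -r name value; do
        echo "${name#MYSQLC_MYSQLD_} = ${value}"
    done
}

export MYSQLC_MYSQLD_SERVER_ID=2
export MYSQLC_MYSQLD_log_bin=mysql-bin
render_mysqld_section
```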

Dockerfile Analysis of the Image

FROM percona:5.7.23-stretch
LABEL creater="gridworkz"
ENV TZ=America/Toronto

RUN sed -i 's/' /etc/apt/sources.list; \
    rm -rf /etc/apt/sources.list.d/percona.list && apt-get update; \
    apt-get install -y --no-install-recommends wget net-tools vim; \
    rm -rf /var/lib/apt/lists/*; \
    wget -O /usr/local/bin/env2file -q; \
    chmod +x /usr/local/bin/env2file;
# Custom startup script
ADD /run/
# mysql official startup script
ADD ./run/ /usr/local/bin/
ADD ./run/mysqld.cnf /etc/mysql/percona-server.conf.d/mysqld.cnf
# Copy the slave initialization script template to the mirror, ready to call
ADD ./sql /tmp/
VOLUME ["/var/lib/mysql", "/var/log/mysql"]
ENTRYPOINT [ "/run/" ]
CMD [ "mysqld" ]

Create and Configure mysql-master Service

Create the mysql-master component

Code address:

Code branch: cluster

To create the component, refer to the documentation: Create a component from Dockerfile source code.

  • Open an internal service on port 3306 and change its alias to MYSQL

  • Configure key environment variables

Environment Variable    Value                              Description
MYSQL_ROOT_PASSWORD     changeme (default), or your own    root password
MYSQL_USER              your own choice, such as admin     MySQL working user
MYSQL_PASSWORD          your own choice                    working user password
MYSQL_DATABASE          your own choice                    database created at initialization
MYSQL_ROLE              master                             assigns the role

Except for MYSQL_ROLE, these environment variables are carried into the connection information after the service is created.

  • In the deployment properties, modify the application type to Stateful Application

Create Slave Service

Create mysql-slave Component

The creation method is the same as the mysql-master component.

Different from the mysql-master component, the mysql-slave component is configured as follows:

  • Open an internal service on port 3307 and change its alias to MYSQL_SLAVE

  • Configure environment variables

Environment Variable    Value    Description
MYSQLC_MYSQLD_PORT      3307     mysql-slave listens on port 3307

  • The mysql-slave service depends on the mysql-master service

The mysql-slave component can be scaled out at will; the logic written into the startup script automatically registers each new instance with mysql-master.

At this point, a basic MySQL master-slave cluster has been built. To publish it to the application market for download and use at any time, refer to Application Sharing and Publishing.

Read and Write Separation


One advantage of a MySQL master-slave cluster is read-write separation: the master library can be configured to handle writes, the slave libraries handle queries, and the slaves automatically synchronize data from the master.

Set up properly, such a structure can greatly improve database performance while reducing the pressure on the master library.


If the user's business program already supports read-write separation, it only needs to be set so that:

  • The database write address is the mysql-master service address. If you use Kato service dependencies, you can specify the connection address as ${MYSQL_HOST}:${MYSQL_PORT}.
  • The database query address is the mysql-slave service address. If you use Kato service dependencies, you can specify the connection address as ${MYSQL_SLAVE_HOST}:${MYSQL_SLAVE_PORT}.
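Sketch of a business program assembling the two addresses (the literal values are placeholders; in a real deployment Kato injects these variables through service dependencies):

```shell
#!/bin/sh
# Placeholder values; injected by Kato service dependencies in a real deployment.
MYSQL_HOST="127.0.0.1";        MYSQL_PORT="3306"
MYSQL_SLAVE_HOST="127.0.0.1";  MYSQL_SLAVE_PORT="3307"

WRITE_DSN="${MYSQL_HOST}:${MYSQL_PORT}"              # all writes go to the master
READ_DSN="${MYSQL_SLAVE_HOST}:${MYSQL_SLAVE_PORT}"   # all queries go to the slave
echo "write=${WRITE_DSN} read=${READ_DSN}"
```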

If the user’s business program does not support read-write separation, it must be implemented by middleware that supports read-write separation.

Atlas Middleware

Atlas is an open-source database middleware from Qihoo 360, improved on the basis of the mysql-proxy officially provided by MySQL. By re-implementing mysql-proxy's Lua scripting in C, Atlas delivers better performance than mysql-proxy. With the middleware acting as a proxy, the user only needs to configure the database connection address to point at the Atlas service, and Atlas manages both database writes and queries.
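The core of that proxying is telling Atlas which backend takes writes and which take reads. A hedged sketch of the relevant fragment of an Atlas configuration (the addresses are placeholders, and the full file has many more options; consult the Atlas project for the authoritative format):

```shell
#!/bin/sh
# Write the backend-routing fragment of an Atlas configuration.
# Addresses are placeholders for the mysql-master / mysql-slave services.
cat > ./atlas-backends.cnf <<'EOF'
[mysql-proxy]
# write requests are proxied to the master
proxy-backend-addresses = mysql-master:3306
# read requests are balanced across the slaves
proxy-read-only-backend-addresses = mysql-slave:3307
EOF
```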

Learn more about Atlas

We provide a dockerized Atlas component; users can build this project directly from its Dockerfile source code:

Atlas-docker project address

Kato has published a MySQL master-slave cluster application that already integrates this middleware.

High-level Implementation

Current Architecture Shortcomings

The MySQL master-slave cluster built so far has a single master node connected to multiple slave nodes. This architecture is fine in a small cluster, but if the cluster grows large and there are too many slave nodes, synchronizing data from the master to all slaves becomes a performance bottleneck.

Architecture Optimization

Once you have fully mastered building a MySQL master-slave cluster on Kato, you can try creating a slave node dedicated to data synchronization.

This node connects upward to the master node to synchronize data, and downward to the slave cluster to distribute the data it has synchronized from the master.

The advantage is that the master only needs to connect to a single data synchronization node and can focus on data writing, while the other slave nodes synchronize their data from the synchronization node instead.
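One concrete requirement for such an intermediate node: it must write the events it replicates from the master into its own binlog so that downstream slaves can read them, which in MySQL means enabling log_slave_updates alongside log_bin. Expressed in the environment-variable convention used by this image's startup script (a sketch, not tested configuration):

```shell
#!/bin/sh
# Configuration a relay (intermediate master) node would need,
# in the MYSQLC_MYSQLD_* convention used by the startup script.
export MYSQLC_MYSQLD_log_bin=mysql-bin
# re-log events replicated from the master so downstream slaves can read them
export MYSQLC_MYSQLD_log_slave_updates=1
echo "log_slave_updates=${MYSQLC_MYSQLD_log_slave_updates}"
```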

If some users have achieved this optimization, they are welcome to share it in the application market for more people to use.