ceph-rbd Block Storage

ceph-rbd Block Storage Introduction

Ceph provides block storage through the RBD protocol (Ceph Block Device). RBD offers reliable, distributed, high-performance block storage to clients: an RBD image is striped across multiple Ceph objects, and those objects are in turn distributed across the entire Ceph storage cluster, which is what guarantees data reliability and performance. RBD has been supported by the Linux kernel for years; the driver is well integrated with the kernel, and almost all Linux distributions support it. Beyond reliability and performance, RBD also offers enterprise-level features such as full and incremental snapshots, thin provisioning, and copy-on-write cloning. RBD also supports in-memory caching, which can greatly improve its performance.
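
As a quick illustration of the snapshot and clone features mentioned above, the rbd command line can create an image, snapshot it, and clone the snapshot copy-on-write. A minimal sketch, assuming the default rbd pool and hypothetical image names:

rbd create rbd/demo-image --size 1024           # create a 1 GB image in the rbd pool
rbd snap create rbd/demo-image@snap1            # take a snapshot
rbd snap protect rbd/demo-image@snap1           # protect the snapshot so it can be cloned
rbd clone rbd/demo-image@snap1 rbd/demo-clone   # copy-on-write clone of the snapshot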

Ceph block devices are fully supported by cloud platforms such as OpenStack and CloudStack, where they have a proven track record and a rich feature set.

You need to prepare the Ceph environment yourself; please refer to the Official Installation Document.

Install ceph Cluster

Due to test environment constraints, we install a single-node Ceph cluster (the jewel release) on Ubuntu. For multi-node clusters or other Ceph versions, please refer to the official tutorial.

Update the ceph package source

Because the default source is slow during installation, the Alibaba Cloud mirror is used here:

echo "deb http://mirrors.aliyun.com/ceph/debian-jewel xenial main" >> /etc/apt/sources.list
apt-get clean
apt-get update
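
If apt complains about an unsigned repository, you may also need to import the upstream Ceph release key before updating (this assumes apt-key is available on your release, as it is on xenial):

wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -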

Install the deployment program

The official documentation recommends using the ceph-deploy tool to install the Ceph cluster.

apt-get install ceph-deploy

Create a working directory for the ceph configuration files

All subsequent operations are performed in this directory; it will hold the configuration files generated when the cluster is created as well as the keyring of the admin account.

mkdir ceph-cluster && cd ceph-cluster

Install the monitor node

Install the monitor node on node1.

node1 is the hostname and must resolve to the IP address of the node1 host.
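
If DNS does not already resolve the hostname, a hosts entry is the simplest fix. A sketch, assuming the node's address is 172.24.203.202 (the address that appears in the ceph -s output later in this guide):

echo "172.24.203.202 node1" >> /etc/hosts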

ceph-deploy new node1

Because this is a single-node installation, the configuration file needs to be modified so that the default pool size is 1:

echo "osd pool default size = 1
osd max object name len = 256
osd max object namespace len = 64
mon_pg_warn_max_per_osd = 2000
mon clock drift allowed = 30
mon clock drift warn backoff = 30
rbd cache writethrough until flush = false" >> ceph.conf

Install ceph service

ceph-deploy install node1

Deploy the monitor node

ceph-deploy mon create-initial
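
When this step completes, ceph-deploy gathers the cluster keyrings into the working directory. You should see files similar to the following; exact names can vary by version:

$ ls
ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph.conf  ceph.mon.keyring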

Prepare the ceph storage path and the osd service

Use /data/ceph as Ceph's data directory.

mkdir -p /data/ceph/osd
ceph-deploy osd prepare node1:/data/ceph/osd

Activate osd service

chown -R ceph:ceph /data/ceph/
ceph-deploy osd activate node1:/data/ceph/osd

If you run into a permission problem, grant the ceph user ownership of the directory again and retry the activation.

Confirm the status of the ceph cluster

ceph health

A result of HEALTH_OK means the cluster was deployed correctly. If an exception occurs, run ceph -s to see the details of the problem.

If you run into trouble somewhere and want to start over, you can clear the configuration with the following commands:

ceph-deploy purgedata node1
ceph-deploy forgetkeys

Use the following command to also remove the Ceph packages:

ceph-deploy purge node1

After running purge, you must reinstall Ceph.

Install Driver

Use ceph-csi, the official Ceph CSI project, to install the Ceph driver.

Download project

git clone https://github.com/ceph/ceph-csi.git && cd ceph-csi
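
The manifests referenced in the following steps ship inside the repository. At the time of writing they live under deploy/rbd/kubernetes, but the layout can change between releases, so adjust the path if needed:

cd deploy/rbd/kubernetes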

Create the RBAC accounts

kubectl create -f csi-provisioner-rbac.yaml
kubectl create -f csi-nodeplugin-rbac.yaml

Prepare configuration file

The configuration file tells the CSI driver which Ceph cluster to use and is stored in a ConfigMap. The test cluster used in this example looks like this:

$ ceph -s
cluster 9660aec4-16a2-4929-b179-c28cef2b5ab0
health HEALTH_OK
monmap e1: 1 mons at {node1=172.24.203.202:6789/0}
      election epoch 3, quorum 0 node1
osdmap e7: 1 osds: 1 up, 1 in
      flags sortbitwise,require_jewel_osds
pgmap v882: 64 pgs, 1 pools, 7488 kB data, 14 objects
      17357 MB used, 20972 MB / 40188 MB avail
            64 active+clean

clusterID is the ID of the Ceph cluster; you can see it in the cluster information printed by ceph -s. monitors is the list of monitor addresses of the Ceph cluster, corresponding to the node information in the monmap.

apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "9cab3178-b0e1-4d4c-80d6-d1b6cd399cda",
        "monitors": [
          "172.24.203.202:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
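
If you prefer to query these values directly rather than reading them from ceph -s, commands like the following should work; the exact output format differs between Ceph releases:

ceph fsid       # prints the cluster ID
ceph mon stat   # prints the monitor addresses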

Create ConfigMap

kubectl create -f csi-config-map.yaml

Create the driver services

kubectl create -f csi-rbdplugin-provisioner.yaml
kubectl create -f csi-rbdplugin.yaml

Confirm service start

If the installation followed the steps above, you should eventually see pods like these running normally:

$ kubectl get po
NAME                                         READY   STATUS    RESTARTS   AGE
csi-rbdplugin-provisioner-688c49bd49-7cwcw   6/6     Running   0          59m
csi-rbdplugin-provisioner-688c49bd49-ffbmc   6/6     Running   0          59m
csi-rbdplugin-provisioner-688c49bd49-s4nh2   6/6     Running   0          59m
csi-rbdplugin-qsnlb                          3/3     Running   0          58m

Use the official demo to verify that the driver works. Test cases are provided in the example directory of the official project; they mount the Ceph storage into an nginx image. Before applying them, fill in secret.yaml and storageclass.yaml with your cluster's values (see the sketch after the commands below).

kubectl create -f secret.yaml
kubectl create -f storageclass.yaml
kubectl create -f pvc.yaml
kubectl create -f pod.yaml
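
As referenced above, here is a sketch of what secret.yaml and storageclass.yaml might look like for the test cluster in this guide. Object and field names (csi-rbd-secret, csi-rbd-sc, the rbd pool) follow the upstream ceph-csi examples at the time of writing and are assumptions that may differ between releases; the clusterID must match the one in the ConfigMap, and the userKey comes from ceph auth get-key client.admin:

apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: admin
  userKey: <output of "ceph auth get-key client.admin">
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 9660aec4-16a2-4929-b179-c28cef2b5ab0
  pool: rbd
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete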

Check whether the nginx container starts normally:

$ kubectl get po
NAME                                         READY   STATUS    RESTARTS   AGE
csi-rbd-demo-pod                             1/1     Running   0          46m
csi-rbdplugin-provisioner-688c49bd49-7cwcw   6/6     Running   0          59m
csi-rbdplugin-provisioner-688c49bd49-ffbmc   6/6     Running   0          59m
csi-rbdplugin-provisioner-688c49bd49-s4nh2   6/6     Running   0          59m
csi-rbdplugin-qsnlb                          3/3     Running   0          58m

If it does not start successfully, you can check the logs of the provisioner component to locate the problem:

kubectl logs -f csi-rbdplugin-provisioner-688c49bd49-7cwcw -c csi-provisioner

Remember to replace the pod name with your own. The -c flag selects a container inside the provisioner pod; the csi-provisioner container running there is the one that provisions the storage.
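
In addition to the provisioner logs, the events on the PersistentVolumeClaim usually point to the failure. The claim name below comes from the official pvc.yaml example and is an assumption; adjust it if your PVC is named differently:

kubectl describe pvc rbd-pvc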

Use of Storage

The StorageClass object was created when running the official demo above. The Kato platform watches for StorageClass creation and records it in its database, so users can select the storage type corresponding to this StorageClass in the Kato console and use it on stateful components.

The storage takes effect only after the component is restarted or updated.

Test Results

Whether the storage works can be judged by whether the component starts normally: a component that starts normally has successfully mounted the storage. You can also check on the Ceph cluster side whether an image of the corresponding size exists and whether it is in use.
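
To verify from the Ceph side, you can list the RBD images in the pool used by the storage class and inspect the one that backs the volume. The pool name here is an assumption; use whatever your storage class specifies, and replace the image name with the one reported by kubectl get pv:

rbd ls -l rbd                # list images in the pool with their sizes
rbd info rbd/<image-name>    # show details of the image backing the volume

On recent Ceph releases, rbd status rbd/<image-name> also shows whether a client currently has the image open.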