Glusterfs Distributed Storage

Install Glusterfs Cluster via Kubernetes

This article explains how to install a Glusterfs cluster on Kubernetes and provide highly available storage for Kato.

Prerequisites

  • In the installed Kubernetes cluster, three nodes should each have an SSD disk of at least 500 GB attached

  • Format the prepared disk and mount it to the specified directory

# View available disks
fdisk -l
# Partition the disk with fdisk if needed, then format the partition
mkfs.xfs /dev/vdb1
mkdir -p /data
echo "/dev/vdb1  /data  xfs  defaults 1 2" >> /etc/fstab
# Mount
mount -a
# Confirm that /data is mounted
df -h | grep data
  • Install the matching version of the Glusterfs client tools on every Kubernetes node and load the required kernel module (a verification sketch follows the platform-specific commands below)

    • Ubuntu 16.04/18.04
apt install software-properties-common
add-apt-repository ppa:gluster/glusterfs-7
apt update
apt install glusterfs-client -y
modprobe dm_thin_pool
    • CentOS 7
yum -y install centos-release-gluster
yum -y install glusterfs-client
modprobe dm_thin_pool
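On either distribution, it is worth confirming that the client is installed and the kernel module is loaded, and making the module load persist across reboots. A minimal check (the modules-load.d file name is only an example; any file under that directory works on systemd-based systems):

# Confirm the Glusterfs client is installed
glusterfs --version
# Confirm the dm_thin_pool module is loaded
lsmod | grep dm_thin_pool
# Load the module automatically after a reboot (systemd-based systems)
echo dm_thin_pool > /etc/modules-load.d/glusterfs.conf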

Deploy Glusterfs Cluster

The following operations only need to be performed once, on any Kubernetes master node

Get the corresponding project

git clone https://gitee.com/liu_shuai2573/gfs-k8s.git && cd gfs-k8s

Set node labels to specify which nodes will run the Glusterfs components

# Set the label; replace Glusterfs1/2/3 with the names of the corresponding Kubernetes nodes
kubectl label node Glusterfs1 Glusterfs2 Glusterfs3 storagenode=glusterfs
# After this taint is applied, the corresponding nodes will only run the Glusterfs service. Skip this step if you need to run other workloads on these nodes
kubectl taint node Glusterfs1 Glusterfs2 Glusterfs3 glusterfs=true:NoSchedule
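Before deploying, you can confirm that the label and taint were applied as expected:

# List the nodes labeled for Glusterfs
kubectl get nodes -l storagenode=glusterfs
# Show the taints on one of the nodes (replace Glusterfs1 with the node name)
kubectl describe node Glusterfs1 | grep Taints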

Create Glusterfs service

kubectl create -f gluster-daemonset.yaml

Check whether the Glusterfs service is running normally on the specified nodes

kubectl get pods -o wide --selector=glusterfs-node=daemonset
NAME              READY   STATUS    RESTARTS   AGE    IP              NODE            NOMINATED NODE   READINESS GATES
glusterfs-2k5rm   1/1     Running   0          52m    192.168.2.200   192.168.2.200   <none>           <none>
glusterfs-mc6pg   1/1     Running   0          134m   192.168.2.22    192.168.2.22    <none>           <none>
glusterfs-tgsn7   1/1     Running   0          134m   192.168.2.224   192.168.2.224   <none>           <none>

Join the Glusterfs services into a single cluster

# Add the other two Glusterfs services to the cluster from one of the Glusterfs pods
kubectl exec -ti glusterfs-2k5rm -- gluster peer probe Glusterfs2_IP
kubectl exec -ti glusterfs-2k5rm -- gluster peer probe Glusterfs3_IP
# Check whether the peers were added successfully; on success the status of the other two Glusterfs services is displayed
kubectl exec -ti glusterfs-2k5rm -- gluster peer status
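Optionally, membership can also be checked from one of the other Glusterfs pods; gluster pool list prints every peer including the local node (the pod name below is taken from the sample output above):

kubectl exec -ti glusterfs-mc6pg -- gluster pool list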

Configure the Glusterfs Cluster as an available resource for Kubernetes

Create service account and RBAC authorization

kubectl create -f rbac.yaml

Create Glusterfs-provisioner

kubectl create -f deployment.yaml
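Before continuing, check that the provisioner pod reaches the Running state. The exact name depends on the Deployment defined in deployment.yaml; the grep pattern below assumes it contains "provisioner":

kubectl get pods | grep provisioner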

Create storageclass resources

# Modify the value of parameters.brickrootPaths in storageclass.yaml, replacing it with the IPs of the Glusterfs nodes
kubectl create -f storageclass.yaml
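For reference, a StorageClass for the Glusterfs simple provisioner typically looks like the sketch below. The field names follow the upstream Kubernetes external-storage simple provisioner and the resource name is illustrative, so verify both against the storageclass.yaml shipped in this repository; the IPs and the /data brick path come from the examples above:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-simple
provisioner: gluster.org/glusterfs-simple
parameters:
  forceCreate: "true"
  volumeType: "replica 3"
  # One host:path entry per Glusterfs node; bricks are created under this directory
  brickrootPaths: "192.168.2.200:/data,192.168.2.22:/data,192.168.2.224:/data"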

Create pvc verification

kubectl create -f pvc.yaml
kubectl get pvc | grep gluster-simple-claim #STATUS is Bound when the creation is successful
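Once the claim is Bound, the provisioner has created a Gluster volume behind it, which you can inspect from one of the Glusterfs pods:

# Events and the bound volume for the claim
kubectl describe pvc gluster-simple-claim
# The Gluster volume created by the provisioner
kubectl exec -ti glusterfs-2k5rm -- gluster volume list
kubectl exec -ti glusterfs-2k5rm -- gluster volume info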

Create pod verification

kubectl create -f pod.yaml
kubectl get po | grep gluster-simple-pod #STATUS is Running when it runs normally

Remove verification pod

kubectl delete -f pod.yaml

This completes the installation of Glusterfs; you can now proceed to the Kato High Availability Installation.