Deploying Rook (Ceph) Distributed Block Storage on Kubernetes

Rook is a Kubernetes-based storage orchestrator: through its Operator it automates the deployment and ongoing maintenance of storage backends such as Ceph and NFS.

This article walks through deploying it and running a quick hands-on test.

1 Preparing the Machines

Prepare four machines:

node1 through node4 run the Kubernetes cluster; node1 is the master and the rest are workers.

Attach a 100 GB disk to each worker (node2 through node4). Only attach it; do not partition or format it. I used Alibaba Cloud disks here.
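
You can quickly confirm on each worker that the new disk is attached and completely blank; the device name /dev/vdb below is only an example and will differ per environment:

lsblk -f /dev/vdb        # the FSTYPE column should be empty: no filesystem, no partitions
# If the disk was used before, clear any leftover signatures so Rook can claim it:
# wipefs -a /dev/vdb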

Cluster setup itself is not covered here; if you are not familiar with it yet, see 《国内部署Kubernetes集群1.22.1》.

2 Downloading Rook and Replacing the Images

Download the latest release from GitHub; at the time of writing it is v1.7.9:

git clone --single-branch --branch v1.7.9 https://github.com/rook/rook.git

Change into the Ceph examples directory:

cd rook/cluster/examples/kubernetes/ceph

Find the following lines in operator.yaml:

# ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.4.0"
# ROOK_CSI_REGISTRAR_IMAGE: "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0"
# ROOK_CSI_RESIZER_IMAGE: "k8s.gcr.io/sig-storage/csi-resizer:v1.3.0"
# ROOK_CSI_PROVISIONER_IMAGE: "k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0"
# ROOK_CSI_SNAPSHOTTER_IMAGE: "k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.0"
# ROOK_CSI_ATTACHER_IMAGE: "k8s.gcr.io/sig-storage/csi-attacher:v3.3.0"

and change them to:

ROOK_CSI_CEPH_IMAGE: "quay.mirrors.ustc.edu.cn/cephcsi/cephcsi:v3.4.0"
ROOK_CSI_REGISTRAR_IMAGE: "registry.aliyuncs.com/google_containers/csi-node-driver-registrar:v2.3.0"
ROOK_CSI_RESIZER_IMAGE: "registry.aliyuncs.com/google_containers/csi-resizer:v1.3.0"
ROOK_CSI_PROVISIONER_IMAGE: "registry.aliyuncs.com/google_containers/csi-provisioner:v3.0.0"
ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.aliyuncs.com/google_containers/csi-snapshotter:v4.2.0"
ROOK_CSI_ATTACHER_IMAGE: "registry.aliyuncs.com/google_containers/csi-attacher:v3.3.0"

The main reason for this change: for well-known reasons, gcr.io images cannot be pulled from within mainland China, so the defaults are swapped for domestic mirrors. For more quay and gcr mirrors, refer to this article.

Mirror mappings that currently work:

  • quay:quay.io/xxx/yyy:zzz -> quay.mirrors.ustc.edu.cn/xxx/yyy:zzz
  • gcr:gcr.io/xxx/yyy:zzz -> gcr.mirrors.ustc.edu.cn/google-containers/yyy:zzz
  • gcr:gcr.io/xxx/yyy:zzz -> registry.aliyuncs.com/google_containers/yyy:zzz
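
If you prefer to script the operator.yaml edit instead of doing it by hand, here is a rough sed sketch. It assumes the stock v1.7.9 file layout, so back the file up and review the diff before moving on:

cp operator.yaml operator.yaml.bak
sed -i \
  -e 's|# \(ROOK_CSI_[A-Z_]*_IMAGE:\)|\1|' \
  -e 's|quay\.io/cephcsi|quay.mirrors.ustc.edu.cn/cephcsi|' \
  -e 's|k8s\.gcr\.io/sig-storage|registry.aliyuncs.com/google_containers|' \
  operator.yaml
diff -u operator.yaml.bak operator.yaml   # only the six image lines should have changed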

3 Deployment

kubectl apply -f ./crds.yaml -f ./common.yaml -f ./operator.yaml

Wait a moment for the operator pod to reach Running:

kubectl -n rook-ceph get pod
NAME                                 READY   STATUS    RESTARTS   AGE
rook-ceph-operator-985f59659-24448   1/1     Running   0          5m44s
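
Instead of re-running get pod, you can let kubectl block until the operator is ready (assuming the default app=rook-ceph-operator label):

kubectl -n rook-ceph wait pod \
  -l app=rook-ceph-operator \
  --for=condition=Ready --timeout=300s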

Then deploy the Ceph cluster itself:

kubectl apply -f ./cluster.yaml

The final state looks like this:

kubectl -n rook-ceph get pod
NAME                                                READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-bq7z6                              3/3     Running     0          81s
csi-cephfsplugin-ds2hh                              3/3     Running     0          81s
csi-cephfsplugin-ns5qc                              3/3     Running     0          81s
csi-cephfsplugin-provisioner-7f4d967bd-hcl2s        6/6     Running     0          80s
csi-cephfsplugin-provisioner-7f4d967bd-jr7p2        6/6     Running     0          80s
csi-rbdplugin-8xqvv                                 3/3     Running     0          82s
csi-rbdplugin-d8vf8                                 3/3     Running     0          82s
csi-rbdplugin-provisioner-746674579d-4pd5m          6/6     Running     0          81s
csi-rbdplugin-provisioner-746674579d-j6d7r          6/6     Running     0          82s
csi-rbdplugin-sndql                                 3/3     Running     0          82s
rook-ceph-crashcollector-host002-74c4d5998b-clznw   1/1     Running     0          55s
rook-ceph-crashcollector-host003-64c5c7b5dc-8vnfx   1/1     Running     0          52s
rook-ceph-crashcollector-host004-7bd9969b54-8nblx   1/1     Running     0          51s
rook-ceph-mgr-a-7ff4f7645d-429wq                    1/1     Running     0          55s
rook-ceph-mon-a-5457dfb858-2sc8c                    1/1     Running     0          89s
rook-ceph-mon-b-d4c648c9-4l6fj                      1/1     Running     0          77s
rook-ceph-mon-c-6f8f855cfb-8h2b6                    1/1     Running     0          66s
rook-ceph-operator-985f59659-g5d86                  1/1     Running     0          2m9s
rook-ceph-osd-prepare-host002--1-fv5w9              0/1     Completed   0          52s
rook-ceph-osd-prepare-host003--1-p4b9s              0/1     Completed   0          52s
rook-ceph-osd-prepare-host004--1-m5b6l              0/1     Completed   0          51s
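
Besides watching the pods, the CephCluster resource itself carries an aggregate status (the cluster created by cluster.yaml is named rook-ceph by default); once provisioning has settled, its phase should report Ready:

kubectl -n rook-ceph get cephcluster rook-ceph
kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.status.phase}{"\n"}'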

4 Verification

Deploy the toolbox pod, open a shell in it, and check the cluster status:

kubectl apply -f ./toolbox.yaml
kubectl exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -n rook-ceph -- bash
ceph status
  cluster:
    id:     30d0968b-cb42-483a-a3b7-60702820fef9
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 27s)
    mgr: a(active, since 6s)
    osd: 3 osds: 3 up (since 3s), 3 in (since 16s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   14 MiB used, 300 GiB / 300 GiB avail
    pgs:     100.000% pgs unknown
             1 unknown

ceph osd status
ID  HOST      USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE      
 0  host004  5064k  99.9G      0        0       0        0   exists,up  
 1  host002  5064k  99.9G      0        0       0        0   exists,up  
 2  host003  5064k  99.9G      0        0       0        0   exists,up

As shown above, the three 100 GB disks have been picked up as Ceph OSDs and brought up successfully.
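
Still inside the toolbox shell, two more standard Ceph commands give a quick view of the topology and raw capacity:

ceph osd tree     # CRUSH hierarchy: one OSD per worker node
ceph df           # raw usage plus per-pool capacity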

5 Adding a StorageClass

With the configuration above, a basic Ceph cluster is up and running. Rook actually supports three kinds of storage:

  • Block: Create block storage to be consumed by a pod (RWO); block devices, i.e. the familiar PV / PVC
  • Shared Filesystem: Create a filesystem to be shared across multiple pods (RWX); similar to NFS
  • Object: Create an object store that is accessible inside or outside the Kubernetes cluster; object storage, similar to S3 or OSS

The one I use most is the first: volumes are provisioned automatically when a PVC requests them and destroyed when the claim is released. Create the StorageClass:

kubectl apply -f ./csi/rbd/storageclass.yaml

Created successfully:

kubectl  get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   18s
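
Before the full MySQL example, a minimal throwaway PVC is enough to confirm that dynamic provisioning works; the claim name rbd-test-pvc below is made up for this test:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc rbd-test-pvc      # should turn Bound within a few seconds
kubectl delete pvc rbd-test-pvc   # reclaimPolicy is Delete, so the PV is removed as well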

Next, let's test whether the MySQL example gets its volume provisioned automatically:

cd ..
pwd
/root/rook-1.7.9/cluster/examples/kubernetes

kubectl apply -f ./mysql.yaml
kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-aaf9739c-f989-4c95-b10e-7327f768d756   20Gi       RWO            rook-ceph-block   52s
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-aaf9739c-f989-4c95-b10e-7327f768d756   20Gi       RWO            Delete           Bound    default/mysql-pv-claim   rook-ceph-block            54s

Success!
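
Once you are done with the test, deleting the example also releases the storage; because the StorageClass reclaim policy is Delete, the PV and its backing RBD image are cleaned up automatically:

kubectl delete -f ./mysql.yaml    # removes everything defined in mysql.yaml, including the PVC
kubectl get pv                    # the bound PV should disappear shortly afterwards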

Going a step further, you can also install the dashboard, a web management UI; see this article for details. I will stop here and not install it.

One thing to note: Ceph is noticeably slower than local disk. If your workload has strict performance requirements, think carefully before using it.
