
兜兜 · 2021-09-22 10:18:21 (updated 2021-09-24 09:41:49)

k8s cephfs
#### Environment

```sh
Ceph cluster nodes: 172.16.100.1:6789,172.16.100.2:6789,172.16.100.11:6789
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)
Ceph client: 15.2.14 (pod image: "elementalnet/cephfs-provisioner:0.8.0")
```

`Note: the official image quay.io/external_storage/cephfs-provisioner:latest currently ships a 13.x Ceph client, which does not match the cluster's version 15 and leaves PVs stuck in Pending. This article therefore uses the third-party image elementalnet/cephfs-provisioner:0.8.0.`

The manifests below assume the `cephfs` namespace already exists; create it first with `kubectl create namespace cephfs`.

#### Install the Ceph client

Install the client on the K8s nodes:

```sh
$ yum install ceph-common -y
```

Copy the admin keyring from a Ceph node to the K8s node:

```sh
$ scp /etc/ceph/ceph.client.admin.keyring 172.16.100.100:/etc/ceph
```

Create the Secret used to access Ceph:

```sh
$ ceph auth get-key client.admin | base64
QVFEa0RFTmhYQ1UzQUJBQXFmSWptMFJkSVpGaC9VR0V4M0RNc3c9PQ==
```

```sh
$ cat >cephfs-secret.yaml<<EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: cephfs
type: "kubernetes.io/rbd"
data:
  key: QVFEa0RFTmhYQ1UzQUJBQXFmSWptMFJkSVpGaC9VR0V4M0RNc3c9PQ== # replace with the output above
EOF
```

```sh
$ kubectl create -f cephfs-secret.yaml
```

#### Install the CephFS provisioner

Reference: https://github.com/kubernetes-retired/external-storage/tree/master/ceph/cephfs/deploy

Create the RBAC objects:

```sh
$ cat >cephfs-rbac.yaml <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns", "coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
```

```sh
$ kubectl create -f cephfs-rbac.yaml
```

Create the cephfs-provisioner Deployment:

```sh
$ cat > cephfs-provisioner.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
        - name: cephfs-provisioner
          # The Ceph cluster runs 15.2; the client in the official image below is
          # too old and the image is no longer updated, so use the replacement.
          #image: "quay.io/external_storage/cephfs-provisioner:latest"
          image: "elementalnet/cephfs-provisioner:0.8.0"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs
            - name: PROVISIONER_SECRET_NAMESPACE
              value: cephfs
          command:
            - "/usr/local/bin/cephfs-provisioner"
          args:
            - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner
EOF
```

```sh
$ kubectl create -f cephfs-provisioner.yaml
```
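Before creating the StorageClass, it's worth confirming that the provisioner pod actually started and is logging normally, since a broken provisioner is what leaves PVs stuck in Pending. A minimal check, using the `app: cephfs-provisioner` label from the Deployment above:

```sh
# Wait for the rollout to finish, then inspect the pod and its logs.
$ kubectl -n cephfs rollout status deployment/cephfs-provisioner
$ kubectl -n cephfs get pods -l app=cephfs-provisioner
$ kubectl -n cephfs logs deployment/cephfs-provisioner --tail=20
```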
#### Create the StorageClass

```sh
$ cat > cephfs-storageclass.yaml <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.16.100.1:6789,172.16.100.2:6789,172.16.100.11:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
  claimRoot: /pvc-volumes
EOF
```

```sh
$ kubectl create -f cephfs-storageclass.yaml
```

#### Create a test PVC

```sh
$ cat > cephfs-test-pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-test-pvc-1
  annotations:
    volume.beta.kubernetes.io/storage-class: "cephfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
EOF
```

```sh
$ kubectl create -f cephfs-test-pvc.yaml
```

Check the PVC:

```sh
$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-test-pvc-1   Bound    pvc-571ae252-b080-4a67-8f3d-40bda1304fe3   500Mi      RWX            cephfs         2m1s
```

Check the PV:

```sh
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
pvc-571ae252-b080-4a67-8f3d-40bda1304fe3   500Mi      RWX            Delete           Bound    default/cephfs-test-pvc-1   cephfs                  2m3s
```

#### Create a busybox Deployment that mounts the PVC

```sh
$ cat > cephfs-test-busybox-deployment.yml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-test-deploy-busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cephfs-test-busybox
  template:
    metadata:
      labels:
        app: cephfs-test-busybox
    spec:
      containers:
        - name: busybox
          image: busybox
          command: ["sleep", "60000"]
          volumeMounts:
            - mountPath: "/mnt/cephfs"
              name: cephfs-test-pvc
      volumes:
        - name: cephfs-test-pvc
          persistentVolumeClaim:
            claimName: cephfs-test-pvc-1
EOF
```

```sh
$ kubectl create -f cephfs-test-busybox-deployment.yml
```

```sh
$ kubectl get pods
NAME                                          READY   STATUS    RESTARTS   AGE
cephfs-test-deploy-busybox-56556d86ff-4dmzn   1/1     Running   0          4m28s
cephfs-test-deploy-busybox-56556d86ff-6dr6v   1/1     Running   0          4m28s
cephfs-test-deploy-busybox-56556d86ff-b75mw   1/1     Running   0          4m28s
```

Verify that writes to the CephFS mount are visible across pods:

```sh
$ kubectl exec -ti cephfs-test-deploy-busybox-56556d86ff-4dmzn sh
/ # cd /mnt/cephfs/
/mnt/cephfs # ls
/mnt/cephfs # touch cephfs-test-deploy-busybox-56556d86ff-4dmzn
/mnt/cephfs # exit
```

```sh
$ kubectl exec -ti cephfs-test-deploy-busybox-56556d86ff-6dr6v sh
/ # cd /mnt/cephfs/
/mnt/cephfs # ls
cephfs-test-deploy-busybox-56556d86ff-4dmzn   # the file written in the first pod is visible: sync works
```
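To see what the provisioner created on the Ceph side, you can mount the CephFS root from any host that has the admin keyring and look under the StorageClass's `claimRoot`. A sketch assuming the kernel CephFS client is available (the `/mnt/cephfs-root` mountpoint is arbitrary, and the exact directory names under `claimRoot` are chosen by the provisioner):

```sh
$ mkdir -p /mnt/cephfs-root
$ mount -t ceph 172.16.100.1:6789:/ /mnt/cephfs-root \
    -o name=admin,secret=$(ceph auth get-key client.admin)
# One directory per dynamically provisioned PV should appear here.
$ ls /mnt/cephfs-root/pvc-volumes
```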

兜兜 · 2021-08-26 18:05:39 (updated 2021-08-26 18:09:23)

kubernetes k8s
#### 1. Find the pod's Docker container ID

```bash
$ kubectl get pod -n kube-system nginx-ingress-controller-8489c5b8c4-fccs5 -o json
```

```json
...
"containerStatuses": [
    {
        "containerID": "docker://eda6562ab8d504599016d4dba8ceea4b9b255dbf97446f044ce89ad4410ab49a",
        "image": "registry-vpc.cn-shenzhen.aliyuncs.com/acs/aliyun-ingress-controller:v0.44.0.3-8e83e7dc6-aliyun",
        "imageID": "docker-pullable://registry-vpc.cn-shenzhen.aliyuncs.com/acs/aliyun-ingress-controller@sha256:7238b6230b678b312113a891ad5f9f7bbedc7839a913eaaee0def8aa748c3313",
        "lastState": {
            "terminated": {
                "containerID": "docker://e9a6d774429c0385be4f3a3129860677a4380277bfe5647563ae4541335111a7",
                "exitCode": 255,
                "finishedAt": "2021-07-09T06:48:39Z",
                "reason": "Error",
                "startedAt": "2021-07-08T03:29:36Z"
            }
        },
        "name": "nginx-ingress-controller",
        "ready": true,
        "restartCount": 1,
        "started": true,
        "state": {
            "running": {
                "startedAt": "2021-07-09T06:48:56Z"
            }
        }
    }
],
...
```

#### 2. Find the network interface index

Inside the container, `eth0`'s `iflink` is the ifindex of its host-side veth peer:

```bash
$ docker exec eda6562ab8d504599016d4dba8ceea4b9b255dbf97446f044ce89ad4410ab49a /bin/bash -c 'cat /sys/class/net/eth0/iflink'
```

```bash
7
```

#### 3. Find the veth interface matching that index

```bash
$ IFLINK_INDEX=7   # the value returned by the previous step
$ for i in /sys/class/net/veth*/ifindex; do grep -l "^$IFLINK_INDEX$" $i; done
```

```bash
/sys/class/net/vethfe247b7f/ifindex
```

#### 4. Capture the pod's traffic with tcpdump

```bash
$ tcpdump -i vethfe247b7f -nnn
```

```bash
listening on vethfe247b7f, link-type EN10MB (Ethernet), capture size 262144 bytes
18:04:47.362688 IP 100.127.5.192.25291 > 10.151.0.78.80: Flags [S], seq 3419414283, win 2920, options [mss 1460,sackOK,TS val 2630688633 ecr 0,nop,wscale 0], length 0
18:04:47.362704 IP 10.151.0.78.80 > 100.127.5.192.25291: Flags [S.], seq 3153672948, ack 3419414284, win 65535, options [mss 1460,sackOK,TS val 1408463572 ecr 2630688633,nop,wscale 9], length 0
18:04:47.362857 IP 100.127.5.192.25291 > 10.151.0.78.80: Flags [.], ack 1, win 2920, options [nop,nop,TS val 2630688633 ecr 1408463572], length 0
```
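Steps 1–3 can be strung together into a small helper run on the node hosting the pod. A sketch assuming a Docker runtime and a single container per pod; the script name and argument order are invented for this example:

```bash
#!/usr/bin/env bash
# find-veth.sh <namespace> <pod> — print the host-side veth of a pod's eth0.
# Must be run on the node where the pod is scheduled.
set -euo pipefail

NAMESPACE="$1"; POD="$2"

# Step 1: container ID, with the docker:// prefix stripped.
CID=$(kubectl get pod -n "$NAMESPACE" "$POD" \
        -o jsonpath='{.status.containerStatuses[0].containerID}' | sed 's#^docker://##')

# Step 2: eth0's peer interface index inside the container.
IFLINK=$(docker exec "$CID" cat /sys/class/net/eth0/iflink)

# Step 3: the host veth whose ifindex equals that peer index.
grep -l "^${IFLINK}$" /sys/class/net/veth*/ifindex | awk -F/ '{print $5}'
```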

兜兜 · 2021-08-26 14:51:18 (updated 2021-09-10 12:44:37)

k8s kubernetes
Taking a user request to the gateway service at https://gw.example.com as the example, the full path of the network packet is:

`CLIENT -> Alibaba Cloud SLB -> K8S NODE (IPVS/kube-proxy) -> INGRESS POD (nginx controller) -> GATEWAY POD (gateway service)`

```bash
CLIENT IP:      CLIENT_IP
SLB IP:         47.107.x.x
K8S NODE IP:    172.18.238.85
INGRESS POD IP: 10.151.0.78
GATEWAY POD IP: 10.151.0.107
```

#### 1. CLIENT --> Alibaba Cloud SLB

```bash
gw.example.com resolves to 47.107.x.x (the SLB public IP);
the packet reaches the SLB (CLIENT_IP:RANDOM_PORT ----> 47.107.x.x:443)
```

#### 2. Alibaba Cloud SLB --> K8S NODE (IPVS/kube-proxy)

```bash
The SLB's backend virtual server is configured as TCP:443 --> 172.18.238.85:30483;
the packet reaches the K8S node (CLIENT_IP:RANDOM_PORT ----> 172.18.238.85:30483)
```

Capture on the K8S node:

```bash
$ tcpdump -i eth0 ip host CLIENT_IP -n
14:39:33.043508 IP CLIENT_IP.RANDOM_PORT > 172.18.238.85.30483: Flags [S], seq 1799504552, win 29200, options [mss 1460,sackOK,TS val 1092093183 ecr 0,nop,wscale 7], length 0
```

#### 3. K8S NODE (IPVS/kube-proxy) --> INGRESS POD (nginx controller)

IPVS backend configuration:

```bash
$ ipvsadm -L -n
TCP  172.18.238.85:30483 rr
  -> 10.151.0.78:443    Masq    1    2    40
  -> 10.151.0.83:443    Masq    1    8    42
```

```bash
The packet reaches the nginx ingress pod (CLIENT_IP:RANDOM_PORT ----> 10.151.0.78:443)
```

Capture the nginx ingress pod's traffic on the K8S node ([how to capture a pod's packets](https://ynotes.cn/blog/article_detail/260)):

```bash
$ tcpdump -i vethfe247b7f -nnn | grep "\.443"   # vethfe247b7f is the ingress controller pod's veth
16:45:28.687578 IP CLIENT_IP.RANDOM_PORT > 10.151.0.78.443: Flags [S], seq 2547516746, win 29200, options [mss 1460,sackOK,TS val 1099648828 ecr 0,nop,wscale 7], length 0
```

#### 4. INGRESS POD (nginx controller) --> GATEWAY POD (gateway service)

```bash
$ kubectl get pods -o wide --all-namespaces | grep 10.151.0.78
kube-system   nginx-ingress-controller-8489c5b8c4-fccs5   1/1   Running   1   49d   10.151.0.78   cn-shenzhen.172.18.238.85   <none>   <none>
```

```bash
The packet reaches the gateway service (10.151.0.78:57270 ----> 10.151.0.107:18880)
```

Capture the gateway pod's traffic on the K8S node:

```bash
$ tcpdump -i veth553c1000 -nnn port 18880
17:05:58.463497 IP 10.151.0.78.57270 > 10.151.0.107.18880: Flags [S], seq 3538162899, win 65535, options [mss 1460,sackOK,TS val 878505289 ecr 0,nop,wscale 9], length 0
```
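A quick way to cross-check step 3 is to look up the NodePort's IPVS virtual server directly and compare its real servers with the ingress controller pod IPs. The Service name `nginx-ingress-lb` below is an assumption; adjust it to whatever your ingress Service is actually called:

```bash
# The NodePort Service the SLB forwards to (name assumed):
$ kubectl -n kube-system get svc nginx-ingress-lb -o wide

# The IPVS real servers behind that NodePort (run on the K8S node):
$ ipvsadm -Ln -t 172.18.238.85:30483

# The ingress controller pod IPs should match the real servers above:
$ kubectl -n kube-system get pods -o wide | grep nginx-ingress-controller
```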

