兜兜    2021-09-22 10:19:33    2022-01-25 09:30:00   

ceph
```sh
Cephadm deploys and manages a Ceph cluster by connecting to hosts from the manager daemon via SSH to add, remove, or update Ceph daemon containers. It does not depend on external configuration or orchestration tools such as Ansible, Rook, or Salt.
Cephadm manages the full lifecycle of a Ceph cluster. It starts by bootstrapping a tiny Ceph cluster on a single node (one monitor and one manager) and then expands the cluster to additional hosts, provisioning all Ceph daemons and services. This can be done through the Ceph command-line interface (CLI) or the dashboard (GUI). Cephadm is new in the Octopus v15.2.0 release and does not support older versions of Ceph.
```

Node plan:

```sh
+--------+---------------+------+---------------------------------+
| Host   | ip            | Disk | Role                            |
+--------+---------------+------+---------------------------------+
| test01 | 172.16.100.1  | sdc  | cephadm,mon,mgr,mds,osd         |
| test02 | 172.16.100.2  | sdc  | mon,mgr,osd                     |
| test11 | 172.16.100.11 | sdb  | mon,mds,osd                     |
+--------+---------------+------+---------------------------------+
```

##### Installation notes

```sh
ceph version: 15.2.14 octopus (stable)
python: 3+
```

#### Install docker

[Refer to: Install Docker-CE on CentOS 7](https://ynotes.cn/blog/article_detail/118)

#### Install cephadm (`test01`)

```sh
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
./cephadm add-repo --release octopus
./cephadm install
```

Bootstrap the Ceph cluster with cephadm

```sh
cephadm bootstrap --mon-ip 172.16.100.1
```

```sh
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman|docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 797b1aa4-16cf-11ec-a787-00505693fd22
Verifying IP 172.16.100.1 port 3300 ...
Verifying IP 172.16.100.1 port 6789 ...
Mon IP 172.16.100.1 is in CIDR network 172.16.100.0/24
Pulling container image docker.io/ceph/ceph:v15...
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network...
Creating mgr...
Verifying port 9283 ...
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Wrote config to /etc/ceph/ceph.conf
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/10)...
mgr not available, waiting (2/10)...
mgr not available, waiting (3/10)...
mgr not available, waiting (4/10)...
mgr not available, waiting (5/10)...
mgr not available, waiting (6/10)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for Mgr epoch 5...
Mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to to /etc/ceph/ceph.pub
Adding key to root@localhost's authorized_keys...
Adding host test01...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Enabling mgr prometheus module...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for Mgr epoch 13...
Mgr epoch 13 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
             URL: https://ceph1:8443/
            User: admin
        Password: hj49zqnfm2
You can access the Ceph CLI with:
        sudo /usr/sbin/cephadm shell --fsid 797b1aa4-16cf-11ec-a787-00505693fd22 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
        ceph telemetry on
For more information see:
        https://docs.ceph.com/docs/master/mgr/telemetry/
Bootstrap complete.
```

```sh
Creates a monitor and a manager daemon for the new cluster on the local host.
Generates a new SSH key for the Ceph cluster and adds it to the root user's /root/.ssh/authorized_keys file.
Writes the minimal configuration file needed to communicate with the new cluster to /etc/ceph/ceph.conf.
Writes a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.
Writes a copy of the public key to /etc/ceph/ceph.pub.
```

Check the docker containers running on the node

```sh
$ docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED        STATUS      PORTS                                            NAMES
3a707133d00e   ceph/ceph:v15                "/usr/bin/ceph-mds -…"   5 days ago     Up 5 days                                                    ceph-797b1aa4-16cf-11ec-a787-00505693fd22-mds.cephfs.test01.kqizcm
36ada96953ea   ceph/ceph:v15                "/usr/bin/ceph-osd -…"   5 days ago     Up 5 days                                                    ceph-797b1aa4-16cf-11ec-a787-00505693fd22-osd.0
08aa352c61f1   ceph/ceph:v15                "/usr/bin/ceph-mgr -…"   5 days ago     Up 5 days                                                    ceph-797b1aa4-16cf-11ec-a787-00505693fd22-mgr.test01.soadiy
47be185fc946   prom/node-exporter:v0.18.1   "/bin/node_exporter …"   5 days ago     Up 5 days                                                    ceph-797b1aa4-16cf-11ec-a787-00505693fd22-node-exporter.test01
dad697dd6669   ceph/ceph-grafana:6.7.4      "/bin/sh -c 'grafana…"   5 days ago     Up 5 days                                                    ceph-797b1aa4-16cf-11ec-a787-00505693fd22-grafana.test01
754ba4d9e132   prom/alertmanager:v0.20.0    "/bin/alertmanager -…"   5 days ago     Up 5 days                                                    ceph-797b1aa4-16cf-11ec-a787-00505693fd22-alertmanager.test01
2b7e3664edbf   ceph/ceph:v15                "/usr/bin/ceph-crash…"   5 days ago     Up 5 days                                                    ceph-797b1aa4-16cf-11ec-a787-00505693fd22-crash.test01
c7cf9f5502de   ceph/ceph:v15                "/usr/bin/ceph-mon -…"   5 days ago     Up 5 days                                                    ceph-797b1aa4-16cf-11ec-a787-00505693fd22-mon.test01
ba3958a9dec8   prom/prometheus:v2.18.1      "/bin/prometheus --c…"   5 days ago     Up 5 days                                                    ceph-797b1aa4-16cf-11ec-a787-00505693fd22-prometheus.test01
c737fb65080c   elasticsearch:7.8.0          "/tini -- /usr/local…"   7 months ago   Up 5 days   0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   es7.6
```

Add the other hosts

```sh
ceph orch host add test02 172.16.100.2
ceph orch host add test11 172.16.100.11
```

Add the disks as OSDs

```sh
ceph orch daemon add osd test01:/dev/sdc
ceph orch daemon add osd test02:/dev/sdc
ceph orch daemon add osd test11:/dev/sdb
```

```sh
ceph osd pool create k8s 64
```
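As a supplement, once the hosts are added and the OSDs are created, the cluster and orchestrator state can be checked with the commands below; if CephFS will later be consumed from Kubernetes, a filesystem also has to be created. This is only a sketch based on the Octopus CLI; the filesystem name `cephfs` is an assumption, adjust as needed.

```sh
$ ceph -s                        # overall health, mon/mgr/osd summary
$ ceph orch host ls              # hosts managed by the orchestrator
$ ceph orch device ls            # available disks on each host
$ ceph orch ps                   # daemons running on each host
$ ceph fs volume create cephfs   # (optional) create a CephFS filesystem; cephadm deploys the mds automatically
```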

兜兜    2021-09-22 10:18:21    2021-09-24 09:41:49   

k8s cephfs
#### Environment

```sh
ceph cluster nodes: 172.16.100.1:6789,172.16.100.2:6789,172.16.100.11:6789
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)
ceph client: 15.2.14 (pod image: "elementalnet/cephfs-provisioner:0.8.0")
```

`Note: the official quay.io/external_storage/cephfs-provisioner:latest image ships a 13.x ceph client, which does not match the version 15 cluster and leaves the PV stuck in Pending. The third-party image elementalnet/cephfs-provisioner:0.8.0 is used instead.`

#### Install the ceph client

Install the client on the k8s nodes

```sh
$ yum install ceph-common -y
```

Copy the key from a ceph node to the k8s node

```sh
$ scp /etc/ceph/ceph.client.admin.keyring 172.16.100.100:/etc/ceph
```

Create the secret used to access ceph

```sh
$ ceph auth get-key client.admin | base64
QVFEa0RFTmhYQ1UzQUJBQXFmSWptMFJkSVpGaC9VR0V4M0RNc3c9PQ==
```

```sh
$ cat >cephfs-secret.yaml<<EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: cephfs
type: "kubernetes.io/rbd"
data:
  key: QVFEa0RFTmhYQ1UzQUJBQXFmSWptMFJkSVpGaC9VR0V4M0RNc3c9PQ==   # replace with the output above
EOF
```

```sh
$ kubectl create -f cephfs-secret.yaml
```

#### Install the cephfs provisioner

Reference: https://github.com/kubernetes-retired/external-storage/tree/master/ceph/cephfs/deploy

Create the RBAC objects

```sh
$ cat >cephfs-rbac.yaml <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
```

```sh
$ kubectl create -f cephfs-rbac.yaml
```

Create the cephfs-provisioner deployment

```sh
$ cat > cephfs-provisioner.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        #image: "quay.io/external_storage/cephfs-provisioner:latest"   # its ceph client is too old for the 15.2 cluster, use the image below
        image: "elementalnet/cephfs-provisioner:0.8.0"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        - name: PROVISIONER_SECRET_NAMESPACE
          value: cephfs
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner
EOF
```

```sh
$ kubectl create -f cephfs-provisioner.yaml
```

#### Create the storage class

```sh
$ cat > cephfs-storageclass.yaml <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
  namespace: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.16.100.1:6789,172.16.100.2:6789,172.16.100.11:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
  claimRoot: /pvc-volumes
EOF
```

```sh
$ kubectl create -f cephfs-storageclass.yaml
```

#### Create a test PVC

```sh
$ cat > cephfs-test-pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-test-pvc-1
  annotations:
    volume.beta.kubernetes.io/storage-class: "cephfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
EOF
```

```sh
$ kubectl create -f cephfs-test-pvc.yaml
```

Get the pvc

```sh
$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-test-pvc-1   Bound    pvc-571ae252-b080-4a67-8f3d-40bda1304fe3   500Mi      RWX            cephfs         2m1s
```

Get the pv

```sh
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
pvc-571ae252-b080-4a67-8f3d-40bda1304fe3   500Mi      RWX            Delete           Bound    default/cephfs-test-pvc-1   cephfs                  2m3s
```

#### Create a busybox deployment that mounts the PVC

```sh
$ cat > cephfs-test-busybox-deployment.yml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-test-deploy-busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cephfs-test-busybox
  template:
    metadata:
      labels:
        app: cephfs-test-busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "60000"]
        volumeMounts:
        - mountPath: "/mnt/cephfs"
          name: cephfs-test-pvc
      volumes:
      - name: cephfs-test-pvc
        persistentVolumeClaim:
          claimName: cephfs-test-pvc-1
EOF
```

```sh
$ kubectl create -f cephfs-test-busybox-deployment.yml
```

```sh
$ kubectl get pods
NAME                                          READY   STATUS    RESTARTS   AGE
cephfs-test-deploy-busybox-56556d86ff-4dmzn   1/1     Running   0          4m28s
cephfs-test-deploy-busybox-56556d86ff-6dr6v   1/1     Running   0          4m28s
cephfs-test-deploy-busybox-56556d86ff-b75mw   1/1     Running   0          4m28s
```

Test whether writes to the mounted cephfs volume are visible across pods

```sh
$ kubectl exec -ti cephfs-test-deploy-busybox-56556d86ff-4dmzn sh
/ # cd /mnt/cephfs/
/mnt/cephfs # ls
/mnt/cephfs # touch cephfs-test-deploy-busybox-56556d86ff-4dmzn
/mnt/cephfs # exit
```

```sh
$ kubectl exec -ti cephfs-test-deploy-busybox-56556d86ff-6dr6v sh
/ # cd /mnt/cephfs/
/mnt/cephfs # ls
cephfs-test-deploy-busybox-56556d86ff-4dmzn      # the file written by the other pod is visible
```
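As a supplement, to confirm on the ceph side which directories the provisioner creates under claimRoot (/pvc-volumes), CephFS can be mounted with the kernel client on any node that has ceph-common installed. This is only a sketch; the mount point and secret file path are assumptions:

```sh
$ mkdir -p /mnt/cephfs
$ ceph auth get-key client.admin > /etc/ceph/admin.secret     # save the admin key (example path)
$ mount -t ceph 172.16.100.1:6789,172.16.100.2:6789,172.16.100.11:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
$ ls /mnt/cephfs/pvc-volumes                                  # volume directories created by the provisioner
```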

兜兜    2021-09-17 14:20:53    2021-10-19 14:31:14   

cilium
Introduction

```sh
This document uses the cilium (eBPF) + kube-router (BGP routing) mode. etcd is an external cluster and the K8S ApiServer is highly available; adjust to your own environment.
Pros and cons of this mode: best performance, but pods cannot talk to non-BGP nodes, and it is still in Beta, so it is not recommended for production.
```

#### Install cilium

Create the secret cilium uses to access etcd

```sh
$ cd /opt/etcd/ssl    # enter the etcd certificate directory
$ kubectl create secret generic -n kube-system cilium-etcd-secrets \
    --from-file=etcd-client-ca.crt=ca.pem \
    --from-file=etcd-client.key=server-key.pem \
    --from-file=etcd-client.crt=server.pem
```

Install cilium with helm

```sh
$ helm repo add cilium https://helm.cilium.io/
$ export REPLACE_WITH_API_SERVER_IP=172.16.100.111
$ export REPLACE_WITH_API_SERVER_PORT=16443
# cilium 1.9.10; ipam.mode=kubernetes lets Kubernetes manage the PodCIDR IPAM;
# tunnel=disabled enables native routing within nativeRoutingCIDR (the local network);
# kubeProxyReplacement=strict fully replaces kube-proxy with cilium;
# k8sServiceHost/k8sServicePort point at the HA ApiServer; etcd.* points at the external etcd cluster with TLS.
$ helm install cilium cilium/cilium \
    --version 1.9.10 \
    --namespace kube-system \
    --set ipam.mode=kubernetes \
    --set tunnel=disabled \
    --set nativeRoutingCIDR=172.16.100.0/24 \
    --set kubeProxyReplacement=strict \
    --set k8sServiceHost=$REPLACE_WITH_API_SERVER_IP \
    --set k8sServicePort=$REPLACE_WITH_API_SERVER_PORT \
    --set etcd.enabled=true \
    --set etcd.ssl=true \
    --set "etcd.endpoints[0]=https://172.16.100.100:2379" \
    --set "etcd.endpoints[1]=https://172.16.100.101:2379" \
    --set "etcd.endpoints[2]=https://172.16.100.102:2379"
```

Test the cilium network

```sh
$ wget https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml
$ sed -i 's/google.com/baidu.com/g' connectivity-check.yaml    # use baidu.com as the external test target
$ kubectl apply -f connectivity-check.yaml
```

Check the pod status

```sh
kubectl get pods
NAME                                                    READY   STATUS    RESTARTS   AGE
echo-a-dc9bcfd8f-hgc64                                  1/1     Running   0          9m59s
echo-b-5884b7dc69-bl5px                                 1/1     Running   0          9m59s
echo-b-host-cfdd57978-dg6gw                             1/1     Running   0          9m59s
host-to-b-multi-node-clusterip-c4ff7ff64-m9zwz          1/1     Running   0          9m58s
host-to-b-multi-node-headless-84d8f6f4c4-8b797          1/1     Running   1          9m57s
pod-to-a-5cdfd4754d-jgmnt                               1/1     Running   0          9m59s
pod-to-a-allowed-cnp-7d7c8f9f9b-f9lpc                   1/1     Running   0          9m58s
pod-to-a-denied-cnp-75cb89dfd-jsjd4                     1/1     Running   0          9m59s
pod-to-b-intra-node-nodeport-c6d79965d-w98jx            1/1     Running   1          9m57s
pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5           1/1     Running   0          9m58s
pod-to-b-multi-node-headless-6696c5f8cd-fvcsl           1/1     Running   1          9m58s
pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns           1/1     Running   0          9m57s
pod-to-external-1111-d5c7bb4c4-sflfc                    1/1     Running   0          9m59s
pod-to-external-fqdn-allow-google-cnp-dcb4d867d-dxqzx   1/1     Running   0          7m16s
```

Remove kube-proxy

```sh
$ kubectl -n kube-system delete ds kube-proxy    # delete the kube-proxy DaemonSet
$ kubectl -n kube-system delete cm kube-proxy
$ iptables-restore <(iptables-save | grep -v KUBE)
```

#### Install kube-router (`BGP function only`)

```sh
$ curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/v0.4.0/daemonset/generic-kuberouter-only-advertise-routes.yaml
$ vim generic-kuberouter-only-advertise-routes.yaml    # edit the manifest, enable only the router
...
        - "--run-router=true"
        - "--run-firewall=false"
        - "--run-service-proxy=false"
        - "--enable-cni=false"
        - "--enable-pod-egress=false"
        - "--enable-ibgp=true"
        - "--enable-overlay=true"
        # - "--peer-router-ips=<CHANGE ME>"
        # - "--peer-router-asns=<CHANGE ME>"
        # - "--cluster-asn=<CHANGE ME>"
        - "--advertise-cluster-ip=true"        # advertise cluster IPs
        - "--advertise-external-ip=true"       # advertise svc external IPs (takes effect when the svc has an external-ip)
        - "--advertise-loadbalancer-ip=true"
...
```

Deploy kube-router

```sh
$ kubectl apply -f generic-kuberouter-only-advertise-routes.yaml
```

Check the kube-router pods

```sh
$ kubectl get pods -n kube-system |grep kube-router
kube-router-dz58s                          1/1     Running   0          2d18h
kube-router-vdwqg                          1/1     Running   0          2d18h
kube-router-wrc4v                          1/1     Running   0          2d18h
```

Check the routes

```sh
$ ip route
default via 172.16.100.254 dev ens192 proto static metric 100
10.0.1.0/24 via 10.0.1.104 dev cilium_host src 10.0.1.104
10.0.1.104 dev cilium_host scope link
10.244.0.0/24 via 10.244.0.81 dev cilium_host src 10.244.0.81
10.244.0.81 dev cilium_host scope link
10.244.1.0/24 via 172.16.100.101 dev ens192 proto 17    # BGP route
10.244.2.0/24 via 172.16.100.102 dev ens192 proto 17    # BGP route
172.16.100.0/24 dev ens192 proto kernel scope link src 172.16.100.100 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
```
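As a supplement, the kube-proxy replacement mode and the etcd (KVStore) connection can be verified from inside a cilium pod; the commands below are only a sketch:

```sh
$ kubectl -n kube-system exec ds/cilium -- cilium status | grep -E "KubeProxyReplacement|KVStore"   # expect Strict and a connected etcd
$ kubectl -n kube-system exec ds/cilium -- cilium service list                                      # Service forwarding table handled by cilium
```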

兜兜    2021-09-16 00:45:44    2022-01-25 09:20:56   

kubernetes nfs
```sh
Kubernetes provides a mechanism to create PVs automatically: Dynamic Provisioning. The core of this mechanism is the StorageClass API object.
A StorageClass defines two things:
1. The properties of the PV, e.g. the storage type and volume size.
2. The storage plugin needed to create such a PV.
With these two pieces of information, Kubernetes can match a submitted PVC to a StorageClass and call the storage plugin declared by that StorageClass to create the required PV.
In practice it is simple to use: write the YAML for your needs and apply it with kubectl create.
```

Setting up StorageClass + NFS roughly takes the following steps:

```sh
1. Create a working NFS server.
2. Create a Service Account that scopes the permissions the NFS provisioner runs with in the k8s cluster.
3. Create a StorageClass, which handles PVCs by calling the NFS provisioner and binds the resulting PV to the PVC.
4. Create the NFS provisioner. It does two things: it creates mount points (volumes) under the NFS export, and it creates PVs and associates them with those mount points.
```

#### Create the StorageClass

1. Create the NFS share

NFS server and export used in this environment

```sh
IP: 172.16.100.100
PATH: /data/k8s
```

2. Create the ServiceAccount

```sh
$ cat > rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
```

```sh
$ kubectl apply -f rbac.yaml
```

3. Create the NFS provisioner

```sh
$ cat > nfs-provisioner.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default    # must match the namespace used in the RBAC manifests
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage-provisioner
            - name: NFS_SERVER
              value: 172.16.100.100
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.100.100
            path: /data/k8s
EOF
```

```sh
$ kubectl apply -f nfs-provisioner.yaml
```

4. Create the storageclass

```sh
$ cat >nfs-storageclass.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage-provisioner
EOF
```

```sh
$ kubectl apply -f nfs-storageclass.yaml
```

5. Test by creating a pvc

```sh
$ cat > test-pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"   # the storage class created above
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
```

```sh
$ kubectl apply -f test-pvc.yaml
```

Check the pv

```sh
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            27m
```

Check the pvc

```sh
$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            managed-nfs-storage   27m
```

Create nginx-statefulset.yaml

```sh
$ cat >nginx-statefulset.yaml <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Mi
EOF
```

```sh
$ kubectl apply -f nginx-statefulset.yaml
```

Check the pods

```sh
$ kubectl get pods -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          21m
web-1   1/1     Running   0          21m
```

Check the pv

```sh
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-2c23ca4d-7c98-4021-8099-c208b418aa5b   20Mi       RWO            Delete           Bound    default/www-web-0    managed-nfs-storage            22m
pvc-4d8da9d3-cb51-49e6-bd0e-c5bad7878551   20Mi       RWO            Delete           Bound    default/www-web-1    managed-nfs-storage            21m
pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            32m
```

Check the pvc

```sh
$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            managed-nfs-storage   32m
www-web-0    Bound    pvc-2c23ca4d-7c98-4021-8099-c208b418aa5b   20Mi       RWO            managed-nfs-storage   22m
www-web-1    Bound    pvc-4d8da9d3-cb51-49e6-bd0e-c5bad7878551   20Mi       RWO            managed-nfs-storage   21m
```

Reference: https://www.cnblogs.com/panwenbin-logs/p/12196286.html
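As a supplement, nfs-client-provisioner creates one subdirectory per PV under the NFS export, named after the namespace, PVC, and PV; a quick end-to-end check (the commands are an example, the glob pattern assumes that naming) is to write from a pod and read the file back on the NFS server:

```sh
$ kubectl exec web-0 -- sh -c 'echo hello > /usr/share/nginx/html/index.html'   # write a test file inside the pod
$ ls /data/k8s/                                                                 # on the NFS server (172.16.100.100): the auto-created subdirectories
$ cat /data/k8s/default-www-web-0-pvc-*/index.html                              # the data written by the pod
```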

兜兜    2021-09-15 15:56:38    2022-01-25 09:20:20   

Keepalived LVS ingress
```sh
The firewall maps ports 80/443 to ports 80/443 on the LVS VIP, and LVS load-balances ports 80/443 of the K8S node IPs. ingress-nginx-controller is exposed via HostNetwork:80/443. The resulting path is 119.x.x.x:80/443 --> 172.16.100.99:80/443 (LVS VIP) --> 172.16.100.100:80/443, 172.16.100.101:80/443, 172.16.100.102:80/443.
```

Planned layout

```sh
+----------------+----------------+--------+--------------------------+
| Host           | IP             | Port   | SoftWare                 |
+----------------+----------------+--------+--------------------------+
| LVS01          | 172.16.100.27  | 80/443 | LVS,Keepalived           |
| LVS02          | 172.16.100.28  | 80/443 | LVS,Keepalived           |
| RS/k8s-master1 | 172.16.100.100 | 80/443 | ingress-nginx-controller |
| RS/k8s-master2 | 172.16.100.101 | 80/443 | ingress-nginx-controller |
| RS/k8s-node1   | 172.16.100.102 | 80/443 | ingress-nginx-controller |
| VIP            | 172.16.100.99  | 80/443 | /                        |
+----------------+----------------+--------+--------------------------+
```

Install lvs and keepalived (`172.16.100.27/172.16.100.28`)

```sh
$ yum install ipvsadm keepalived -y
$ systemctl enable keepalived
```

Configure keepalived (`172.16.100.27`)

```sh
$ cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
   router_id LVS_27              # route id
}

vrrp_instance VI_1 {
    state MASTER                 # master node
    interface ens192             # NIC
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.100.99
    }
}

virtual_server 172.16.100.99 443 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 172.16.100.100 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.101 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.102 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 172.16.100.99 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 172.16.100.100 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.101 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.102 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
EOF
```

Configure keepalived (`172.16.100.28`)

```sh
$ cat /etc/keepalived/keepalived.conf
...
global_defs {
   router_id LVS_28              # route id, different on each machine
}

vrrp_instance VI_1 {
    state BACKUP                 # backup node
    interface ens192             # NIC name
    virtual_router_id 51
    priority 99                  # priority
...
```

Configure the RS nodes (`172.16.100.100/172.16.100.101/172.16.100.102`)

```sh
$ cat >/etc/init.d/lvs_rs.sh <<'EOF'
vip=172.16.100.99
mask='255.255.255.255'
dev=lo:1

case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $dev $vip netmask $mask #broadcast $vip up
    echo "The RS Server is Ready!"
    ;;
stop)
    ifconfig $dev down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "The RS Server is Canceled!"
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
EOF
```

Run the script

```sh
$ chmod +x /etc/init.d/lvs_rs.sh
$ /etc/init.d/lvs_rs.sh start
```

```sh
$ ip a    # check that the VIP is bound on lo:1
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.16.100.99/32 scope global lo:1
       valid_lft forever preferred_lft forever
```

Start keepalived (`172.16.100.27/172.16.100.28`)

```sh
$ systemctl start keepalived
```

Check that the VIP is bound

```sh
$ ip a
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:93:ed:a4 brd ff:ff:ff:ff:ff:ff
    inet 172.16.100.27/24 brd 172.16.100.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 172.16.100.99/32 scope global ens192
       valid_lft forever preferred_lft forever
```

Check the LVS table

```sh
$ ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.100.99:80 rr persistent 50
  -> 172.16.100.100:80            Route   1      0          0
  -> 172.16.100.101:80            Route   1      0          0
  -> 172.16.100.102:80            Route   1      0          0
TCP  172.16.100.99:443 rr persistent 50
  -> 172.16.100.100:443           Route   1      0          0
  -> 172.16.100.101:443           Route   1      0          0
  -> 172.16.100.102:443           Route   1      0          0
```
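As a supplement, forwarding through the VIP and a failover can be verified with the commands below (the Host header domain is an example value, substitute a real ingress rule):

```sh
$ curl -k -I -H "Host: test.example.com" https://172.16.100.99/   # request through the VIP; expect a response from ingress-nginx
$ ipvsadm -L -n --stats                                           # per-RS forwarding counters should increase
$ systemctl stop keepalived                                       # on LVS01 (172.16.100.27): simulate a failure
$ ip a | grep 172.16.100.99                                       # on LVS02 (172.16.100.28): the VIP should have moved here
```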

兜兜    2021-09-15 15:55:33    2022-01-25 09:21:07   

kubernetes
Ways to deploy a production Kubernetes cluster

```sh
kubeadm
Kubeadm is a deployment tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.

Binary packages
Download the release binaries from github and deploy every component by hand to assemble the cluster.
```

This walkthrough builds the cluster with kubeadm.

Architecture

```sh
Client --> firewall (119.x.x.x / 172.16.100.2) --> keepalived VIP 172.16.100.99 (lvs27: 172.16.100.27, lvs28: 172.16.100.28)
       --> hostport/nodeport on K8S-Master1 (172.16.100.100), K8S-Master2 (172.16.100.101), K8S-Node1 (172.16.100.102)
```

#### Deploy the highly available load balancer

```sh
As a container cluster system, Kubernetes already provides application-level high availability: health checks plus restart policies give Pods self-healing, and the scheduler spreads Pods across Nodes, keeps the expected replica count, and restarts Pods on other Nodes when a Node fails.
For the cluster itself, high availability has two further aspects: the etcd database and the Kubernetes Master components.
A kubeadm-built cluster runs only a single etcd by default, which is a single point of failure, so an independent etcd cluster is built here.
The Master node is the control center; it keeps the whole cluster healthy by constantly communicating with kubelet and kube-proxy on the worker nodes. If the Master fails, no cluster management is possible via kubectl or the API.
The Master runs three services: kube-apiserver, kube-controller-manager and kube-scheduler. kube-controller-manager and kube-scheduler are already highly available through leader election, so Master HA mainly concerns kube-apiserver. Since it serves an HTTP API, making it highly available is similar to a web server: put a load balancer in front of it, which can also be scaled horizontally.
```

[See the installation doc](https://ynotes.cn/blog/article_detail/280)

#### Deploy the etcd cluster

Prepare the cfssl certificate tool

cfssl is an open-source certificate management tool that generates certificates from json files and is easier to use than openssl.

Run this on any machine; the Master node is used here.

```sh
$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
$ mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
$ mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
```

Generate the etcd certificates

```sh
$ mkdir -p ~/etcd_tls
$ cd ~/etcd_tls
```

```sh
$ cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```

```sh
$ cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangdong",
      "ST": "Guangzhou"
    }
  ]
}
EOF
```

Generate the CA certificate

```sh
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```

Issue the etcd HTTPS certificate with the self-signed CA

Create the certificate signing request

```sh
$ cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "172.16.100.100",
    "172.16.100.101",
    "172.16.100.102",
    "172.16.100.103",
    "172.16.100.104"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangdong",
      "ST": "Guangzhou"
    }
  ]
}
EOF
```

Generate the certificate

```sh
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
```

Download the binaries

```sh
$ wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
```

```sh
$ mkdir /opt/etcd/{bin,cfg,ssl} -p
$ tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
$ mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
```

Create the etcd configuration file

```sh
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.100.100:2380"
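# Note: 2380 is the peer (cluster) port and 2379 the client port; on nodes 2 and 3
# adjust ETCD_NAME and the listen/advertise IPs in this file accordingly.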
ETCD_LISTEN_CLIENT_URLS="https://172.16.100.100:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.100.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.100.100:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.16.100.100:2380,etcd-2=https://172.16.100.101:2380,etcd-3=https://172.16.100.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```

Manage etcd with systemd

```sh
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem --logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Copy the certificates generated above

```sh
$ cp ~/etcd_tls/ca*pem ~/etcd_tls/server*pem /opt/etcd/ssl/
```

Copy all files generated on node 1 to node 2 and node 3

```sh
$ scp -r /opt/etcd/ root@172.16.100.101:/opt/
$ scp -r /opt/etcd/ root@172.16.100.102:/opt/
$ scp -r /usr/lib/systemd/system/etcd.service root@172.16.100.102:/usr/lib/systemd/system/
$ scp -r /usr/lib/systemd/system/etcd.service root@172.16.100.101:/usr/lib/systemd/system/
```

Start all three nodes

```sh
$ systemctl daemon-reload
$ systemctl start etcd
$ systemctl enable etcd
```

```sh
$ ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.100.100:2379,https://172.16.100.101:2379,https://172.16.100.102:2379" endpoint health --write-out=table
```

```sh
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://172.16.100.100:2379 |   true | 12.812954ms |       |
| https://172.16.100.102:2379 |   true | 13.596982ms |       |
| https://172.16.100.101:2379 |   true | 14.607151ms |       |
+-----------------------------+--------+-------------+-------+
```

#### Install Docker

[See the Docker installation section](https://ynotes.cn/blog/article_detail/271)

#### Install kubeadm, kubelet, kubectl

```sh
$ cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

Install kubeadm, kubelet and kubectl

```sh
$ yum install -y kubelet-1.18.20 kubeadm-1.18.20 kubectl-1.18.20
$ systemctl enable kubelet
```

Cluster configuration file

```sh
$ cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.100.100
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - k8s-master1
  - k8s-master2
  - k8s-master3
  - 172.16.100.100
  - 172.16.100.101
  - 172.16.100.102
  - 172.16.100.111
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.16.100.111:16443
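# controlPlaneEndpoint is the HA entry point for the apiserver (the load balancer VIP:port, here 172.16.100.111:16443)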
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - https://172.16.100.100:2379
    - https://172.16.100.101:2379
    - https://172.16.100.102:2379
    caFile: /opt/etcd/ssl/ca.pem
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
EOF
```

Initialize the cluster (`k8s-master1`)

```sh
$ kubeadm init --config kubeadm-config.yaml
```

```sh
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde
```

Configure access to the cluster

```sh
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Join the cluster (`k8s-master2`)

Copy the certificates from k8s-master1

```sh
$ scp -r 172.16.100.100:/etc/kubernetes/pki/ /etc/kubernetes/
```

```sh
$ kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde \
    --control-plane
```

Join the cluster (`k8s-node1`)

```sh
$ kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde
```

Allow pods to be scheduled on the master nodes

```sh
$ kubectl taint node k8s-master1 node-role.kubernetes.io/master-
$ kubectl taint node k8s-master2 node-role.kubernetes.io/master-
```

#### Deploy the network

`Option 1: calico (BGP/IPIP)`: the most widely used network plugin. BGP mode performs best, but pods cannot talk to non-BGP nodes; IPIP has no network restrictions and is slightly slower than BGP.
[Reference: configure the calico network for k8s](https://ynotes.cn/blog/article_detail/274)

`Option 2: flannel (vxlan/host-gw)`: host-gw performs best, but pods cannot talk to nodes outside the cluster; vxlan works across networks and is slightly slower than host-gw.
[Reference: K8S-kubeadm cluster deployment (part 1), flannel section](https://ynotes.cn/blog/article_detail/271)

`Option 3: cilium (eBPF) + kube-router (BGP routing)`: the best-performing of the three options; the drawbacks are that pods cannot talk to non-BGP nodes and it is still Beta, so it is not recommended for production.
[Reference: configure the cilium network for k8s](https://ynotes.cn/blog/article_detail/286)

#### Reset the K8S cluster

##### Reset a node with kubeadm

```sh
$ kubeadm reset

# kubeadm reset cleans up the files created by kubeadm init or kubeadm join on the node's local filesystem. On a control-plane node it also removes the node's local stacked etcd member from the etcd cluster and removes the node's entry from the kubeadm ClusterStatus object, a kubeadm-managed Kubernetes API object that holds the list of kube-apiserver endpoints.
# kubeadm reset phase can be used to run the individual stages of this workflow. To skip the list of phases, you can use