兜兜    2021-09-22 10:20:24    2022-01-25 09:20:25   

kubernetes k8s nginx ingress
#### Environment
```sh
k8s version: 1.18.20
ingress-nginx: 3.4.0
```
#### Install ingress-nginx
##### Download the Helm chart
```sh
$ wget https://github.com/kubernetes/ingress-nginx/releases/download/ingress-nginx-3.4.0/ingress-nginx-3.4.0.tgz
$ tar xvf ingress-nginx-3.4.0.tgz
$ cd ingress-nginx
```
#### Configure the parameters
```sh
$ cat > values.yaml <<EOF
controller:
  image:
    repository: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller   # use the Aliyun mirror
    tag: "v0.40.1"
    digest: sha256:abffcf2d25e3e7c7b67a315a7c664ec79a1588c9c945d3c7a75637c2f55caec6
    pullPolicy: IfNotPresent
    runAsUser: 101
    allowPrivilegeEscalation: true
  containerPort:
    http: 80
    https: 443
  config: {}
  configAnnotations: {}
  proxySetHeaders: {}
  addHeaders: {}
  dnsConfig: {}
  dnsPolicy: ClusterFirst
  reportNodeInternalIp: false
  hostNetwork: true          # use the host network
  hostPort:
    enabled: true            # expose host ports
    ports:
      http: 80
      https: 443
  electionID: ingress-controller-leader
  ingressClass: nginx
  podLabels: {}
  podSecurityContext: {}
  sysctls: {}
  publishService:
    enabled: true
    pathOverride: ""
  scope:
    enabled: false
  tcp:
    annotations: {}
  udp:
    annotations: {}
  extraArgs: {}
  extraEnvs: []
  kind: DaemonSet            # run as a DaemonSet
  annotations: {}
  labels: {}
  updateStrategy: {}
  minReadySeconds: 0
  tolerations: []
  affinity: {}
  topologySpreadConstraints: []
  terminationGracePeriodSeconds: 300
  nodeSelector:
    ingress: nginx           # only schedule onto nodes labeled ingress=nginx
  livenessProbe:
    failureThreshold: 5
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
    port: 10254
  readinessProbe:
    failureThreshold: 3
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
    port: 10254
  healthCheckPath: "/healthz"
  podAnnotations: {}
  # replicaCount: 1          # not used (DaemonSet)
  minAvailable: 1
  resources:
    requests:
      cpu: 100m
      memory: 90Mi
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 11
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  autoscalingTemplate: []
  enableMimalloc: true
  customTemplate:
    configMapName: ""
    configMapKey: ""
  service:
    enabled: true
    annotations: {}
    labels: {}
    externalIPs: []
    loadBalancerSourceRanges: []
    enableHttp: true
    enableHttps: true
    ports:
      http: 80
      https: 443
    targetPorts:
      http: http
      https: https
    type: LoadBalancer
    nodePorts:
      http: ""
      https: ""
      tcp: {}
      udp: {}
    internal:
      enabled: false
      annotations: {}
  extraContainers: []
  extraVolumeMounts: []
  extraVolumes: []
  extraInitContainers: []
  admissionWebhooks:
    enabled: false           # disabled
    failurePolicy: Fail
    port: 8443
    service:
      annotations: {}
      externalIPs: []
      loadBalancerSourceRanges: []
      servicePort: 443
      type: ClusterIP
    patch:
      enabled: true
      image:
        repository: docker.io/jettech/kube-webhook-certgen
        tag: v1.3.0
        pullPolicy: IfNotPresent
      priorityClassName: ""
      podAnnotations: {}
      nodeSelector: {}
      tolerations: []
      runAsUser: 2000
  metrics:
    port: 10254
    enabled: false
    service:
      annotations: {}
      externalIPs: []
      loadBalancerSourceRanges: []
      servicePort: 9913
      type: ClusterIP
    serviceMonitor:
      enabled: false
      additionalLabels: {}
      namespace: ""
      namespaceSelector: {}
      scrapeInterval: 30s
      targetLabels: []
      metricRelabelings: []
    prometheusRule:
      enabled: false
      additionalLabels: {}
      rules: []
  lifecycle:
    preStop:
      exec:
        command:
          - /wait-shutdown
  priorityClassName: ""
revisionHistoryLimit: 10
maxmindLicenseKey: ""
defaultBackend:
  enabled: false
  image:
    repository: k8s.gcr.io/defaultbackend-amd64
    tag: "1.5"
    pullPolicy: IfNotPresent
    runAsUser: 65534
  extraArgs: {}
  serviceAccount:
    create: true
    name:
  extraEnvs: []
  port: 8080
  livenessProbe:
    failureThreshold: 3
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  readinessProbe:
    failureThreshold: 6
    initialDelaySeconds: 0
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5
  tolerations: []
  affinity: {}
  podSecurityContext: {}
  podLabels: {}
  nodeSelector: {}
  podAnnotations: {}
  replicaCount: 1
  minAvailable: 1
  resources: {}
  service:
    annotations: {}
    externalIPs: []
    loadBalancerSourceRanges: []
    servicePort: 80
    type: ClusterIP
  priorityClassName: ""
rbac:
  create: true
  scope: false
podSecurityPolicy:
  enabled: false
serviceAccount:
  create: true
  name:
imagePullSecrets: []
tcp: {}
udp: {}
EOF
```
#### Create the namespace
```sh
$ kubectl create namespace ingress-nginx
```
#### Label the nodes
```sh
$ kubectl label nodes k8s-master1 ingress=nginx
$ kubectl label nodes k8s-master2 ingress=nginx
$ kubectl label nodes k8s-node1 ingress=nginx
```
#### Install nginx-ingress
```sh
$ helm -n ingress-nginx upgrade -i ingress-nginx .
```
#### Uninstall ingress-nginx
```sh
$ helm -n ingress-nginx uninstall ingress-nginx
```
#### Test nginx-ingress
#### Deploy a test nginx service
```sh
$ cat > nginx-deployment.yml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF
```
Configure the Ingress object

Create the TLS certificate secret
```sh
$ kubectl create secret tls shudoon-com-tls --cert=5024509__example.com.pem --key=5024509__example.com.key
```
#### Create the Ingress rule
```sh
$ cat >tnginx-ingress.yaml <<EOF
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: tnginx-ingress
spec:
  rules:
  - host: tnginx.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - hosts:
    - tnginx.example.com
    secretName: shudoon-com-tls
EOF
```
Test: open https://tnginx.example.com/
#### Troubleshooting
`Problem: creating a custom Ingress fails with: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io"`

List the validating webhook configurations
```sh
$ kubectl get validatingwebhookconfigurations
```
Delete the webhook configuration
```sh
$ kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
```
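As a final sanity check for the test Ingress above, the sketch below applies the two manifests and exercises the rule without relying on public DNS by pinning tnginx.example.com to a node IP with curl's `--resolve`. The node IP is an illustrative assumption; use any node labeled ingress=nginx.

```sh
$ kubectl apply -f nginx-deployment.yml
$ kubectl apply -f tnginx-ingress.yaml
$ NODE_IP=172.16.100.100   # hypothetical node running the ingress controller
# -k skips certificate verification in case the test certificate does not match the host
$ curl -k --resolve tnginx.example.com:443:${NODE_IP} https://tnginx.example.com/
```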


兜兜    2021-09-22 10:18:21    2021-09-24 09:41:49   

k8s cephfs
#### Environment
```sh
Ceph cluster nodes: 172.16.100.1:6789,172.16.100.2:6789,172.16.100.11:6789
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)
ceph client: 15.2.14 (pod image: "elementalnet/cephfs-provisioner:0.8.0")
```
`Note: the official quay.io/external_storage/cephfs-provisioner:latest image ships a 13.x Ceph client, which does not match the cluster's version 15 and leaves the PV stuck in Pending. The third-party image elementalnet/cephfs-provisioner:0.8.0 is used here instead.`
#### Install the Ceph client
Install the client on the k8s nodes
```sh
$ yum install ceph-common -y
```
Copy the key from a Ceph node to the k8s node
```sh
$ scp /etc/ceph/ceph.client.admin.keyring 172.16.100.100:/etc/ceph
```
Create the secret used to access Ceph
```sh
$ ceph auth get-key client.admin | base64
QVFEa0RFTmhYQ1UzQUJBQXFmSWptMFJkSVpGaC9VR0V4M0RNc3c9PQ==
```
```sh
$ cat >cephfs-secret.yaml<<EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: cephfs
type: "kubernetes.io/rbd"
data:
  key: QVFEa0RFTmhYQ1UzQUJBQXFmSWptMFJkSVpGaC9VR0V4M0RNc3c9PQ==   # replace with the base64 output above
EOF
```
```sh
$ kubectl create -f cephfs-secret.yaml
```
#### Install the cephfs provisioner
Reference: https://github.com/kubernetes-retired/external-storage/tree/master/ceph/cephfs/deploy

Create the RBAC resources
```sh
$ cat >cephfs-rbac.yaml <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
```
```sh
$ kubectl create -f cephfs-rbac.yaml
```
Create the cephfs-provisioner
```sh
$ cat > cephfs-provisioner.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        #image: "quay.io/external_storage/cephfs-provisioner:latest"   # the cluster runs Ceph 15.2; this image's client is too old and no longer updated, so use the image below
        image: "elementalnet/cephfs-provisioner:0.8.0"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        - name: PROVISIONER_SECRET_NAMESPACE
          value: cephfs
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner
EOF
```
```sh
$ kubectl create -f cephfs-provisioner.yaml
```
#### Create the StorageClass
```sh
$ cat > cephfs-storageclass.yaml <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
  namespace: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.16.100.1:6789,172.16.100.2:6789,172.16.100.11:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
  claimRoot: /pvc-volumes
EOF
```
```sh
$ kubectl create -f cephfs-storageclass.yaml
```
#### Create a test PVC
```sh
$ cat > cephfs-test-pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-test-pvc-1
  annotations:
    volume.beta.kubernetes.io/storage-class: "cephfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
EOF
```
```sh
$ kubectl create -f cephfs-test-pvc.yaml
```
Get the PVC
```sh
$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-test-pvc-1   Bound    pvc-571ae252-b080-4a67-8f3d-40bda1304fe3   500Mi      RWX            cephfs         2m1s
```
Get the PV
```sh
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
pvc-571ae252-b080-4a67-8f3d-40bda1304fe3   500Mi      RWX            Delete           Bound    default/cephfs-test-pvc-1   cephfs                  2m3s
```
#### Create a busybox deployment that mounts the PVC
```sh
$ cat > cephfs-test-busybox-deployment.yml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-test-deploy-busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cephfs-test-busybox
  template:
    metadata:
      labels:
        app: cephfs-test-busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "60000"]
        volumeMounts:
        - mountPath: "/mnt/cephfs"
          name: cephfs-test-pvc
      volumes:
      - name: cephfs-test-pvc
        persistentVolumeClaim:
          claimName: cephfs-test-pvc-1
EOF
```
```sh
$ kubectl create -f cephfs-test-busybox-deployment.yml
```
```sh
$ kubectl get pods
NAME                                          READY   STATUS    RESTARTS   AGE
cephfs-test-deploy-busybox-56556d86ff-4dmzn   1/1     Running   0          4m28s
cephfs-test-deploy-busybox-56556d86ff-6dr6v   1/1     Running   0          4m28s
cephfs-test-deploy-busybox-56556d86ff-b75mw   1/1     Running   0          4m28s
```
Verify that files written to the cephfs mount in one pod are visible in the others
```sh
$ kubectl exec -ti cephfs-test-deploy-busybox-56556d86ff-4dmzn sh
/ # cd /mnt/cephfs/
/mnt/cephfs # ls
/mnt/cephfs # touch cephfs-test-deploy-busybox-56556d86ff-4dmzn
/mnt/cephfs # exit
```
```sh
$ kubectl exec -ti cephfs-test-deploy-busybox-56556d86ff-6dr6v sh
/ # cd /mnt/cephfs/
/mnt/cephfs # ls
cephfs-test-deploy-busybox-56556d86ff-4dmzn   # the file created in the other pod is visible: sync between pods works
```
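If the test PVC had stayed in Pending instead of binding as shown above (for example because of the client/cluster version mismatch mentioned at the top), the provisioner logs and the PVC events are the first places to look. A minimal sketch, assuming the cephfs namespace and labels used in this article:

```sh
$ kubectl -n cephfs get pods -l app=cephfs-provisioner
$ kubectl -n cephfs logs deploy/cephfs-provisioner   # provisioning errors show up here
$ kubectl describe pvc cephfs-test-pvc-1             # events explain why binding is stuck
```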


兜兜    2021-09-17 14:20:53    2021-10-19 14:31:14   

cilium
Introduction
```sh
This setup uses cilium (eBPF) + kube-router (BGP routing). etcd is an external cluster and the K8s ApiServer is highly available; adjust to your own environment.
Pros and cons of this mode: best performance, but pods cannot reach non-BGP nodes, and it is still Beta, so it is not recommended for production.
```
#### Install cilium
Create the secret cilium uses to access etcd
```sh
$ cd /opt/etcd/ssl                                                 # enter the etcd certificate directory
$ kubectl create secret generic -n kube-system cilium-etcd-secrets \   # create the secret cilium uses to access etcd
    --from-file=etcd-client-ca.crt=ca.pem \
    --from-file=etcd-client.key=server-key.pem \
    --from-file=etcd-client.crt=server.pem
```
Install cilium with Helm
```sh
$ helm repo add cilium https://helm.cilium.io/
$ export REPLACE_WITH_API_SERVER_IP=172.16.100.111
$ export REPLACE_WITH_API_SERVER_PORT=16443
$ helm install cilium cilium/cilium \
    --version 1.9.10 \                                   # use cilium 1.9.10
    --namespace kube-system \
    --set ipam.mode=kubernetes \                         # let kubernetes manage PodCIDR IPAM
    --set tunnel=disabled \                              # disable tunneling, use native routing
    --set nativeRoutingCIDR=172.16.100.0/24 \            # local (native routing) network
    --set kubeProxyReplacement=strict \                  # strict mode: fully replace kube-proxy with cilium
    --set k8sServiceHost=$REPLACE_WITH_API_SERVER_IP \   # k8s ApiServer IP
    --set k8sServicePort=$REPLACE_WITH_API_SERVER_PORT \ # k8s ApiServer port
    --set etcd.enabled=true \                            # enable etcd
    --set etcd.ssl=true \                                # etcd TLS certificates
    --set "etcd.endpoints[0]=https://172.16.100.100:2379" \   # etcd cluster nodes
    --set "etcd.endpoints[1]=https://172.16.100.101:2379" \
    --set "etcd.endpoints[2]=https://172.16.100.102:2379"
```
Test the cilium network
```sh
$ wget https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml
$ sed -i 's/google.com/baidu.com/g' connectivity-check.yaml   # point the external connectivity test at baidu.com
$ kubectl apply -f connectivity-check.yaml
```
Check pod status
```sh
kubectl get pods
NAME                                                    READY   STATUS    RESTARTS   AGE
echo-a-dc9bcfd8f-hgc64                                  1/1     Running   0          9m59s
echo-b-5884b7dc69-bl5px                                 1/1     Running   0          9m59s
echo-b-host-cfdd57978-dg6gw                             1/1     Running   0          9m59s
host-to-b-multi-node-clusterip-c4ff7ff64-m9zwz          1/1     Running   0          9m58s
host-to-b-multi-node-headless-84d8f6f4c4-8b797          1/1     Running   1          9m57s
pod-to-a-5cdfd4754d-jgmnt                               1/1     Running   0          9m59s
pod-to-a-allowed-cnp-7d7c8f9f9b-f9lpc                   1/1     Running   0          9m58s
pod-to-a-denied-cnp-75cb89dfd-jsjd4                     1/1     Running   0          9m59s
pod-to-b-intra-node-nodeport-c6d79965d-w98jx            1/1     Running   1          9m57s
pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5           1/1     Running   0          9m58s
pod-to-b-multi-node-headless-6696c5f8cd-fvcsl           1/1     Running   1          9m58s
pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns           1/1     Running   0          9m57s
pod-to-external-1111-d5c7bb4c4-sflfc                    1/1     Running   0          9m59s
pod-to-external-fqdn-allow-google-cnp-dcb4d867d-dxqzx   1/1     Running   0          7m16s
```
Remove kube-proxy
```sh
$ kubectl -n kube-system delete ds kube-proxy   # delete the kube-proxy DaemonSet
$ kubectl -n kube-system delete cm kube-proxy
$ iptables-restore <(iptables-save | grep -v KUBE)
```
#### Install kube-router (`BGP routing only`)
```sh
$ curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/v0.4.0/daemonset/generic-kuberouter-only-advertise-routes.yaml
$ vim generic-kuberouter-only-advertise-routes.yaml   # edit the manifest so that only the router function is enabled
...
        - "--run-router=true"
        - "--run-firewall=false"
        - "--run-service-proxy=false"
        - "--enable-cni=false"
        - "--enable-pod-egress=false"
        - "--enable-ibgp=true"
        - "--enable-overlay=true"
#       - "--peer-router-ips=<CHANGE ME>"
#       - "--peer-router-asns=<CHANGE ME>"
#       - "--cluster-asn=<CHANGE ME>"
        - "--advertise-cluster-ip=true"        # advertise cluster IPs
        - "--advertise-external-ip=true"       # advertise service external IPs (takes effect when a service sets one)
        - "--advertise-loadbalancer-ip=true"
...
```
Deploy kube-router
```sh
$ kubectl apply -f generic-kuberouter-only-advertise-routes.yaml
```
Check the deployed pods
```sh
$ kubectl get pods -n kube-system |grep kube-router
kube-router-dz58s   1/1   Running   0   2d18h
kube-router-vdwqg   1/1   Running   0   2d18h
kube-router-wrc4v   1/1   Running   0   2d18h
```
Check the routes
```sh
$ ip route
default via 172.16.100.254 dev ens192 proto static metric 100
10.0.1.0/24 via 10.0.1.104 dev cilium_host src 10.0.1.104
10.0.1.104 dev cilium_host scope link
10.244.0.0/24 via 10.244.0.81 dev cilium_host src 10.244.0.81
10.244.0.81 dev cilium_host scope link
10.244.1.0/24 via 172.16.100.101 dev ens192 proto 17   # BGP route
10.244.2.0/24 via 172.16.100.102 dev ens192 proto 17   # BGP route
172.16.100.0/24 dev ens192 proto kernel scope link src 172.16.100.100 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
```
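To confirm that the proto 17 routes above really come from kube-router's iBGP sessions, a quick check is to look at the peers from inside a kube-router pod. This is a sketch: it assumes the kube-router image ships the gobgp CLI, and the pod name will differ in your cluster.

```sh
# Each node should list the other nodes as established iBGP peers (assumption: gobgp is in the image).
$ kubectl -n kube-system exec -it kube-router-dz58s -- gobgp neighbor
# And each node should have learned the other nodes' PodCIDRs as proto 17 routes.
$ ip route | grep "proto 17"
```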


兜兜    2021-09-16 00:45:44    2022-01-25 09:20:56   

kubernetes nfs
```sh
Kubernetes provides a mechanism to create PVs automatically: Dynamic Provisioning. The core of this mechanism is the StorageClass API object.
A StorageClass defines two things:
1. The properties of the PV, such as the storage type and volume size.
2. The storage plugin needed to create that kind of PV.
With these two pieces of information, Kubernetes can match a submitted PVC to a StorageClass and then call the storage plugin that the StorageClass declares to create the required PV.
In practice it is simple to use: write the YAML that matches your needs and apply it with kubectl create.
```
Setting up StorageClass + NFS takes roughly the following steps:
```sh
1. Create a working NFS server.
2. Create the ServiceAccount that controls the permissions the NFS provisioner has in the cluster.
3. Create the StorageClass, which creates PVCs and calls the NFS provisioner to do the prearranged work, binding PVs to PVCs.
4. Create the NFS provisioner, which does two things: it creates mount points (volumes) under the NFS export, and it creates PVs and associates them with those mount points.
```
#### Create the StorageClass
1. Create the NFS share

NFS server and export used in this environment
```sh
IP: 172.16.100.100
PATH: /data/k8s
```
2. Create the ServiceAccount
```sh
$ cat > rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
```
```sh
$ kubectl apply -f rbac.yaml
```
3. Create the NFS provisioner
```sh
$ cat > nfs-provisioner.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default        # must match the namespace used in the RBAC file
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage-provisioner
            - name: NFS_SERVER
              value: 172.16.100.100
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.100.100
            path: /data/k8s
EOF
```
```sh
$ kubectl apply -f nfs-provisioner.yaml
```
4. Create the StorageClass
```sh
$ cat >nfs-storageclass.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage-provisioner
EOF
```
```sh
$ kubectl apply -f nfs-storageclass.yaml
```
5. Test by creating a PVC
```sh
$ cat > test-pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"   # reference the StorageClass created above
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
```
```sh
$ kubectl apply -f test-pvc.yaml
```
Check the PV
```sh
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS          REASON   AGE
pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            Delete           Bound    default/test-claim  managed-nfs-storage            27m
```
Check the PVC
```sh
$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            managed-nfs-storage   27m
```
Create nginx-statefulset.yaml
```sh
$ cat >nginx-statefulset.yaml <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Mi
EOF
```
```sh
$ kubectl apply -f nginx-statefulset.yaml
```
Check the pods
```sh
$ kubectl get pods -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          21m
web-1   1/1     Running   0          21m
```
Check the PVs
```sh
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-2c23ca4d-7c98-4021-8099-c208b418aa5b   20Mi       RWO            Delete           Bound    default/www-web-0    managed-nfs-storage            22m
pvc-4d8da9d3-cb51-49e6-bd0e-c5bad7878551   20Mi       RWO            Delete           Bound    default/www-web-1    managed-nfs-storage            21m
pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            32m
```
Check the PVCs
```sh
$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            managed-nfs-storage   32m
www-web-0    Bound    pvc-2c23ca4d-7c98-4021-8099-c208b418aa5b   20Mi       RWO            managed-nfs-storage   22m
www-web-1    Bound    pvc-4d8da9d3-cb51-49e6-bd0e-c5bad7878551   20Mi       RWO            managed-nfs-storage   21m
```
Reference: https://www.cnblogs.com/panwenbin-logs/p/12196286.html
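A quick end-to-end check of the dynamically provisioned volumes is to write a file through one of the StatefulSet pods and confirm it lands under the NFS export. This is a sketch; the per-PVC directory name assumes the nfs-client provisioner's default `${namespace}-${pvcName}-${pvName}` layout.

```sh
$ kubectl exec web-0 -- sh -c 'echo "hello from web-0" > /usr/share/nginx/html/index.html'
$ kubectl exec web-0 -- cat /usr/share/nginx/html/index.html
# On the NFS server (172.16.100.100): each PVC gets its own subdirectory under /data/k8s
$ ls /data/k8s/
$ cat /data/k8s/default-www-web-0-pvc-*/index.html
```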


兜兜    2021-09-15 15:55:33    2022-01-25 09:21:07   

kubernetes
Ways to deploy a production Kubernetes cluster
```sh
kubeadm
  Kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.
Binary packages
  Download the release binaries from GitHub and deploy each component by hand to assemble the cluster.
```
This walkthrough uses kubeadm to build the cluster.

Architecture (summary of the original ASCII diagram)
```sh
Client -> firewall (119.x.x.x) -> keepalived VIP 172.16.100.99 -> lvs27 (172.16.100.27) / lvs28 (172.16.100.2)
  -> K8S-Master1 (172.16.100.100), K8S-Master2 (172.16.100.101), K8S-Node1 (172.16.100.102)   # exposed via hostport/nodeport
```
#### Deploy a highly available load balancer
```sh
Kubernetes, as a container cluster system, gives pods self-healing through health checks plus restart policies, spreads pods across nodes with its scheduler while keeping the expected replica count, and re-creates pods on other nodes when a node fails, which provides high availability at the application layer.
For the cluster itself, high availability also covers two more layers: the etcd database and the Kubernetes master components.
A kubeadm-built cluster normally starts only a single etcd instance, which is a single point of failure, so an independent etcd cluster is built here.
The master node is the control center: it keeps the whole cluster healthy by constantly talking to the kubelet and kube-proxy on the worker nodes. If the master fails, neither kubectl nor the API can be used to manage the cluster.
The master runs three main services: kube-apiserver, kube-controller-manager and kube-scheduler. kube-controller-manager and kube-scheduler already achieve high availability through leader election, so master HA mainly concerns kube-apiserver. Since kube-apiserver serves an HTTP API, its HA is similar to a web server's: put a load balancer in front of it and it can also be scaled horizontally.
```
[See the installation doc](https://ynotes.cn/blog/article_detail/280)
#### Deploy the etcd cluster
Prepare the cfssl certificate tool

cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to use than openssl. Run this on any server; the Master node is used here.
```sh
$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
$ mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
$ mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
```
Generate the etcd certificates
```sh
$ mkdir -p ~/etcd_tls
$ cd ~/etcd_tls
```
```sh
$ cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```
```sh
$ cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangdong",
      "ST": "Guangzhou"
    }
  ]
}
EOF
```
Generate the CA certificate
```sh
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```
Issue the etcd HTTPS certificate with the self-signed CA

Create the certificate signing request
```sh
$ cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "172.16.100.100",
    "172.16.100.101",
    "172.16.100.102",
    "172.16.100.103",
    "172.16.100.104"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangdong",
      "ST": "Guangzhou"
    }
  ]
}
EOF
```
Generate the certificate
```sh
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
```
Download the etcd binaries
```sh
$ wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
```
```sh
$ mkdir /opt/etcd/{bin,cfg,ssl} -p
$ tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
$ mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
```
Create the etcd configuration file
```sh
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.100.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.100.100:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.100.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.100.100:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.16.100.100:2380,etcd-2=https://172.16.100.101:2380,etcd-3=https://172.16.100.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```
Manage etcd with systemd
```sh
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem --logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```
Copy the certificates generated earlier
```sh
$ cp ~/etcd_tls/ca*pem ~/etcd_tls/server*pem /opt/etcd/ssl/
```
Copy everything generated on node 1 to nodes 2 and 3 (then adjust ETCD_NAME and the IP addresses in /opt/etcd/cfg/etcd.conf on each of them)
```sh
$ scp -r /opt/etcd/ root@172.16.100.101:/opt/
$ scp -r /opt/etcd/ root@172.16.100.102:/opt/
$ scp -r /usr/lib/systemd/system/etcd.service root@172.16.100.102:/usr/lib/systemd/system/
$ scp -r /usr/lib/systemd/system/etcd.service root@172.16.100.101:/usr/lib/systemd/system/
```
Start etcd on all three nodes
```sh
$ systemctl daemon-reload
$ systemctl start etcd
$ systemctl enable etcd
```
```sh
$ ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.100.100:2379,https://172.16.100.101:2379,https://172.16.100.102:2379" endpoint health --write-out=table
```
```sh
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://172.16.100.100:2379 |   true | 12.812954ms |       |
| https://172.16.100.102:2379 |   true | 13.596982ms |       |
| https://172.16.100.101:2379 |   true | 14.607151ms |       |
+-----------------------------+--------+-------------+-------+
```
#### Install Docker
[See the Docker installation section](https://ynotes.cn/blog/article_detail/271)
#### Install kubeadm, kubelet and kubectl
```sh
$ cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Install kubeadm, kubelet and kubectl
```sh
$ yum install -y kubelet-1.18.20 kubeadm-1.18.20 kubectl-1.18.20
$ systemctl enable kubelet
```
Cluster configuration file
```sh
$ cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.100.100
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - k8s-master1
  - k8s-master2
  - k8s-master3
  - 172.16.100.100
  - 172.16.100.101
  - 172.16.100.102
  - 172.16.100.111
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.16.100.111:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - https://172.16.100.100:2379
    - https://172.16.100.101:2379
    - https://172.16.100.102:2379
    caFile: /opt/etcd/ssl/ca.pem
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
EOF
```
Initialize the cluster (`k8s-master1`)
```sh
$ kubeadm init --config kubeadm-config.yaml
```
```sh
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde
```
Configure kubectl access
```sh
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Join the cluster (`k8s-master2`)

Copy the certificates from k8s-master1
```sh
$ scp -r 172.16.100.100:/etc/kubernetes/pki/ /etc/kubernetes/
```
```sh
$ kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde \
    --control-plane
```
Join the cluster (`k8s-node1`)
```sh
$ kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde
```
Allow pods to be scheduled on the master nodes
```sh
$ kubectl taint node k8s-master1 node-role.kubernetes.io/master-
$ kubectl taint node k8s-master2 node-role.kubernetes.io/master-
```
#### Deploy the network
`Option 1: calico (BGP/IPIP)`: the most widely used network plugin. BGP mode performs best, but pods cannot reach non-BGP nodes; IPIP has no network restrictions and is slightly slower than BGP. [Reference: configuring calico networking for k8s](https://ynotes.cn/blog/article_detail/274)

`Option 2: flannel (vxlan/host-gw)`: host-gw performs best, but pods cannot reach nodes outside the cluster network; vxlan can cross networks and is slightly slower than host-gw. [Reference: K8S - deploying a k8s cluster with kubeadm (part 1), flannel section](https://ynotes.cn/blog/article_detail/271)

`Option 3: cilium (eBPF) + kube-router (BGP routing)`: the best performance of the three; the drawbacks are that pods cannot reach non-BGP nodes and it is still Beta, so it is not recommended for production. [Reference: configuring cilium networking for k8s](https://ynotes.cn/blog/article_detail/286)
#### Reset the K8S cluster
##### Reset a node with kubeadm
```sh
$ kubeadm reset
# kubeadm reset cleans up the local files created by kubeadm init or kubeadm join. On a control-plane node it also removes that node's local etcd member from the etcd cluster and removes the node's information from the kubeadm ClusterStatus object. ClusterStatus is a kubeadm-managed Kubernetes API object that holds the list of kube-apiserver endpoints.
# kubeadm reset phase can run the individual phases of the workflow above. To skip a list of phases, use the --skip-phases flag, which works the same way as the kubeadm join and kubeadm init phase runners.
```
##### Clean up the network configuration (flannel)
```sh
$ systemctl stop kubelet
$ systemctl stop docker
$ rm /etc/cni/net.d/* -rf
$ ifconfig cni0 down
$ ifconfig flannel.1 down
$ ifconfig docker0 down
$ ip link delete cni0
$ ip link delete flannel.1
$ systemctl start docker
```
##### Clean up iptables
```sh
$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```
##### Clean up ipvs
```sh
$ ipvsadm --clear
```
##### Clean up external etcd
_`If an external etcd is used, kubeadm reset will not delete any data from etcd. This means that if you run kubeadm init again with the same etcd endpoints, you will see the state of the previous cluster.`_

To wipe the data in etcd, use a client such as etcdctl, for example:
```sh
$ etcdctl del "" --prefix
```
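Because kubeadm reset leaves external etcd data in place, it can be worth snapshotting etcd before any destructive operation. A minimal sketch using the certificates and endpoints from this article; the backup path is an arbitrary choice:

```sh
$ mkdir -p /opt/etcd/backup
$ ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
    --cacert=/opt/etcd/ssl/ca.pem \
    --cert=/opt/etcd/ssl/server.pem \
    --key=/opt/etcd/ssl/server-key.pem \
    --endpoints="https://172.16.100.100:2379" \
    snapshot save /opt/etcd/backup/snapshot-$(date +%F).db
```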


兜兜    2021-09-11 14:22:25    2022-01-25 09:21:02   

kubernetes CNI cilium
`Note: cilium + kube-router is currently only in beta.`
#### Preparation
System environment
```sh
Role     IP             Hostname     OS               Kernel
Master   172.16.13.80   shudoon101   CentOS7.9.2009   5.4.144-1.el7.elrepo.x86_64
Node     172.16.13.81   shudoon102   CentOS7.9.2009   5.4.144-1.el7.elrepo.x86_64
Node     172.16.13.82   shudoon103   CentOS7.9.2009   5.4.144-1.el7.elrepo.x86_64
```
Software environment
```sh
CentOS7 kernel 5.4.144 (cilium requires kernel 4.9.17+)
kubernetes 1.20
cilium (1.9.10): CNI network plugin, includes the kube-proxy replacement
kube-router: only its BGP routing function is used
```
Mount the BPF filesystem
```sh
$ mount bpffs /sys/fs/bpf -t bpf
```
```sh
$ cat >> /etc/fstab <<EOF
bpffs    /sys/fs/bpf    bpf    defaults 0 0
EOF
```
#### Install cilium
Quick install
```sh
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml
```
Check the status
```sh
$ kubectl -n kube-system get pods
```
```sh
NAME                               READY   STATUS    RESTARTS   AGE
cilium-csknd                       1/1     Running   0          87m
cilium-dwz2c                       1/1     Running   0          87m
cilium-nq8r9                       1/1     Running   1          87m
cilium-operator-59c9d5bc94-sh4c2   1/1     Running   0          87m
coredns-7f89b7bc75-bhgrp           1/1     Running   0          7m24s
coredns-7f89b7bc75-sfgvq           1/1     Running   0          7m24s
```
Deploy the connectivity test
```sh
$ wget https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml
$ sed -i 's/google.com/baidu.com/g' connectivity-check.yaml   # point the external connectivity test at baidu.com
$ kubectl apply -f connectivity-check.yaml
```
Check pod status
```sh
kubectl get pods
NAME                                                    READY   STATUS    RESTARTS   AGE
echo-a-dc9bcfd8f-hgc64                                  1/1     Running   0          9m59s
echo-b-5884b7dc69-bl5px                                 1/1     Running   0          9m59s
echo-b-host-cfdd57978-dg6gw                             1/1     Running   0          9m59s
host-to-b-multi-node-clusterip-c4ff7ff64-m9zwz          1/1     Running   0          9m58s
host-to-b-multi-node-headless-84d8f6f4c4-8b797          1/1     Running   1          9m57s
pod-to-a-5cdfd4754d-jgmnt                               1/1     Running   0          9m59s
pod-to-a-allowed-cnp-7d7c8f9f9b-f9lpc                   1/1     Running   0          9m58s
pod-to-a-denied-cnp-75cb89dfd-jsjd4                     1/1     Running   0          9m59s
pod-to-b-intra-node-nodeport-c6d79965d-w98jx            1/1     Running   1          9m57s
pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5           1/1     Running   0          9m58s
pod-to-b-multi-node-headless-6696c5f8cd-fvcsl           1/1     Running   1          9m58s
pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns           1/1     Running   0          9m57s
pod-to-external-1111-d5c7bb4c4-sflfc                    1/1     Running   0          9m59s
pod-to-external-fqdn-allow-google-cnp-dcb4d867d-dxqzx   1/1     Running   0          7m16s
```
#### Install Hubble
##### Install the Hubble UI
Set the environment variable
```sh
$ export CILIUM_NAMESPACE=kube-system
```
Quick install `Cilium 1.9.2+`
```sh
$ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-hubble-install.yaml
```
Helm install `Cilium <1.9.2`
```sh
# Set this to your installed Cilium version
$ export CILIUM_VERSION=1.9.10
# Please set any custom Helm values you may need for Cilium,
# such as for example `--set operator.replicas=1` on single-cluster nodes.
$ helm template cilium cilium/cilium --version $CILIUM_VERSION \
    --namespace $CILIUM_NAMESPACE \
    --set hubble.tls.auto.method="cronJob" \
    --set hubble.listenAddress=":4244" \
    --set hubble.relay.enabled=true \
    --set hubble.ui.enabled=true > cilium-with-hubble.yaml
# This will modify your existing Cilium DaemonSet and ConfigMap
$ kubectl apply -f cilium-with-hubble.yaml
```
Configure port forwarding
```sh
$ kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80
```
##### Install the Hubble CLI
Download the hubble CLI
```sh
$ export CILIUM_NAMESPACE=kube-system
$ export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
$ curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz"
$ tar zxf hubble-linux-amd64.tar.gz
$ mv hubble /usr/local/bin
```
Start port forwarding
```sh
$ kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80
```
Check the node status
```sh
hubble --server localhost:4245 status
```
```sh
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 12288/12288 (100.00%)
Flows/s: 22.71
Connected Nodes: 3/3
```
Observe cilium flows
```sh
hubble --server localhost:4245 observe
```
```sh
Sep 11 06:52:06.119: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:06.122: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:06.122: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:06.123: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:06.123: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:06.793: default/pod-to-a-5cdfd4754d-jgmnt:54735 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:06.793: default/pod-to-a-5cdfd4754d-jgmnt:54735 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:06.793: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-a-5cdfd4754d-jgmnt:54735 to-overlay FORWARDED (UDP)
Sep 11 06:52:06.793: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-a-5cdfd4754d-jgmnt:54735 to-overlay FORWARDED (UDP)
Sep 11 06:52:06.903: kube-system/coredns-7f89b7bc75-bhgrp:32928 -> 172.16.13.80:6443 to-stack FORWARDED (TCP Flags: ACK)
Sep 11 06:52:06.903: kube-system/coredns-7f89b7bc75-bhgrp:32928 <- 172.16.13.80:6443 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.043: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.043: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.043: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.043: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.044: kube-system/hubble-relay-74b76459f9-d66h9:60564 -> 172.16.13.81:4244 to-stack FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.046: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.046: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.047: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.047: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.050: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.051: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.051: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.051: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.095: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <> kube-system/coredns-7f89b7bc75-bhgrp:53 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <> kube-system/coredns-7f89b7bc75-bhgrp:53 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <- kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <- kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.099: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.099: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.099: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.100: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.101: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.101: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.102: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.103: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.104: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.104: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: ACK, FIN)
```
Open http://127.0.0.1:12000/ in a browser
#### Replace kube-proxy with Cilium (`the default is the Probe coexistence mode`)
Delete kube-proxy
```sh
$ kubectl -n kube-system delete ds kube-proxy   # delete the kube-proxy DaemonSet
# Delete the configmap as well to avoid kube-proxy being reinstalled during a kubeadm upgrade (works only for K8s 1.19 and newer)
$ kubectl -n kube-system delete cm kube-proxy
# Run on every node to flush the kube-proxy iptables rules
$ iptables-restore <(iptables-save | grep -v KUBE)
```
Uninstall the quick-install cilium
```sh
$ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml
```
Add the cilium Helm repo (helm 3) and reinstall
```sh
$ helm repo add cilium https://helm.cilium.io/
```
```sh
$ export REPLACE_WITH_API_SERVER_IP=172.16.13.80
$ export REPLACE_WITH_API_SERVER_PORT=6443
$ helm upgrade --install cilium cilium/cilium --version 1.9.10 \
    --namespace kube-system \
    --set kubeProxyReplacement=strict \
    --set k8sServiceHost=$REPLACE_WITH_API_SERVER_IP \
    --set k8sServicePort=$REPLACE_WITH_API_SERVER_PORT
```
Check the replacement mode
```sh
$ kubectl -n kube-system get pods -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-9szpn   1/1     Running   0          45s
cilium-fllk6   1/1     Running   0          45s
cilium-sn2q5   1/1     Running   0          45s
$ kubectl exec -it -n kube-system cilium-9szpn -- cilium status | grep KubeProxyReplacement
KubeProxyReplacement:   Strict   [ens192 (Direct Routing)]   # Strict means cilium fully replaces kube-proxy
```
Check cilium status in detail
```sh
$ kubectl exec -ti -n kube-system cilium-9szpn -- cilium status --verbose
```
```sh
KVStore:                Ok   Disabled
Kubernetes:             Ok   1.20 (v1.20.10) [linux/amd64]
Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   Strict   [ens192 (Direct Routing)]
Cilium:                 Ok   1.9.10 (v1.9.10-4e26039)
NodeMonitor:            Listening for events on 8 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok
IPAM:                   IPv4: 6/255 allocated from 10.244.1.0/24,
Allocated addresses:
  10.244.1.164 (health)
  10.244.1.169 (default/pod-to-b-intra-node-nodeport-c6d79965d-w98jx [restored])
  10.244.1.211 (default/busybox-deployment-99db9cf4d-dch94 [restored])
  10.244.1.217 (default/echo-b-5884b7dc69-bl5px [restored])
  10.244.1.221 (router)
  10.244.1.6 (default/busybox-deployment-99db9cf4d-pgxzz [restored])
BandwidthManager:       Disabled
Host Routing:           Legacy
Masquerading:           BPF   [ens192]   10.244.1.0/24
Clock Source for BPF:   ktime
Controller Status:      38/38 healthy
  Name                                          Last success   Last error   Count   Message
  cilium-health-ep                              42s ago        never        0       no error
  dns-garbage-collector-job                     53s ago        never        0       no error
  endpoint-1383-regeneration-recovery           never          never        0       no error
  endpoint-1746-regeneration-recovery           never          never        0       no error
  endpoint-1780-regeneration-recovery           never          never        0       no error
  endpoint-3499-regeneration-recovery           never          never        0       no error
  endpoint-546-regeneration-recovery            never          never        0       no error
  endpoint-777-regeneration-recovery            never          never        0       no error
  k8s-heartbeat                                 23s ago        never        0       no error
  mark-k8s-node-as-available                    4m43s ago      never        0       no error
  metricsmap-bpf-prom-sync                      8s ago         never        0       no error
  neighbor-table-refresh                        4m43s ago      never        0       no error
  resolve-identity-777                          4m42s ago      never        0       no error
  restoring-ep-identity (1383)                  4m43s ago      never        0       no error
  restoring-ep-identity (1746)                  4m43s ago      never        0       no error
  restoring-ep-identity (1780)                  4m43s ago      never        0       no error
  restoring-ep-identity (3499)                  4m43s ago      never        0       no error
  restoring-ep-identity (546)                   4m43s ago      never        0       no error
  sync-endpoints-and-host-ips                   43s ago        never        0       no error
  sync-lb-maps-with-k8s-services                4m43s ago      never        0       no error
  sync-policymap-1383                           41s ago        never        0       no error
  sync-policymap-1746                           41s ago        never        0       no error
  sync-policymap-1780                           41s ago        never        0       no error
  sync-policymap-3499                           41s ago        never        0       no error
  sync-policymap-546                            40s ago        never        0       no error
  sync-policymap-777                            41s ago        never        0       no error
  sync-to-k8s-ciliumendpoint (1383)             3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (1746)             3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (1780)             3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (3499)             3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (546)              3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (777)              12s ago        never        0       no error
  template-dir-watcher                          never          never        0       no error
  update-k8s-node-annotations                   4m51s ago      never        0       no error
  waiting-initial-global-identities-ep (1383)   4m43s ago      never        0       no error
  waiting-initial-global-identities-ep (1746)   4m43s ago      never        0       no error
  waiting-initial-global-identities-ep (1780)   4m43s ago      never        0       no error
  waiting-initial-global-identities-ep (3499)   4m43s ago      never        0       no error
Proxy Status:   OK, ip 10.244.1.221, 0 redirects active on ports 10000-20000
Hubble:         Ok   Current/Max Flows: 3086/4096 (75.34%), Flows/s: 11.02   Metrics: Disabled
KubeProxyReplacement Details:
  Status:              Strict
  Protocols:           TCP, UDP
  Devices:             ens192 (Direct Routing)
  Mode:                SNAT
  Backend Selection:   Random
  Session Affinity:    Enabled
  XDP Acceleration:    Disabled
  Services:                       # Enabled below means the service type is supported
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767)
  - LoadBalancer:   Enabled
  - externalIPs:    Enabled
  - HostPort:       Enabled
BPF Maps:   dynamic sizing: on (ratio: 0.002500)
  Name                          Size
  Non-TCP connection tracking   73653
  TCP connection tracking       147306
  Endpoint policy               65535
  Events                        8
  IP cache                      512000
  IP masquerading agent         16384
  IPv4 fragmentation            8192
  IPv4 service                  65536
  IPv6 service                  65536
  IPv4 service backend          65536
  IPv6 service backend          65536
  IPv4 service reverse NAT      65536
  IPv6 service reverse NAT      65536
  Metrics                       1024
  NAT                           147306
  Neighbor table                147306
  Global policy                 16384
  Per endpoint policy           65536
  Session affinity              65536
  Signal                        8
  Sockmap                       65535
  Sock reverse NAT              73653
  Tunnel                        65536
Cluster health:   3/3 reachable   (2021-09-11T08:08:45Z)
  Name                     IP             Node        Endpoints
  shudoon102 (localhost)   172.16.13.81   reachable   reachable
  shudoon101               172.16.13.80   reachable   reachable
  shudoon103               172.16.13.82   reachable   reachable
```
Create an nginx service

The nginx Deployment
```sh
cat nginx-deployment.yaml
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
```
Deploy it
```sh
$ kubectl apply -f nginx-deployment.yaml
```
Check the nginx pods
```sh
$ kubectl get pods -l run=my-nginx -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
my-nginx-5b56ccd65f-kdrhn   1/1     Running   0          3m36s   10.244.2.157   shudoon103   <none>           <none>
my-nginx-5b56ccd65f-ltxc6   1/1     Running   0          3m36s   10.244.1.9     shudoon102   <none>           <none>
```
```sh
$ kubectl get svc my-nginx
NAME       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-nginx   NodePort   10.97.255.81   <none>        80:30074/TCP   4h
```
Inspect the services programmed by Cilium eBPF
```sh
kubectl exec -it -n kube-system cilium-9szpn -- cilium service list
ID   Frontend              Service Type   Backend
1    10.111.52.74:80       ClusterIP      1 => 10.244.2.249:8081
2    10.96.0.1:443         ClusterIP      1 => 172.16.13.80:6443
3    10.96.0.10:53         ClusterIP      1 => 10.244.0.37:53
                                          2 => 10.244.0.232:53
4    10.96.0.10:9153       ClusterIP      1 => 10.244.0.37:9153
                                          2 => 10.244.0.232:9153
5    10.102.133.211:8080   ClusterIP      1 => 10.244.2.129:8080
6    10.100.95.181:8080    ClusterIP      1 => 10.244.1.217:8080
7    0.0.0.0:31414         NodePort       1 => 10.244.1.217:8080
8    172.16.13.81:31414    NodePort       1 => 10.244.1.217:8080
9    10.97.255.81:80       ClusterIP      1 => 10.244.2.157:80   # the my-nginx ClusterIP maps to the pod backends
                                          2 => 10.244.1.9:80
10   0.0.0.0:30074         NodePort       1 => 10.244.2.157:80   # NodePort
                                          2 => 10.244.1.9:80
11   172.16.13.81:30074    NodePort       1 => 10.244.2.157:80
                                          2 => 10.244.1.9:80
12   10.98.83.147:80       ClusterIP      1 => 10.244.2.144:4245
13   172.16.13.81:40000    HostPort       1 => 10.244.1.217:8080
14   0.0.0.0:40000         HostPort       1 => 10.244.1.217:8080
```
Verify that no iptables service rules were generated
```sh
$ iptables-save | grep KUBE-SVC
[ empty line ]
```
Test the nginx service
```sh
$ curl http://10.97.255.81
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
```
#### Integrate kube-router
```sh
$ curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/v0.4.0/daemonset/generic-kuberouter-only-advertise-routes.yaml
```
Edit the manifest
```sh
$ cat generic-kuberouter-only-advertise-routes.yaml
```
```yaml
...
        - "--run-router=true"
        - "--run-firewall=false"
        - "--run-service-proxy=false"
        - "--enable-cni=false"
        - "--enable-pod-egress=false"
        - "--enable-ibgp=true"
        - "--enable-overlay=true"
#       - "--peer-router-ips=<CHANGE ME>"
#       - "--peer-router-asns=<CHANGE ME>"
#       - "--cluster-asn=<CHANGE ME>"
        - "--advertise-cluster-ip=true"
        - "--advertise-external-ip=true"
        - "--advertise-loadbalancer-ip=true"
...
```
```sh
$ kubectl apply -f generic-kuberouter-only-advertise-routes.yaml
```
```sh
$ kubectl -n kube-system get pods -l k8s-app=kube-router
NAME                READY   STATUS    RESTARTS   AGE
kube-router-5q2xw   1/1     Running   0          72m
kube-router-cl9nc   1/1     Running   0          72m
kube-router-tmdnt   1/1     Running   0          72m
```
Reinstall cilium with native routing
```sh
$ helm delete cilium -n kube-system   # remove the old release
$ export REPLACE_WITH_API_SERVER_IP=172.16.13.80
$ export REPLACE_WITH_API_SERVER_PORT=6443
$ helm install cilium cilium/cilium --version 1.9.10 \
    --namespace kube-system \
    --set ipam.mode=kubernetes \
    --set tunnel=disabled \
    --set nativeRoutingCIDR=172.16.13.0/24 \
    --set kubeProxyReplacement=strict \
    --set k8sServiceHost=$REPLACE_WITH_API_SERVER_IP \
    --set k8sServicePort=$REPLACE_WITH_API_SERVER_PORT
```
```sh
kubectl -n kube-system get pods -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-f22c2   1/1     Running   0          5m58s
cilium-ftg8n   1/1     Running   0          5m58s
cilium-s4mng   1/1     Running   0          5m58s
```
```sh
$ kubectl -n kube-system exec -ti cilium-s4mng -- ip route list scope global
default via 172.16.13.254 dev ens192 proto static metric 100
10.244.0.0/24 via 10.244.0.1 dev cilium_host src 10.244.0.1   # local PodCIDR
10.244.1.0/24 via 172.16.13.81 dev ens192 proto 17            # BGP route
10.244.2.0/24 via 172.16.13.82 dev ens192 proto 17            # BGP route
```
Verify the installation
```sh
$ curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
$ tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
```
Verify the cilium installation
```sh
$ cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         OK
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

Deployment        hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment        hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet         cilium             Desired: 3, Ready: 3/3, Available: 3/3
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Containers:       cilium-operator    Running: 2
                  hubble-relay       Running: 1
                  hubble-ui          Running: 1
                  cilium             Running: 3
Cluster Pods:     20/20 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.9.10: 3
                  cilium-operator    quay.io/cilium/operator-generic:v1.9.10: 2
                  hubble-relay       quay.io/cilium/hubble-relay:v1.9.10@sha256:f15bc1a1127be143c957158651141443c9fa14683426ef8789cf688fb94cae55: 1
                  hubble-ui          quay.io/cilium/hubble-ui:v0.7.3: 1
                  hubble-ui          quay.io/cilium/hubble-ui-backend:v0.7.3: 1
                  hubble-ui          docker.io/envoyproxy/envoy:v1.14.5: 1
```
Run the cilium connectivity test against the cluster
```sh
$ cilium connectivity test
```
Reference: https://docs.cilium.io/en/v1.9/gettingstarted/k8s-install-default/
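Beyond `cilium connectivity test`, two quick checks tie the pieces above together: hitting the NodePort that the eBPF service list shows, and filtering flows through the port-forwarded Hubble Relay. A sketch reusing the NodePort 30074 and node IP from the outputs above:

```sh
$ curl -s http://172.16.13.81:30074 | head -n 4                           # served by the eBPF NodePort, no kube-proxy involved
$ hubble --server localhost:4245 observe --namespace default --last 20    # recent flows in the default namespace
```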

