兜兜    2021-09-17 14:20:53    2021-10-19 14:31:14   

### cilium
#### Introduction

This article deploys cilium (eBPF) together with kube-router in BGP routing mode. etcd is an external cluster and the K8S ApiServer is highly available; adapt the steps to your own environment. Trade-offs of this mode: it has the best performance, but pods cannot reach non-BGP nodes, and the mode is still in Beta — not recommended for production.

#### Install cilium

Create the secret that cilium uses to access etcd:

```sh
$ cd /opt/etcd/ssl    # enter the etcd certificate directory
$ kubectl create secret generic -n kube-system cilium-etcd-secrets \
    --from-file=etcd-client-ca.crt=ca.pem \
    --from-file=etcd-client.key=server-key.pem \
    --from-file=etcd-client.crt=server.pem
```

Install cilium with helm. Key options: cilium 1.9.10 is used here; `ipam.mode=kubernetes` delegates PodCIDR IPAM to Kubernetes; `tunnel=disabled` turns off tunneling and enables native routing; `nativeRoutingCIDR` is the local network; `kubeProxyReplacement=strict` fully replaces kube-proxy with cilium; `k8sServiceHost`/`k8sServicePort` point at the K8S ApiServer; the `etcd.*` options enable the external etcd cluster over SSL:

```sh
$ helm repo add cilium https://helm.cilium.io/
$ export REPLACE_WITH_API_SERVER_IP=172.16.100.111
$ export REPLACE_WITH_API_SERVER_PORT=16443
$ helm install cilium cilium/cilium \
    --version 1.9.10 \
    --namespace kube-system \
    --set ipam.mode=kubernetes \
    --set tunnel=disabled \
    --set nativeRoutingCIDR=172.16.100.0/24 \
    --set kubeProxyReplacement=strict \
    --set k8sServiceHost=$REPLACE_WITH_API_SERVER_IP \
    --set k8sServicePort=$REPLACE_WITH_API_SERVER_PORT \
    --set etcd.enabled=true \
    --set etcd.ssl=true \
    --set "etcd.endpoints[0]=https://172.16.100.100:2379" \
    --set "etcd.endpoints[1]=https://172.16.100.101:2379" \
    --set "etcd.endpoints[2]=https://172.16.100.102:2379"
```

Test the cilium network:

```sh
$ wget https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml
$ sed -i 's/google.com/baidu.com/g' connectivity-check.yaml    # switch the external test target to baidu.com
$ kubectl apply -f connectivity-check.yaml
```

Check the pod status:

```sh
$ kubectl get pods
NAME                                                    READY   STATUS    RESTARTS   AGE
echo-a-dc9bcfd8f-hgc64                                  1/1     Running   0          9m59s
echo-b-5884b7dc69-bl5px                                 1/1     Running   0          9m59s
echo-b-host-cfdd57978-dg6gw                             1/1     Running   0          9m59s
host-to-b-multi-node-clusterip-c4ff7ff64-m9zwz          1/1     Running   0          9m58s
host-to-b-multi-node-headless-84d8f6f4c4-8b797          1/1     Running   1          9m57s
pod-to-a-5cdfd4754d-jgmnt                               1/1     Running   0          9m59s
pod-to-a-allowed-cnp-7d7c8f9f9b-f9lpc                   1/1     Running   0          9m58s
pod-to-a-denied-cnp-75cb89dfd-jsjd4                     1/1     Running   0          9m59s
pod-to-b-intra-node-nodeport-c6d79965d-w98jx            1/1     Running   1          9m57s
pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5           1/1     Running   0          9m58s
pod-to-b-multi-node-headless-6696c5f8cd-fvcsl           1/1     Running   1          9m58s
pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns           1/1     Running   0          9m57s
pod-to-external-1111-d5c7bb4c4-sflfc                    1/1     Running   0          9m59s
pod-to-external-fqdn-allow-google-cnp-dcb4d867d-dxqzx   1/1     Running   0          7m16s
```

Delete kube-proxy:

```sh
$ kubectl -n kube-system delete ds kube-proxy    # delete the kube-proxy DaemonSet
$ kubectl -n kube-system delete cm kube-proxy
$ iptables-restore <(iptables-save | grep -v KUBE)    # drop the KUBE-* iptables rules
```

#### Install kube-router (`BGP function only`)

```sh
$ curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/v0.4.0/daemonset/generic-kuberouter-only-advertise-routes.yaml
$ vim generic-kuberouter-only-advertise-routes.yaml    # edit the manifest so that only the router function is enabled
...
        - "--run-router=true"
        - "--run-firewall=false"
        - "--run-service-proxy=false"
        - "--enable-cni=false"
        - "--enable-pod-egress=false"
        - "--enable-ibgp=true"
        - "--enable-overlay=true"
        # - "--peer-router-ips=<CHANGE ME>"
        # - "--peer-router-asns=<CHANGE ME>"
        # - "--cluster-asn=<CHANGE ME>"
        - "--advertise-cluster-ip=true"         # advertise cluster IPs
        - "--advertise-external-ip=true"        # advertise svc external IPs (takes effect when a svc sets an external-ip)
        - "--advertise-loadbalancer-ip=true"
...
```

Deploy kube-router:

```sh
$ kubectl apply -f generic-kuberouter-only-advertise-routes.yaml
```

Check the deployed pods:

```sh
$ kubectl get pods -n kube-system | grep kube-router
kube-router-dz58s   1/1   Running   0   2d18h
kube-router-vdwqg   1/1   Running   0   2d18h
kube-router-wrc4v   1/1   Running   0   2d18h
```

Check the routing table:

```sh
$ ip route
default via 172.16.100.254 dev ens192 proto static metric 100
10.0.1.0/24 via 10.0.1.104 dev cilium_host src 10.0.1.104
10.0.1.104 dev cilium_host scope link
10.244.0.0/24 via 10.244.0.81 dev cilium_host src 10.244.0.81
10.244.0.81 dev cilium_host scope link
10.244.1.0/24 via 172.16.100.101 dev ens192 proto 17    # BGP route
10.244.2.0/24 via 172.16.100.102 dev ens192 proto 17    # BGP route
172.16.100.0/24 dev ens192 proto kernel scope link src 172.16.100.100 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
```
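The BGP-learned entries in the routing table above can be picked out mechanically, since kube-router installs them with routing protocol number 17. A minimal sketch over a captured `ip route` dump (the `/tmp` file path is illustrative; the sample lines are taken from the output above):

```shell
# Filter BGP-learned routes (protocol 17) out of a captured `ip route` dump.
cat > /tmp/routes.txt <<'EOF'
default via 172.16.100.254 dev ens192 proto static metric 100
10.244.1.0/24 via 172.16.100.101 dev ens192 proto 17
10.244.2.0/24 via 172.16.100.102 dev ens192 proto 17
172.16.100.0/24 dev ens192 proto kernel scope link src 172.16.100.100 metric 100
EOF

# Print "<pod CIDR> -> <next hop>" for each BGP route.
awk '/proto 17/ {print $1, "->", $3}' /tmp/routes.txt
```

On a live node the same `awk` filter can be fed directly from `ip route` to confirm that every peer node's PodCIDR arrived via BGP.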
Views 152 · Comments 0 · Favorites 0


兜兜    2021-09-16 00:45:44    2021-10-27 15:15:39   

### kubernetes nfs
Kubernetes provides a mechanism for creating PVs automatically: Dynamic Provisioning, built around the StorageClass API object. A StorageClass defines two things:

1. The attributes of the PV — storage type, volume size, and so on.
2. The storage plugin (provisioner) used to create such PVs.

With these two pieces of information, Kubernetes can match a user-submitted PVC to a StorageClass and call the provisioner that the StorageClass declares to create the required PV. In practice this is simple to use: write the YAML for your needs and apply it with kubectl.

Setting up StorageClass + NFS takes roughly these steps:

1. Create a working NFS server.
2. Create a ServiceAccount, which controls the permissions the NFS provisioner runs with in the cluster.
3. Create the StorageClass, which accepts PVCs and calls the NFS provisioner to do the actual work, binding PVs to PVCs.
4. Create the NFS provisioner, which has two jobs: create mount points (volumes) under the NFS export, and create PVs associated with those mount points.

#### Create the StorageClass

1. Create the NFS share

NFS server and export used in this environment:

```sh
IP: 172.16.100.100
PATH: /data/k8s
```

2. Create the ServiceAccount

```sh
$ cat > rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
```

```sh
$ kubectl apply -f rbac.yaml
```

3. Create the NFS provisioner

```sh
$ cat > nfs-provisioner.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default    # must match the namespace in the RBAC file
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage-provisioner
            - name: NFS_SERVER
              value: 172.16.100.100
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.100.100
            path: /data/k8s
EOF
```

```sh
$ kubectl apply -f nfs-provisioner.yaml
```

4. Create the StorageClass

```sh
$ cat > nfs-storageclass.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage-provisioner
EOF
```

```sh
$ kubectl apply -f nfs-storageclass.yaml
```

5. Test by creating a PVC

```sh
$ cat > test-pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"    # the storage class created above
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
```

```sh
$ kubectl apply -f test-pvc.yaml
```

Check the PV:

```sh
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            27m
```

Check the PVC:

```sh
$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            managed-nfs-storage   27m
```

Create nginx-statefulset.yaml:

```sh
$ cat > nginx-statefulset.yaml <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: ikubernetes/myapp:v1
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 20Mi
EOF
```

```sh
$ kubectl apply -f nginx-statefulset.yaml
```

Check the pods:

```sh
$ kubectl get pods -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          21m
web-1   1/1     Running   0          21m
```

Check the PVs:

```sh
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-2c23ca4d-7c98-4021-8099-c208b418aa5b   20Mi       RWO            Delete           Bound    default/www-web-0    managed-nfs-storage            22m
pvc-4d8da9d3-cb51-49e6-bd0e-c5bad7878551   20Mi       RWO            Delete           Bound    default/www-web-1    managed-nfs-storage            21m
pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            32m
```

Check the PVCs:

```sh
$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            managed-nfs-storage   32m
www-web-0    Bound    pvc-2c23ca4d-7c98-4021-8099-c208b418aa5b   20Mi       RWO            managed-nfs-storage   22m
www-web-1    Bound    pvc-4d8da9d3-cb51-49e6-bd0e-c5bad7878551   20Mi       RWO            managed-nfs-storage   21m
```

Reference: https://www.cnblogs.com/panwenbin-logs/p/12196286.html
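The `volume.beta.kubernetes.io/storage-class` annotation used above is the legacy form; on current clusters the `spec.storageClassName` field expresses the same binding. A minimal sketch generating the equivalent claim (the `/tmp` path is arbitrary; apply with kubectl as before):

```shell
# Generate a PVC equivalent to test-claim, using the storageClassName field
# instead of the beta annotation.
cat > /tmp/test-claim-modern.yaml <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF

grep -n 'storageClassName' /tmp/test-claim-modern.yaml
```

`kubectl apply -f /tmp/test-claim-modern.yaml` then behaves the same as the annotated version on clusters that still honor the annotation.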
Views 118 · Comments 0 · Favorites 0


兜兜    2021-09-15 15:56:38    2021-10-27 15:14:16   

### Keepalived LVS ingress
The firewall maps ports 80/443 to the LVS VIP's 80/443, and LVS load-balances across the K8S node IPs on 80/443. The ingress-nginx-controller service is exposed via HostNetwork on 80/443. Resulting flow: 119.x.x.x:80/443 --> 172.16.100.99:80/443 (LVS VIP) --> 172.16.100.100:80/443, 172.16.100.101:80/443, 172.16.100.102:80/443.

Planned layout:

```sh
+----------------+----------------+--------+--------------------------+
| Host           | IP             | Port   | SoftWare                 |
+----------------+----------------+--------+--------------------------+
| LVS01          | 172.16.100.27  | 80/443 | LVS,Keepalived           |
| LVS02          | 172.16.100.28  | 80/443 | LVS,Keepalived           |
| RS/k8s-master1 | 172.16.100.100 | 80/443 | ingress-nginx-controller |
| RS/k8s-master2 | 172.16.100.101 | 80/443 | ingress-nginx-controller |
| RS/k8s-node1   | 172.16.100.102 | 80/443 | ingress-nginx-controller |
| VIP            | 172.16.100.99  | 80/443 | /                        |
+----------------+----------------+--------+--------------------------+
```

Install LVS and keepalived (`172.16.100.27/172.16.100.28`):

```sh
$ yum install ipvsadm keepalived -y
$ systemctl enable keepalived
```

Configure keepalived (`172.16.100.27`):

```sh
$ cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_27    # router id
}

vrrp_instance VI_1 {
    state MASTER        # master node
    interface ens192    # network interface
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.100.99
    }
}

virtual_server 172.16.100.99 443 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 172.16.100.100 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.101 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.102 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 172.16.100.99 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 172.16.100.100 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.101 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.102 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
EOF
```

Configure keepalived (`172.16.100.28`) — only the differing fields are shown:

```sh
$ cat /etc/keepalived/keepalived.conf
...
global_defs {
    router_id LVS_28    # router id, different on each machine
}

vrrp_instance VI_1 {
    state BACKUP        # backup node
    interface ens192    # network interface name
    virtual_router_id 51
    priority 99         # lower priority than the master
...
```

Configure the RS nodes (`172.16.100.100/172.16.100.101/172.16.100.102`). Note the quoted `'EOF'` delimiter, which keeps `$1`, `$vip`, `$mask` and `$dev` from being expanded while the script file is being written:

```sh
$ cat > /etc/init.d/lvs_rs.sh <<'EOF'
#!/bin/bash
vip=172.16.100.99
mask='255.255.255.255'
dev=lo:1
case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $dev $vip netmask $mask #broadcast $vip up
    echo "The RS Server is Ready!"
    ;;
stop)
    ifconfig $dev down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "The RS Server is Canceled!"
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
EOF
```

Run the script:

```sh
$ chmod +x /etc/init.d/lvs_rs.sh
$ /etc/init.d/lvs_rs.sh start
```

```sh
$ ip a    # check that the VIP is bound on lo:1
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.16.100.99/32 scope global lo:1
       valid_lft forever preferred_lft forever
```

Start keepalived (`172.16.100.27/172.16.100.28`):

```sh
$ systemctl start keepalived
```

Check that the VIP is bound:

```sh
$ ip a
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:93:ed:a4 brd ff:ff:ff:ff:ff:ff
    inet 172.16.100.27/24 brd 172.16.100.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 172.16.100.99/32 scope global ens192
       valid_lft forever preferred_lft forever
```

Check the LVS state:

```sh
$ ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.100.99:80 rr persistent 50
  -> 172.16.100.100:80            Route   1      0          0
  -> 172.16.100.101:80            Route   1      0          0
  -> 172.16.100.102:80            Route   1      0          0
TCP  172.16.100.99:443 rr persistent 50
  -> 172.16.100.100:443           Route   1      0          0
  -> 172.16.100.101:443           Route   1      0          0
  -> 172.16.100.102:443           Route   1      0          0
```
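The virtual-server/real-server mapping can be read mechanically out of captured `ipvsadm -L -n` output: each `TCP` line opens a virtual server and each `->` line names a real server under it. A minimal sketch over a few sample lines from the output above (the `/tmp` file path is illustrative):

```shell
# Extract "<VIP:port> -> <real server>" pairs from a captured ipvsadm dump.
cat > /tmp/ipvsadm.txt <<'EOF'
TCP  172.16.100.99:80 rr persistent 50
  -> 172.16.100.100:80            Route   1      0          0
  -> 172.16.100.101:80            Route   1      0          0
TCP  172.16.100.99:443 rr persistent 50
  -> 172.16.100.100:443           Route   1      0          0
EOF

awk '$1 == "TCP" {vip = $2}
     $1 == "->"  {print vip, "->", $2}' /tmp/ipvsadm.txt
```

On a live director the same filter over `ipvsadm -L -n` gives a quick one-line-per-backend view for comparing against the keepalived configuration.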
Views 105 · Comments 0 · Favorites 0


兜兜    2021-09-15 15:55:33    2021-10-27 15:13:48   

### kubernetes
There are two common ways to deploy a production Kubernetes cluster:

- **kubeadm** — a deployment tool that provides `kubeadm init` and `kubeadm join` for quickly standing up a Kubernetes cluster.
- **Binary packages** — download the release binaries from GitHub and deploy each component by hand to assemble the cluster.

This test cluster is built with kubeadm.

Architecture:

```sh
                                 hostport/nodeport
                               vip:172.16.100.99
                              +----------------+        +-----------------+
                              |   keepalived   |  +---->| K8S-Master1     |
 119.x.x.x                    |  +---------+   |  |     | 172.16.100.100  |
+--------+    +----------+    |  |  lvs27  |---+--+     +-----------------+
| Client |--->| firewall |--->|  +---------+   |  |     +-----------------+
+--------+    +----------+    |  +---------+   |  +---->| K8S-Master2     |
             172.16.100.2     |  |  lvs28  |---+--+     | 172.16.100.101  |
                              |  +---------+   |  |     +-----------------+
                              | 172.16.100.27  |  |     +-----------------+
                              | 172.16.100.28  |  +---->| K8S-Node1       |
                              +----------------+        | 172.16.100.102  |
                                                        +-----------------+
```

#### Deploy a highly available load balancer

As a container cluster system, Kubernetes achieves Pod-level self-healing through health checks plus restart policies, distributes Pods across Nodes with its scheduler while maintaining the desired replica count, and re-launches Pods on other Nodes when a Node fails — high availability at the application layer.

For the cluster itself, high availability also covers two more layers: the etcd database and the Kubernetes Master components. A kubeadm-built cluster runs only a single etcd by default, which is a single point of failure, so here we build a separate etcd cluster.

The Master node acts as the control center, keeping the whole cluster healthy by continually communicating with the kubelet and kube-proxy on the worker nodes. If the Master fails, no cluster management is possible through kubectl or the API.

The Master runs three services: kube-apiserver, kube-controller-manager and kube-scheduler. kube-controller-manager and kube-scheduler already achieve high availability through their own leader election, so Master HA mainly concerns kube-apiserver. Since that component serves an HTTP API, making it highly available is like any web server: put a load balancer in front of it and scale horizontally as needed.

[See the installation guide](https://ynotes.cn/blog/article_detail/280)

#### Deploy the etcd cluster

Prepare the cfssl certificate tool. cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to use than openssl. Run this on any one server; we use the Master node here.

```sh
$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
$ mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
$ mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
```

Generate the etcd certificates:

```sh
$ mkdir -p ~/etcd_tls
$ cd ~/etcd_tls
```

```sh
$ cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```

```sh
$ cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangdong",
      "ST": "Guangzhou"
    }
  ]
}
EOF
```

Generate the CA certificate:

```sh
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```

Issue the etcd HTTPS certificate with the self-signed CA. Create the certificate signing request file:

```sh
$ cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "172.16.100.100",
    "172.16.100.101",
    "172.16.100.102",
    "172.16.100.103",
    "172.16.100.104"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangdong",
      "ST": "Guangzhou"
    }
  ]
}
EOF
```

Generate the certificate:

```sh
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
```

Download the binaries:

```sh
$ wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
```

```sh
$ mkdir /opt/etcd/{bin,cfg,ssl} -p
$ tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
$ mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
```

Create the etcd configuration file:

```sh
$ cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.100.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.100.100:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.100.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.100.100:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.16.100.100:2380,etcd-2=https://172.16.100.101:2380,etcd-3=https://172.16.100.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```

Manage etcd with systemd:

```sh
$ cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem --logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Copy the certificates generated earlier:

```sh
$ cp ~/etcd_tls/ca*pem ~/etcd_tls/server*pem /opt/etcd/ssl/
```

Copy everything generated on node 1 to nodes 2 and 3:

```sh
$ scp -r /opt/etcd/ root@172.16.100.101:/opt/
$ scp -r /opt/etcd/ root@172.16.100.102:/opt/
$ scp -r /usr/lib/systemd/system/etcd.service root@172.16.100.102:/usr/lib/systemd/system/
$ scp -r /usr/lib/systemd/system/etcd.service root@172.16.100.101:/usr/lib/systemd/system/
```

On nodes 2 and 3, edit `/opt/etcd/cfg/etcd.conf` to set that node's own `ETCD_NAME` (etcd-2 / etcd-3) and replace the listen/advertise IPs with the node's own address; `ETCD_INITIAL_CLUSTER` stays the same on every node.

Start etcd on all three nodes at the same time:

```sh
$ systemctl daemon-reload
$ systemctl start etcd
$ systemctl enable etcd
```

```sh
$ ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.100.100:2379,https://172.16.100.101:2379,https://172.16.100.102:2379" endpoint health --write-out=table
```

```sh
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://172.16.100.100:2379 |   true | 12.812954ms |       |
| https://172.16.100.102:2379 |   true | 13.596982ms |       |
| https://172.16.100.101:2379 |   true | 14.607151ms |       |
+-----------------------------+--------+-------------+-------+
```

#### Install Docker

[See the Docker installation section](https://ynotes.cn/blog/article_detail/271)

#### Install kubeadm, kubelet and kubectl

```sh
$ cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

Install kubeadm, kubelet and kubectl:

```sh
$ yum install -y kubelet-1.18.20 kubeadm-1.18.20 kubectl-1.18.20
$ systemctl enable kubelet
```

Cluster configuration file:

```sh
$ cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.100.100
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - k8s-master1
  - k8s-master2
  - k8s-master3
  - 172.16.100.100
  - 172.16.100.101
  - 172.16.100.102
  - 172.16.100.111
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.16.100.111:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - https://172.16.100.100:2379
    - https://172.16.100.101:2379
    - https://172.16.100.102:2379
    caFile: /opt/etcd/ssl/ca.pem
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
EOF
```

Initialize the cluster (`k8s-master1`):

```sh
$ kubeadm init --config kubeadm-config.yaml
```

```sh
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde
```

Configure kubectl access to the cluster:

```sh
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Join the cluster (`k8s-master2`). First copy the certificates from k8s-master1:

```sh
$ scp -r 172.16.100.100:/etc/kubernetes/pki/ /etc/kubernetes/
```

```sh
$ kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde \
    --control-plane
```

Join the cluster (`k8s-node1`):

```sh
$ kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde
```

Allow pods to be scheduled on the master nodes:

```sh
$ kubectl taint node k8s-master1 node-role.kubernetes.io/master-
$ kubectl taint node k8s-master2 node-role.kubernetes.io/master-
```

#### Deploy the network

`Option 1: calico (BGP/IPIP)` — the most mainstream network component. BGP mode performs best, but pods cannot talk to non-BGP nodes; IPIP has no network restrictions and performs slightly worse than BGP. [Reference: configuring the calico network on k8s](https://ynotes.cn/blog/article_detail/274)

`Option 2: flannel (vxlan/host-gw)` — host-gw performs best, but pods cannot reach nodes outside the cluster network. vxlan can cross networks and performs slightly worse than host-gw. [Reference: K8S — deploying a k8s cluster with kubeadm (part 1), flannel section](https://ynotes.cn/blog/article_detail/271)

`Option 3: cilium (eBPF) + kube-router (BGP routing)` — the best performance of the three, but pods cannot reach non-BGP nodes; currently in Beta, not recommended for production. [Reference: configuring the cilium network on k8s](https://ynotes.cn/blog/article_detail/286)

#### Reset the K8S cluster

##### Reset a node with kubeadm

```sh
$ kubeadm reset
```

`kubeadm reset` cleans up the node-local files created by `kubeadm init` or `kubeadm join`. On control-plane nodes it also removes the node's local etcd member from the etcd cluster and removes the node's entry from the kubeadm ClusterStatus object — a kubeadm-managed Kubernetes API object that holds the list of kube-apiserver endpoints. `kubeadm reset phase` can run individual phases of this workflow; to skip phases, use the `--skip-phases` parameter, which works like the `kubeadm join` and `kubeadm init` phase runners.

##### Clean up the network configuration (flannel)

```sh
$ systemctl stop kubelet
$ systemctl stop docker
$ rm /etc/cni/net.d/* -rf
$ ifconfig cni0 down
$ ifconfig flannel.1 down
$ ifconfig docker0 down
$ ip link delete cni0
$ ip link delete flannel.1
$ systemctl start docker
```

##### Clean up iptables

```sh
$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```

##### Clean up ipvs

```sh
$ ipvsadm --clear
```

##### Clean up an external etcd

_`If an external etcd is used, kubeadm reset does not delete any etcd data. This means that running kubeadm init again against the same etcd endpoints will show the previous cluster's state.`_

To wipe the etcd data, use a client such as etcdctl, for example:

```sh
$ etcdctl del "" --prefix
```
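After copying `/opt/etcd/` from node 1, each remaining node needs its own `ETCD_NAME` and its own IPs in `etcd.conf`, while `ETCD_INITIAL_CLUSTER` is left untouched. A minimal sed sketch of the node-2 rewrite, run here against a trimmed copy of the node-1 file (the `/tmp` path is illustrative; on the real node the target is `/opt/etcd/cfg/etcd.conf`):

```shell
# Trimmed copy of node 1's etcd.conf; ETCD_INITIAL_CLUSTER is deliberately
# omitted because it must NOT be rewritten.
cat > /tmp/etcd.conf <<'EOF'
ETCD_NAME="etcd-1"
ETCD_LISTEN_PEER_URLS="https://172.16.100.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.100.100:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.100.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.100.100:2379"
EOF

# Rewrite member name and node IP for node 2.
sed -i -e 's/etcd-1/etcd-2/' \
       -e 's/172\.16\.100\.100/172.16.100.101/g' /tmp/etcd.conf
cat /tmp/etcd.conf
```

For node 3 the same rewrite uses `etcd-3` and `172.16.100.102`.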
Views 82 · Comments 0 · Favorites 0


兜兜    2021-09-15 15:38:16    2021-10-27 15:14:05   

### nginx Keepalived
Highly available K8S ApiServer: nginx's stream module provides L4 load balancing, and keepalived makes nginx itself highly available. Resulting flow: 172.16.100.111:16443 --> 172.16.100.100:6443 / 172.16.100.101:6443.

Planned layout:

```sh
+-------------+----------------+-------+----------------------------+
| Host        | IP             | Port  | SoftWare                   |
+-------------+----------------+-------+----------------------------+
| k8s-master1 | 172.16.100.100 | 6443  | Nginx,Keepalived,ApiServer |
| k8s-master2 | 172.16.100.101 | 6443  | Nginx,Keepalived,ApiServer |
| VIP         | 172.16.100.111 | 16443 | /                          |
+-------------+----------------+-------+----------------------------+
```

Install on both nodes:

```sh
$ yum install nginx nginx-mod-stream keepalived -y    # nginx-mod-stream provides the L4 stream module
```

Configure nginx on both nodes:

```sh
$ cat /etc/nginx/nginx.conf
...
events {
    worker_connections 1024;
}

# L4 load balancing for the two Master apiserver components
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 172.16.100.100:6443;   # Master1 APISERVER IP:PORT
        server 172.16.100.101:6443;   # Master2 APISERVER IP:PORT
    }

    server {
        listen 16443;    # nginx shares the master nodes with the apiserver, so this port cannot be 6443 or it would conflict
        proxy_pass k8s-apiserver;
    }
}
...
```

Configure keepalived on the master node:

```sh
$ cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id keepalived_100
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192            # set to the actual interface name
    virtual_router_id 51        # VRRP route ID, unique per instance
    priority 100                # priority; the backup uses 90
    advert_int 1                # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        172.16.100.111/24
    }
    track_script {
        check_nginx
    }
}
EOF
```

nginx check script. Note the quoted `'EOF'` delimiter, which keeps `$(...)`, `$$` and `$count` from being expanded while the script file is being written:

```sh
$ cat > /etc/keepalived/check_nginx.sh <<'EOF'
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
$ chmod +x /etc/keepalived/check_nginx.sh
```

Configure keepalived on the backup node:

```sh
$ cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id keepalived_101
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192            # set to the actual interface name
    virtual_router_id 51        # VRRP route ID, unique per instance
    priority 90                 # lower priority for the backup
    advert_int 1                # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        172.16.100.111/24
    }
    track_script {
        check_nginx
    }
}
EOF
```

Start nginx and keepalived on both nodes:

```sh
$ systemctl restart nginx
$ systemctl enable nginx
$ systemctl restart keepalived
$ systemctl enable keepalived
```

Check that the VIP is bound:

```sh
$ ip a
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:93:6a:3a brd ff:ff:ff:ff:ff:ff
    inet 172.16.100.101/24 brd 172.16.100.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 172.16.100.111/32 scope global ens192
```

Test:

```sh
$ curl http://172.16.100.111
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
```
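The counting logic inside check_nginx.sh can be exercised offline against a captured `ss -antp` snapshot, which makes the health-check behavior easy to reason about before wiring it into keepalived (the `/tmp` file and its sample lines are illustrative):

```shell
# Reproduce the listener-count check from check_nginx.sh against a captured
# `ss -antp` snapshot instead of the live socket table.
cat > /tmp/ss.txt <<'EOF'
LISTEN 0 511 0.0.0.0:16443 0.0.0.0:* users:(("nginx",pid=1234,fd=5))
ESTAB  0 0   172.16.100.100:16443 172.16.100.50:39154 users:(("nginx",pid=1234,fd=8))
EOF

count=$(grep -c 16443 /tmp/ss.txt)
if [ "$count" -eq 0 ]; then echo "nginx down"; else echo "nginx up"; fi
```

A zero count makes keepalived's `vrrp_script` fail, which lowers the node's effective priority and lets the backup take over the VIP.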
Views 32 · Comments 0 · Favorites 0


兜兜    2021-09-11 14:22:25    2021-10-27 15:15:46   

### kubernetes CNI cilium
`注意:cilium+kube-router目前仅beta阶段` #### 准备阶段 系统环境 ```sh 节点类型 IP 主机名 系统 内核版本 Master 172.16.13.80 shudoon101 CentOS7.9.2009 5.4.144-1.el7.elrepo.x86_64 Node 172.16.13.81 shudoon102 CentOS7.9.2009 5.4.144-1.el7.elrepo.x86_64 Node 172.16.13.82 shudoon103 CentOS7.9.2009 5.4.144-1.el7.elrepo.x86_64 ``` 软件环境 ```sh CentOS7 kernel 5.4.144 (cilium 内核要求4.9.17+) kubernets 1.20 cilium(1.9.10): CNI网络插件,包含kube-proxy代理功能 kube-router: 仅使用它的BGP路由功能 ``` 挂载bpf文件系统 ```sh $ mount bpffs /sys/fs/bpf -t bpf ``` ```sh $ cat >> /etc/fstab <<E0F bpffs /sys/fs/bpf bpf defaults 0 0 EOF ``` #### 安装cilium 快速安装 ```sh kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml ``` 查看状态 ```sh $ kubectl -n kube-system get pods ``` ```sh NAME READY STATUS RESTARTS AGE cilium-csknd 1/1 Running 0 87m cilium-dwz2c 1/1 Running 0 87m cilium-nq8r9 1/1 Running 1 87m cilium-operator-59c9d5bc94-sh4c2 1/1 Running 0 87m coredns-7f89b7bc75-bhgrp 1/1 Running 0 7m24s coredns-7f89b7bc75-sfgvq 1/1 Running 0 7m24s ``` 部署connectivity test ```sh $ wget https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml $ sed -i 's/google.com/baidu.com/g' connectivity-check.yaml #测试外网的地址改成baidu.com $ kubectl apply -f connectivity-check.yaml ``` 查看pod的状态 ```sh kubectl get pods NAME READY STATUS RESTARTS AGE echo-a-dc9bcfd8f-hgc64 1/1 Running 0 9m59s echo-b-5884b7dc69-bl5px 1/1 Running 0 9m59s echo-b-host-cfdd57978-dg6gw 1/1 Running 0 9m59s host-to-b-multi-node-clusterip-c4ff7ff64-m9zwz 1/1 Running 0 9m58s host-to-b-multi-node-headless-84d8f6f4c4-8b797 1/1 Running 1 9m57s pod-to-a-5cdfd4754d-jgmnt 1/1 Running 0 9m59s pod-to-a-allowed-cnp-7d7c8f9f9b-f9lpc 1/1 Running 0 9m58s pod-to-a-denied-cnp-75cb89dfd-jsjd4 1/1 Running 0 9m59s pod-to-b-intra-node-nodeport-c6d79965d-w98jx 1/1 Running 1 9m57s pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5 1/1 Running 0 9m58s pod-to-b-multi-node-headless-6696c5f8cd-fvcsl 1/1 Running 1 9m58s 
pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns 1/1 Running 0 9m57s pod-to-external-1111-d5c7bb4c4-sflfc 1/1 Running 0 9m59s pod-to-external-fqdn-allow-google-cnp-dcb4d867d-dxqzx 1/1 Running 0 7m16s ``` #### 安装Hubble ##### 安装Hubble UI 配置环境变量 ```sh $ export CILIUM_NAMESPACE=kube-system ``` 快速安装 `Cilium 1.9.2+` ```sh $ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-hubble-install.yaml ``` helm安装 `Cilium <1.9.2` ```sh # Set this to your installed Cilium version $ export CILIUM_VERSION=1.9.10 # Please set any custom Helm values you may need for Cilium, # such as for example `--set operator.replicas=1` on single-cluster nodes. $ helm template cilium cilium/cilium --version $CILIUM_VERSION \\ --namespace $CILIUM_NAMESPACE \\ --set hubble.tls.auto.method="cronJob" \\ --set hubble.listenAddress=":4244" \\ --set hubble.relay.enabled=true \\ --set hubble.ui.enabled=true > cilium-with-hubble.yaml # This will modify your existing Cilium DaemonSet and ConfigMap $ kubectl apply -f cilium-with-hubble.yaml ``` 配置端口转发 ```sh $ kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80 ``` ##### 安装Hubble CLI 下载hubble CLI ```sh $ export CILIUM_NAMESPACE=kube-system $ export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt) $ curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz" $ tar zxf hubble-linux-amd64.tar.gz $ mv hubble /usr/local/bin ``` 开启端口转发 ```sh $ kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80 ``` 查看节点状态 ```sh hubble --server localhost:4245 status ``` ```sh Healthcheck (via localhost:4245): Ok Current/Max Flows: 12288/12288 (100.00%) Flows/s: 22.71 Connected Nodes: 3/3 ``` 观察cilium信息 ```sh hubble --server localhost:4245 observe ``` ```sh Sep 11 06:52:06.119: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> 
default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:06.122: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:06.122: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:06.123: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:06.123: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:06.793: default/pod-to-a-5cdfd4754d-jgmnt:54735 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:06.793: default/pod-to-a-5cdfd4754d-jgmnt:54735 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:06.793: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-a-5cdfd4754d-jgmnt:54735 to-overlay FORWARDED (UDP)
Sep 11 06:52:06.793: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-a-5cdfd4754d-jgmnt:54735 to-overlay FORWARDED (UDP)
Sep 11 06:52:06.903: kube-system/coredns-7f89b7bc75-bhgrp:32928 -> 172.16.13.80:6443 to-stack FORWARDED (TCP Flags: ACK)
Sep 11 06:52:06.903: kube-system/coredns-7f89b7bc75-bhgrp:32928 <- 172.16.13.80:6443 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.043: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.043: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.043: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.043: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.044: kube-system/hubble-relay-74b76459f9-d66h9:60564 -> 172.16.13.81:4244 to-stack FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.046: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.046: default/echo-b-5884b7dc69-bl5px:8080 <>
default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.047: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.047: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.050: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.051: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.051: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.051: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.095: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <> kube-system/coredns-7f89b7bc75-bhgrp:53 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <>
kube-system/coredns-7f89b7bc75-bhgrp:53 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <- kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <- kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.099: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.099: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.099: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.100:
default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.101: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.101: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.102: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.103: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.104: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.104: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: ACK, FIN)
```

Open http://127.0.0.1:12000/ in a browser.

#### Replace kube-proxy with Cilium (`the default is the coexistence mode, Probe`)

Delete kube-proxy

```sh
$ kubectl -n kube-system delete ds kube-proxy   # delete the kube-proxy DaemonSet
# Delete the configmap as well to avoid kube-proxy being reinstalled during a kubeadm upgrade (works only for K8s 1.19 and newer)
$ kubectl -n kube-system delete cm kube-proxy
# Run on every node to flush the iptables rules left behind by kube-proxy
$ iptables-restore <(iptables-save | grep -v KUBE)
```

Uninstall the quick-installed cilium

```sh
$ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml
```

Re-add the cilium chart repo for helm3

```sh
$ helm repo add cilium https://helm.cilium.io/
```

```sh
$ export REPLACE_WITH_API_SERVER_IP=172.16.13.80
$ export REPLACE_WITH_API_SERVER_PORT=6443
$ helm upgrade --install cilium cilium/cilium --version 1.9.10 \
  --namespace kube-system \
  --set kubeProxyReplacement=strict \
  --set \
k8sServiceHost=$REPLACE_WITH_API_SERVER_IP \
  --set k8sServicePort=$REPLACE_WITH_API_SERVER_PORT
```

Check the deployment mode

```sh
$ kubectl -n kube-system get pods -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-9szpn   1/1     Running   0          45s
cilium-fllk6   1/1     Running   0          45s
cilium-sn2q5   1/1     Running   0          45s
$ kubectl exec -it -n kube-system cilium-9szpn -- cilium status | grep KubeProxyReplacement
KubeProxyReplacement:   Strict   [ens192 (Direct Routing)]   # Strict = full kube-proxy replacement
```

Check cilium's status

```sh
$ kubectl exec -ti -n kube-system cilium-9szpn -- cilium status --verbose
```

```sh
KVStore:                Ok   Disabled
Kubernetes:             Ok   1.20 (v1.20.10) [linux/amd64]
Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   Strict   [ens192 (Direct Routing)]
Cilium:                 Ok   1.9.10 (v1.9.10-4e26039)
NodeMonitor:            Listening for events on 8 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok
IPAM:                   IPv4: 6/255 allocated from 10.244.1.0/24,
Allocated addresses:
  10.244.1.164 (health)
  10.244.1.169 (default/pod-to-b-intra-node-nodeport-c6d79965d-w98jx [restored])
  10.244.1.211 (default/busybox-deployment-99db9cf4d-dch94 [restored])
  10.244.1.217 (default/echo-b-5884b7dc69-bl5px [restored])
  10.244.1.221 (router)
  10.244.1.6 (default/busybox-deployment-99db9cf4d-pgxzz [restored])
BandwidthManager:       Disabled
Host Routing:           Legacy
Masquerading:           BPF   [ens192]   10.244.1.0/24
Clock Source for BPF:   ktime
Controller Status:      38/38 healthy
  Name                                          Last success   Last error   Count   Message
  cilium-health-ep                              42s ago        never        0       no error
  dns-garbage-collector-job                     53s ago        never        0       no error
  endpoint-1383-regeneration-recovery           never          never        0       no error
  endpoint-1746-regeneration-recovery           never          never        0       no error
  endpoint-1780-regeneration-recovery           never          never        0       no error
  endpoint-3499-regeneration-recovery
  never          never        0       no error
  endpoint-546-regeneration-recovery            never          never        0       no error
  endpoint-777-regeneration-recovery            never          never        0       no error
  k8s-heartbeat                                 23s ago        never        0       no error
  mark-k8s-node-as-available                    4m43s ago      never        0       no error
  metricsmap-bpf-prom-sync                      8s ago         never        0       no error
  neighbor-table-refresh                        4m43s ago      never        0       no error
  resolve-identity-777                          4m42s ago      never        0       no error
  restoring-ep-identity (1383)                  4m43s ago      never        0       no error
  restoring-ep-identity (1746)                  4m43s ago      never        0       no error
  restoring-ep-identity (1780)                  4m43s ago      never        0       no error
  restoring-ep-identity (3499)                  4m43s ago      never        0       no error
  restoring-ep-identity (546)                   4m43s ago      never        0       no error
  sync-endpoints-and-host-ips                   43s ago        never        0       no error
  sync-lb-maps-with-k8s-services                4m43s ago      never        0       no error
  sync-policymap-1383                           41s ago        never        0       no error
  sync-policymap-1746                           41s ago        never        0       no error
  sync-policymap-1780                           41s ago        never        0       no error
  sync-policymap-3499                           41s ago        never        0       no error
  sync-policymap-546                            40s ago        never        0       no error
  sync-policymap-777                            41s ago        never        0       no error
  sync-to-k8s-ciliumendpoint (1383)             3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (1746)             3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (1780)             3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (3499)             3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (546)              3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (777)              12s ago        never        0       no error
  template-dir-watcher                          never          never        0       no error
  update-k8s-node-annotations                   4m51s ago      never        0       no error
  waiting-initial-global-identities-ep (1383)   4m43s ago      never        0       no error
  waiting-initial-global-identities-ep (1746)   4m43s ago      never        0       no error
  waiting-initial-global-identities-ep (1780)   4m43s ago      never        0       no error
  waiting-initial-global-identities-ep (3499)   4m43s ago      never        0       no error
Proxy Status:   OK, ip 10.244.1.221, 0 redirects active on ports 10000-20000
Hubble:         Ok   Current/Max Flows: 3086/4096 (75.34%), Flows/s: 11.02   Metrics: Disabled
KubeProxyReplacement Details:
  Status:              Strict
  Protocols:           TCP, UDP
  Devices:
  ens192 (Direct Routing)
  Mode:                SNAT
  Backend Selection:   Random
  Session Affinity:    Enabled
  XDP Acceleration:    Disabled
  Services:            # "Enabled" below marks the supported service types
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767)
  - LoadBalancer:   Enabled
  - externalIPs:    Enabled
  - HostPort:       Enabled
BPF Maps:   dynamic sizing: on (ratio: 0.002500)
  Name                          Size
  Non-TCP connection tracking   73653
  TCP connection tracking       147306
  Endpoint policy               65535
  Events                        8
  IP cache                      512000
  IP masquerading agent         16384
  IPv4 fragmentation            8192
  IPv4 service                  65536
  IPv6 service                  65536
  IPv4 service backend          65536
  IPv6 service backend          65536
  IPv4 service reverse NAT      65536
  IPv6 service reverse NAT      65536
  Metrics                       1024
  NAT                           147306
  Neighbor table                147306
  Global policy                 16384
  Per endpoint policy           65536
  Session affinity              65536
  Signal                        8
  Sockmap                       65535
  Sock reverse NAT              73653
  Tunnel                        65536
Cluster health:   3/3 reachable   (2021-09-11T08:08:45Z)
  Name                     IP             Node        Endpoints
  shudoon102 (localhost)   172.16.13.81   reachable   reachable
  shudoon101               172.16.13.80   reachable   reachable
  shudoon103               172.16.13.82   reachable   reachable
```

Create an nginx service

The nginx deployment

```sh
cat nginx-deployment.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
```

Deploy it

```sh
$ kubectl apply -f nginx-deployment.yaml
```

Get the status of the nginx pods

```sh
$ kubectl get pods -l run=my-nginx -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
my-nginx-5b56ccd65f-kdrhn   1/1     Running   0          3m36s   10.244.2.157   shudoon103   <none>           <none>
my-nginx-5b56ccd65f-ltxc6   1/1     Running   0          3m36s   10.244.1.9     shudoon102   <none>           <none>
```

```sh
$ kubectl get svc my-nginx
NAME       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-nginx   NodePort   10.97.255.81   <none>        80:30074/TCP   4h
```

View the service entries generated by Cilium eBPF

```sh
kubectl exec -it -n kube-system cilium-9szpn -- cilium service list
ID   Frontend              Service Type   Backend
1    10.111.52.74:80
ClusterIP      1 => 10.244.2.249:8081
2    10.96.0.1:443         ClusterIP      1 => 172.16.13.80:6443
3    10.96.0.10:53         ClusterIP      1 => 10.244.0.37:53
                                          2 => 10.244.0.232:53
4    10.96.0.10:9153       ClusterIP      1 => 10.244.0.37:9153
                                          2 => 10.244.0.232:9153
5    10.102.133.211:8080   ClusterIP      1 => 10.244.2.129:8080
6    10.100.95.181:8080    ClusterIP      1 => 10.244.1.217:8080
7    0.0.0.0:31414         NodePort       1 => 10.244.1.217:8080
8    172.16.13.81:31414    NodePort       1 => 10.244.1.217:8080
9    10.97.255.81:80       ClusterIP      1 => 10.244.2.157:80   # pod backends behind the service IP 10.97.255.81
                                          2 => 10.244.1.9:80
10   0.0.0.0:30074         NodePort       1 => 10.244.2.157:80   # NodePort
                                          2 => 10.244.1.9:80
11   172.16.13.81:30074    NodePort       1 => 10.244.2.157:80
                                          2 => 10.244.1.9:80
12   10.98.83.147:80       ClusterIP      1 => 10.244.2.144:4245
13   172.16.13.81:40000    HostPort       1 => 10.244.1.217:8080
14   0.0.0.0:40000         HostPort       1 => 10.244.1.217:8080
```

Confirm that no iptables service rules were generated

```sh
$ iptables-save | grep KUBE-SVC
[ empty line ]
```

Test the nginx service

```sh
$ curl http://10.97.255.81
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
```

#### Integrate kube-router

```sh
$ curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/v0.4.0/daemonset/generic-kuberouter-only-advertise-routes.yaml
```

Edit the configuration file

```sh
$ cat generic-kuberouter-only-advertise-routes.yaml
```

```yaml
...
- "--run-router=true"
- "--run-firewall=false"
- "--run-service-proxy=false"
- "--enable-cni=false"
- "--enable-pod-egress=false"
- "--enable-ibgp=true"
- "--enable-overlay=true"
# - "--peer-router-ips=<CHANGE ME>"
# - "--peer-router-asns=<CHANGE ME>"
# - "--cluster-asn=<CHANGE ME>"
- "--advertise-cluster-ip=true"
- "--advertise-external-ip=true"
- "--advertise-loadbalancer-ip=true"
...
```

```sh
$ kubectl apply -f generic-kuberouter-only-advertise-routes.yaml
```

```sh
$ kubectl -n kube-system get pods -l k8s-app=kube-router
NAME                READY   STATUS    RESTARTS   AGE
kube-router-5q2xw   1/1     Running   0          72m
kube-router-cl9nc   1/1     Running   0          72m
kube-router-tmdnt   1/1     Running   0          72m
```

Reinstall cilium in native-routing mode

```sh
$ helm delete cilium -n kube-system   # remove the old helm release
$ export REPLACE_WITH_API_SERVER_IP=172.16.13.80
$ export REPLACE_WITH_API_SERVER_PORT=6443
$ helm install cilium cilium/cilium --version 1.9.10 \
  --namespace kube-system \
  --set ipam.mode=kubernetes \
  --set tunnel=disabled \
  --set nativeRoutingCIDR=172.16.13.0/24 \
  --set kubeProxyReplacement=strict \
  --set k8sServiceHost=$REPLACE_WITH_API_SERVER_IP \
  --set k8sServicePort=$REPLACE_WITH_API_SERVER_PORT
```

```sh
kubectl -n kube-system get pods -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-f22c2   1/1     Running   0          5m58s
cilium-ftg8n   1/1     Running   0          5m58s
cilium-s4mng   1/1     Running   0          5m58s
```

```sh
$ kubectl -n kube-system exec -ti cilium-s4mng -- ip route list scope global
default via 172.16.13.254 dev ens192 proto static metric 100
10.244.0.0/24 via 10.244.0.1 dev cilium_host src 10.244.0.1   # local PodCIDR
10.244.1.0/24 via 172.16.13.81 dev ens192 proto 17            # BGP route
10.244.2.0/24 via 172.16.13.82 dev ens192 proto 17            # BGP route
```

Verify the installation

```sh
$ curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
$ tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
```

Verify the cilium installation

```sh
$ cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\      Cilium:         OK
 \__/¯¯\__/      Operator:       OK
 /¯¯\__/¯¯\      Hubble:         OK
 \__/¯¯\__/      ClusterMesh:    disabled
    \__/

Deployment        hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment        hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet         cilium             Desired: 3, Ready: 3/3, Available: 3/3
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Containers:       cilium-operator    Running: 2
                  hubble-relay       Running: 1
                  hubble-ui          Running: 1
                  cilium             Running: 3
Cluster Pods:     20/20 managed
by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.9.10: 3
                  cilium-operator    quay.io/cilium/operator-generic:v1.9.10: 2
                  hubble-relay       quay.io/cilium/hubble-relay:v1.9.10@sha256:f15bc1a1127be143c957158651141443c9fa14683426ef8789cf688fb94cae55: 1
                  hubble-ui          quay.io/cilium/hubble-ui:v0.7.3: 1
                  hubble-ui          quay.io/cilium/hubble-ui-backend:v0.7.3: 1
                  hubble-ui          docker.io/envoyproxy/envoy:v1.14.5: 1
```

Verify cilium against the cluster

```sh
$ cilium connectivity test
```

Reference: https://docs.cilium.io/en/v1.9/gettingstarted/k8s-install-default/
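The `pod-to-a-allowed-cnp` and `pod-to-a-denied-cnp` pods in the connectivity check exercise CiliumNetworkPolicy enforcement. As a minimal illustration of that policy type (the labels and policy name here are hypothetical, not copied from the check's manifest), a policy that admits only one labeled client to a server on TCP 8080 could look like:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-trusted-client   # hypothetical name
spec:
  endpointSelector:            # pods this policy protects
    matchLabels:
      app: echo-server         # hypothetical label
  ingress:
  - fromEndpoints:             # only these client pods may connect
    - matchLabels:
        app: trusted-client    # hypothetical label
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
```

Any pod matching `endpointSelector` then drops ingress from endpoints not matching `fromEndpoints`, which is exactly the behavior the denied-cnp test pod observes.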
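Finally, a few node-level checks tie the pieces above together. This is only a sketch: the node IP `172.16.13.81` and NodePort `30074` are taken from the example outputs above and will differ per cluster, and each step is guarded so it degrades gracefully where a tool is unavailable.

```shell
#!/usr/bin/env bash
# Post-install sanity checks for the cilium + kube-router setup (sketch).

# 1. kube-router advertises PodCIDRs over BGP; its routes carry "proto 17".
command -v ip >/dev/null 2>&1 && ip route show proto 17

# 2. With kubeProxyReplacement=strict, no KUBE-SVC iptables chains should remain.
if command -v iptables-save >/dev/null 2>&1; then
  n=$(iptables-save 2>/dev/null | grep -c 'KUBE-SVC')
  echo "KUBE-SVC rules: ${n} (expected 0)"
fi

# 3. The NodePort (IP/port from 'cilium service list' above) is served by the
#    eBPF datapath; a plain HTTP probe from the node exercises it.
if command -v curl >/dev/null 2>&1; then
  curl -s -m 5 -o /dev/null -w 'NodePort HTTP status: %{http_code}\n' \
    http://172.16.13.81:30074 || echo "NodePort unreachable from this host"
fi

checks_done=yes
echo "sanity checks finished"
```

Each check maps back to a verification step shown earlier (`ip route list`, `iptables-save | grep KUBE-SVC`, and the `curl` against the nginx service), so a clean run confirms BGP routes, kube-proxy removal, and the eBPF service path in one pass.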