兜兜    2021-09-11 14:22:25    2022-01-25 09:21:02   

kubernetes CNI cilium
`Note: the cilium + kube-router combination is currently only in beta.`

#### Preparation

System environment:

```sh
Node type   IP             Hostname     OS              Kernel
Master      172.16.13.80   shudoon101   CentOS7.9.2009  5.4.144-1.el7.elrepo.x86_64
Node        172.16.13.81   shudoon102   CentOS7.9.2009  5.4.144-1.el7.elrepo.x86_64
Node        172.16.13.82   shudoon103   CentOS7.9.2009  5.4.144-1.el7.elrepo.x86_64
```

Software environment:

```sh
CentOS7 kernel 5.4.144 (cilium requires kernel 4.9.17+)
kubernetes 1.20
cilium (1.9.10): CNI network plugin, also provides the kube-proxy functionality
kube-router: used only for its BGP routing feature
```

Mount the BPF filesystem:

```sh
$ mount bpffs /sys/fs/bpf -t bpf
```

```sh
$ cat >> /etc/fstab <<EOF
bpffs /sys/fs/bpf bpf defaults 0 0
EOF
```

#### Installing Cilium

Quick install:

```sh
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml
```

Check the status:

```sh
$ kubectl -n kube-system get pods
```

```sh
NAME                               READY   STATUS    RESTARTS   AGE
cilium-csknd                       1/1     Running   0          87m
cilium-dwz2c                       1/1     Running   0          87m
cilium-nq8r9                       1/1     Running   1          87m
cilium-operator-59c9d5bc94-sh4c2   1/1     Running   0          87m
coredns-7f89b7bc75-bhgrp           1/1     Running   0          7m24s
coredns-7f89b7bc75-sfgvq           1/1     Running   0          7m24s
```

Deploy the connectivity test:

```sh
$ wget https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml
$ sed -i 's/google.com/baidu.com/g' connectivity-check.yaml   # change the external test address to baidu.com
$ kubectl apply -f connectivity-check.yaml
```

Check the pod status:

```sh
kubectl get pods
NAME                                                    READY   STATUS    RESTARTS   AGE
echo-a-dc9bcfd8f-hgc64                                  1/1     Running   0          9m59s
echo-b-5884b7dc69-bl5px                                 1/1     Running   0          9m59s
echo-b-host-cfdd57978-dg6gw                             1/1     Running   0          9m59s
host-to-b-multi-node-clusterip-c4ff7ff64-m9zwz          1/1     Running   0          9m58s
host-to-b-multi-node-headless-84d8f6f4c4-8b797          1/1     Running   1          9m57s
pod-to-a-5cdfd4754d-jgmnt                               1/1     Running   0          9m59s
pod-to-a-allowed-cnp-7d7c8f9f9b-f9lpc                   1/1     Running   0          9m58s
pod-to-a-denied-cnp-75cb89dfd-jsjd4                     1/1     Running   0          9m59s
pod-to-b-intra-node-nodeport-c6d79965d-w98jx            1/1     Running   1          9m57s
pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5           1/1     Running   0          9m58s
pod-to-b-multi-node-headless-6696c5f8cd-fvcsl           1/1     Running   1          9m58s
pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns           1/1     Running   0          9m57s
pod-to-external-1111-d5c7bb4c4-sflfc                    1/1     Running   0          9m59s
pod-to-external-fqdn-allow-google-cnp-dcb4d867d-dxqzx   1/1     Running   0          7m16s
```
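Beyond pod readiness, each agent can report its own health. A minimal sketch using one of the agent pods listed above (`cilium status --brief` prints a single `OK` when the datapath is healthy):

```sh
# Quick per-agent health check; substitute any cilium-* pod name
$ kubectl -n kube-system exec cilium-csknd -- cilium status --brief
```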
#### Installing Hubble

##### Installing the Hubble UI

Set the environment variable:

```sh
$ export CILIUM_NAMESPACE=kube-system
```

Quick install (`Cilium 1.9.2+`):

```sh
$ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-hubble-install.yaml
```

Helm install (`Cilium <1.9.2`):

```sh
# Set this to your installed Cilium version
$ export CILIUM_VERSION=1.9.10
# Please set any custom Helm values you may need for Cilium,
# such as for example `--set operator.replicas=1` on single-cluster nodes.
$ helm template cilium cilium/cilium --version $CILIUM_VERSION \
  --namespace $CILIUM_NAMESPACE \
  --set hubble.tls.auto.method="cronJob" \
  --set hubble.listenAddress=":4244" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true > cilium-with-hubble.yaml
# This will modify your existing Cilium DaemonSet and ConfigMap
$ kubectl apply -f cilium-with-hubble.yaml
```

Set up port forwarding:

```sh
$ kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80
```

##### Installing the Hubble CLI

Download the hubble CLI:

```sh
$ export CILIUM_NAMESPACE=kube-system
$ export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
$ curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz"
$ tar zxf hubble-linux-amd64.tar.gz
$ mv hubble /usr/local/bin
```

Start port forwarding for the relay:

```sh
$ kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80
```

Check the node status:

```sh
hubble --server localhost:4245 status
```

```sh
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 12288/12288 (100.00%)
Flows/s: 22.71
Connected Nodes: 3/3
```

Observe Cilium flows:

```sh
hubble --server localhost:4245 observe
```

```sh
Sep 11 06:52:06.119: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:06.122: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:06.122: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:06.123: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:06.123: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:06.793: default/pod-to-a-5cdfd4754d-jgmnt:54735 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:06.793: default/pod-to-a-5cdfd4754d-jgmnt:54735 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:06.793: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-a-5cdfd4754d-jgmnt:54735 to-overlay FORWARDED (UDP)
Sep 11 06:52:06.793: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-a-5cdfd4754d-jgmnt:54735 to-overlay FORWARDED (UDP)
Sep 11 06:52:06.903: kube-system/coredns-7f89b7bc75-bhgrp:32928 -> 172.16.13.80:6443 to-stack FORWARDED (TCP Flags: ACK)
Sep 11 06:52:06.903: kube-system/coredns-7f89b7bc75-bhgrp:32928 <- 172.16.13.80:6443 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.043: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.043: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.043: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.043: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.044: kube-system/hubble-relay-74b76459f9-d66h9:60564 -> 172.16.13.81:4244 to-stack FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.046: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.046: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.047: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.047: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.050: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.051: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.051: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.051: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.095: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <> kube-system/coredns-7f89b7bc75-bhgrp:53 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <> kube-system/coredns-7f89b7bc75-bhgrp:53 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <- kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <- kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.099: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.099: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.099: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.100: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.101: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.101: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.102: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.103: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.104: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.104: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: ACK, FIN)
```

Open http://127.0.0.1:12000/ in a browser to reach the Hubble UI.
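The dump above is everything the relay sees; `hubble observe` also accepts server-side filters, which is usually more practical. A minimal sketch (flag names as in the hubble CLI of this era; confirm with `hubble observe --help`):

```sh
# Follow only flows in the default namespace
hubble --server localhost:4245 observe --namespace default --follow
# Narrow further to one pod, keeping only dropped traffic (useful when debugging policies)
hubble --server localhost:4245 observe --pod default/echo-b-5884b7dc69-bl5px --verdict DROPPED
```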
#### Replacing kube-proxy with Cilium (`the default is Probe mode, in which both coexist`)

Remove kube-proxy:

```sh
$ kubectl -n kube-system delete ds kube-proxy   # delete the kube-proxy DaemonSet
# Delete the configmap as well to avoid kube-proxy being reinstalled during a kubeadm upgrade (works only for K8s 1.19 and newer)
$ kubectl -n kube-system delete cm kube-proxy
# Run on every node: flush the iptables rules left behind by kube-proxy
$ iptables-restore <(iptables-save | grep -v KUBE)
```

Uninstall the quick-install Cilium:

```sh
$ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml
```

Add the Cilium repo with helm3 and reinstall:

```sh
$ helm repo add cilium https://helm.cilium.io/
```

```sh
$ export REPLACE_WITH_API_SERVER_IP=172.16.13.80
$ export REPLACE_WITH_API_SERVER_PORT=6443
$ helm upgrade --install cilium cilium/cilium --version 1.9.10 \
  --namespace kube-system \
  --set kubeProxyReplacement=strict \
  --set k8sServiceHost=$REPLACE_WITH_API_SERVER_IP \
  --set k8sServicePort=$REPLACE_WITH_API_SERVER_PORT
```

Check the deployment mode:

```sh
$ kubectl -n kube-system get pods -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-9szpn   1/1     Running   0          45s
cilium-fllk6   1/1     Running   0          45s
cilium-sn2q5   1/1     Running   0          45s
$ kubectl exec -it -n kube-system cilium-9szpn -- cilium status | grep KubeProxyReplacement
KubeProxyReplacement:   Strict   [ens192 (Direct Routing)]   # Strict is the strict mode
```

Inspect Cilium's status:

```sh
$ kubectl exec -ti -n kube-system cilium-9szpn -- cilium status --verbose
```

```sh
KVStore:                Ok   Disabled
Kubernetes:             Ok   1.20 (v1.20.10) [linux/amd64]
Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   Strict   [ens192 (Direct Routing)]
Cilium:                 Ok   1.9.10 (v1.9.10-4e26039)
NodeMonitor:            Listening for events on 8 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok
IPAM:                   IPv4: 6/255 allocated from 10.244.1.0/24,
Allocated addresses:
  10.244.1.164 (health)
  10.244.1.169 (default/pod-to-b-intra-node-nodeport-c6d79965d-w98jx [restored])
  10.244.1.211 (default/busybox-deployment-99db9cf4d-dch94 [restored])
  10.244.1.217 (default/echo-b-5884b7dc69-bl5px [restored])
  10.244.1.221 (router)
  10.244.1.6 (default/busybox-deployment-99db9cf4d-pgxzz [restored])
BandwidthManager:       Disabled
Host Routing:           Legacy
Masquerading:           BPF   [ens192]   10.244.1.0/24
Clock Source for BPF:   ktime
Controller Status:      38/38 healthy
  Name                                         Last success   Last error   Count   Message
  cilium-health-ep                             42s ago        never        0       no error
  dns-garbage-collector-job                    53s ago        never        0       no error
  endpoint-1383-regeneration-recovery          never          never        0       no error
  endpoint-1746-regeneration-recovery          never          never        0       no error
  endpoint-1780-regeneration-recovery          never          never        0       no error
  endpoint-3499-regeneration-recovery          never          never        0       no error
  endpoint-546-regeneration-recovery           never          never        0       no error
  endpoint-777-regeneration-recovery           never          never        0       no error
  k8s-heartbeat                                23s ago        never        0       no error
  mark-k8s-node-as-available                   4m43s ago      never        0       no error
  metricsmap-bpf-prom-sync                     8s ago         never        0       no error
  neighbor-table-refresh                       4m43s ago      never        0       no error
  resolve-identity-777                         4m42s ago      never        0       no error
  restoring-ep-identity (1383)                 4m43s ago      never        0       no error
  restoring-ep-identity (1746)                 4m43s ago      never        0       no error
  restoring-ep-identity (1780)                 4m43s ago      never        0       no error
  restoring-ep-identity (3499)                 4m43s ago      never        0       no error
  restoring-ep-identity (546)                  4m43s ago      never        0       no error
  sync-endpoints-and-host-ips                  43s ago        never        0       no error
  sync-lb-maps-with-k8s-services               4m43s ago      never        0       no error
  sync-policymap-1383                          41s ago        never        0       no error
  sync-policymap-1746                          41s ago        never        0       no error
  sync-policymap-1780                          41s ago        never        0       no error
  sync-policymap-3499                          41s ago        never        0       no error
  sync-policymap-546                           40s ago        never        0       no error
  sync-policymap-777                           41s ago        never        0       no error
  sync-to-k8s-ciliumendpoint (1383)            3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (1746)            3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (1780)            3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (3499)            3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (546)             3s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (777)             12s ago        never        0       no error
  template-dir-watcher                         never          never        0       no error
  update-k8s-node-annotations                  4m51s ago      never        0       no error
  waiting-initial-global-identities-ep (1383)  4m43s ago      never        0       no error
  waiting-initial-global-identities-ep (1746)  4m43s ago      never        0       no error
  waiting-initial-global-identities-ep (1780)  4m43s ago      never        0       no error
  waiting-initial-global-identities-ep (3499)  4m43s ago      never        0       no error
Proxy Status:           OK, ip 10.244.1.221, 0 redirects active on ports 10000-20000
Hubble:                 Ok   Current/Max Flows: 3086/4096 (75.34%), Flows/s: 11.02   Metrics: Disabled
KubeProxyReplacement Details:
  Status:               Strict
  Protocols:            TCP, UDP
  Devices:              ens192 (Direct Routing)
  Mode:                 SNAT
  Backend Selection:    Random
  Session Affinity:     Enabled
  XDP Acceleration:     Disabled
  Services:             # Enabled below marks the supported service types
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767)
  - LoadBalancer:   Enabled
  - externalIPs:    Enabled
  - HostPort:       Enabled
BPF Maps:   dynamic sizing: on (ratio: 0.002500)
  Name                          Size
  Non-TCP connection tracking   73653
  TCP connection tracking       147306
  Endpoint policy               65535
  Events                        8
  IP cache                      512000
  IP masquerading agent         16384
  IPv4 fragmentation            8192
  IPv4 service                  65536
  IPv6 service                  65536
  IPv4 service backend          65536
  IPv6 service backend          65536
  IPv4 service reverse NAT      65536
  IPv6 service reverse NAT      65536
  Metrics                       1024
  NAT                           147306
  Neighbor table                147306
  Global policy                 16384
  Per endpoint policy           65536
  Session affinity              65536
  Signal                        8
  Sockmap                       65535
  Sock reverse NAT              73653
  Tunnel                        65536
Cluster health:         3/3 reachable   (2021-09-11T08:08:45Z)
  Name                     IP             Node        Endpoints
  shudoon102 (localhost)   172.16.13.81   reachable   reachable
  shudoon101               172.16.13.80   reachable   reachable
  shudoon103               172.16.13.82   reachable   reachable
```

Create an nginx service. The nginx Deployment:

```sh
cat nginx-deployment.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
```

Deploy it:

```sh
$ kubectl apply -f nginx-deployment.yaml
```

Check the nginx pods:

```sh
$ kubectl get pods -l run=my-nginx -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
my-nginx-5b56ccd65f-kdrhn   1/1     Running   0          3m36s   10.244.2.157   shudoon103   <none>           <none>
my-nginx-5b56ccd65f-ltxc6   1/1     Running   0          3m36s   10.244.1.9     shudoon102   <none>           <none>
```

```sh
$ kubectl get svc my-nginx
NAME       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-nginx   NodePort   10.97.255.81   <none>        80:30074/TCP   4h
```

Inspect the service entries generated by Cilium in eBPF:

```sh
kubectl exec -it -n kube-system cilium-9szpn -- cilium service list
ID   Frontend              Service Type   Backend
1    10.111.52.74:80       ClusterIP      1 => 10.244.2.249:8081
2    10.96.0.1:443         ClusterIP      1 => 172.16.13.80:6443
3    10.96.0.10:53         ClusterIP      1 => 10.244.0.37:53
                                          2 => 10.244.0.232:53
4    10.96.0.10:9153       ClusterIP      1 => 10.244.0.37:9153
                                          2 => 10.244.0.232:9153
5    10.102.133.211:8080   ClusterIP      1 => 10.244.2.129:8080
6    10.100.95.181:8080    ClusterIP      1 => 10.244.1.217:8080
7    0.0.0.0:31414         NodePort       1 => 10.244.1.217:8080
8    172.16.13.81:31414    NodePort       1 => 10.244.1.217:8080
9    10.97.255.81:80       ClusterIP      1 => 10.244.2.157:80   # ClusterIP 10.97.255.81 maps to the nginx pod backends
                                          2 => 10.244.1.9:80
10   0.0.0.0:30074         NodePort       1 => 10.244.2.157:80   # NodePort
                                          2 => 10.244.1.9:80
11   172.16.13.81:30074    NodePort       1 => 10.244.2.157:80
                                          2 => 10.244.1.9:80
12   10.98.83.147:80       ClusterIP      1 => 10.244.2.144:4245
13   172.16.13.81:40000    HostPort       1 => 10.244.1.217:8080
14   0.0.0.0:40000         HostPort       1 => 10.244.1.217:8080
```

Confirm that no KUBE-SVC iptables rules are generated:

```sh
$ iptables-save | grep KUBE-SVC
[ empty line ]
```

Test the nginx service:

```sh
$ curl http://10.97.255.81
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
```
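The eBPF NodePort path (service IDs 10 and 11 in the listing above) can be exercised the same way; node IP and port below are the ones from that listing:

```sh
# Expect the same nginx welcome page via the NodePort on a node IP
$ curl -I http://172.16.13.81:30074
```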
#### Integrating kube-router

```sh
$ curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/v0.4.0/daemonset/generic-kuberouter-only-advertise-routes.yaml
```

Edit the configuration file:

```sh
$ cat generic-kuberouter-only-advertise-routes.yaml
```

```yaml
...
      - "--run-router=true"
      - "--run-firewall=false"
      - "--run-service-proxy=false"
      - "--enable-cni=false"
      - "--enable-pod-egress=false"
      - "--enable-ibgp=true"
      - "--enable-overlay=true"
      # - "--peer-router-ips=<CHANGE ME>"
      # - "--peer-router-asns=<CHANGE ME>"
      # - "--cluster-asn=<CHANGE ME>"
      - "--advertise-cluster-ip=true"
      - "--advertise-external-ip=true"
      - "--advertise-loadbalancer-ip=true"
...
```

```sh
$ kubectl apply -f generic-kuberouter-only-advertise-routes.yaml
```

```sh
$ kubectl -n kube-system get pods -l k8s-app=kube-router
NAME                READY   STATUS    RESTARTS   AGE
kube-router-5q2xw   1/1     Running   0          72m
kube-router-cl9nc   1/1     Running   0          72m
kube-router-tmdnt   1/1     Running   0          72m
```

Reinstall Cilium with native routing:

```sh
$ helm delete cilium -n kube-system   # remove the old Helm release
$ export REPLACE_WITH_API_SERVER_IP=172.16.13.80
$ export REPLACE_WITH_API_SERVER_PORT=6443
$ helm install cilium cilium/cilium --version 1.9.10 \
  --namespace kube-system \
  --set ipam.mode=kubernetes \
  --set tunnel=disabled \
  --set nativeRoutingCIDR=172.16.13.0/24 \
  --set kubeProxyReplacement=strict \
  --set k8sServiceHost=$REPLACE_WITH_API_SERVER_IP \
  --set k8sServicePort=$REPLACE_WITH_API_SERVER_PORT
```

```sh
kubectl -n kube-system get pods -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-f22c2   1/1     Running   0          5m58s
cilium-ftg8n   1/1     Running   0          5m58s
cilium-s4mng   1/1     Running   0          5m58s
```

```sh
$ kubectl -n kube-system exec -ti cilium-s4mng -- ip route list scope global
default via 172.16.13.254 dev ens192 proto static metric 100
10.244.0.0/24 via 10.244.0.1 dev cilium_host src 10.244.0.1   # local PodCIDR
10.244.1.0/24 via 172.16.13.81 dev ens192 proto 17            # BGP route
10.244.2.0/24 via 172.16.13.82 dev ens192 proto 17            # BGP route
```
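To confirm these routes really were learned over BGP, the kube-router logs can be grepped; a sketch (the exact log wording varies across kube-router versions):

```sh
$ kubectl -n kube-system logs -l k8s-app=kube-router --tail=200 | grep -i bgp
```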
Verifying the installation

Download the cilium CLI:

```sh
$ curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
$ tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
```

Verify the Cilium installation:

```sh
$ cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         OK
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

Deployment        hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment        hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet         cilium             Desired: 3, Ready: 3/3, Available: 3/3
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Containers:       cilium-operator    Running: 2
                  hubble-relay       Running: 1
                  hubble-ui          Running: 1
                  cilium             Running: 3
Cluster Pods:     20/20 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.9.10: 3
                  cilium-operator    quay.io/cilium/operator-generic:v1.9.10: 2
                  hubble-relay       quay.io/cilium/hubble-relay:v1.9.10@sha256:f15bc1a1127be143c957158651141443c9fa14683426ef8789cf688fb94cae55: 1
                  hubble-ui          quay.io/cilium/hubble-ui:v0.7.3: 1
                  hubble-ui          quay.io/cilium/hubble-ui-backend:v0.7.3: 1
                  hubble-ui          docker.io/envoyproxy/envoy:v1.14.5: 1
```

Run the Kubernetes connectivity test for Cilium:

```sh
$ cilium connectivity test
```

Reference: https://docs.cilium.io/en/v1.9/gettingstarted/k8s-install-default/

兜兜    2021-09-06 00:30:47    2021-10-19 14:32:09   

kubernetes vxlan
Pod info in the K8s cluster:

```bash
$ kubectl get pod -o wide
```

```bash
NAME                                  READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
busybox-deployment-6576988595-dbpq7   1/1     Running   4          33h   10.244.1.12   k8s-node1   <none>           <none>
busybox-deployment-6576988595-l5w7r   1/1     Running   4          33h   10.244.2.14   k8s-node2   <none>           <none>
busybox-deployment-6576988595-wfvn2   1/1     Running   4          33h   10.244.2.13   k8s-node2   <none>           <none>
```

#### _**Experiment: `"pod-10.244.1.12 (k8s-node1)"` pings `"pod-10.244.2.14 (k8s-node2)"`; trace how the packet travels.**_

_**1. "10.244.1.12" pings "10.244.2.14"; the packet matches the default route 0.0.0.0, leaves via the container's eth0, and arrives at veth8001ebf4, the other end of the veth pair.**_

Attach to pod-10.244.1.12 with kubectl and ping 10.244.2.14:

```sh
$ kubectl exec -ti busybox-deployment-6576988595-dbpq7 sh
```

```sh
/ # ping 10.244.2.14
PING 10.244.2.14 (10.244.2.14): 56 data bytes
64 bytes from 10.244.2.14: seq=0 ttl=62 time=0.828 ms
```

Attach to pod-10.244.1.12 and inspect its routing table:

```sh
$ kubectl exec -ti busybox-deployment-6576988595-dbpq7 sh
```

```sh
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.244.1.1      0.0.0.0         UG    0      0        0 eth0   # the packet matches this route
10.244.0.0      10.244.1.1      255.255.0.0     UG    0      0        0 eth0
10.244.1.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
```

On k8s-node1, the other end of pod-10.244.1.12's eth0 veth pair is veth8001ebf4 [(how to find the veth interface of a container)](https://ynotes.cn/blog/article_detail/260); a quick way to confirm this is sketched after the capture below.

Capture packets on veth8001ebf4 on k8s-node1:

```sh
tcpdump -i veth8001ebf4
```

```sh
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on veth8001ebf4, link-type EN10MB (Ethernet), capture size 262144 bytes
00:30:00.124500 IP 10.244.1.12 > 10.244.2.14: ICMP echo request, id 1336, seq 495, length 64
```
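The linked article walks through the lookup in detail, but as a minimal inline sketch: every veth pair exposes its peer's interface index, so reading `iflink` inside the pod and matching that index on the node identifies the host-side interface (the index 7 below is illustrative):

```sh
# Peer ifindex of the pod's eth0 (illustrative output: 7)
$ kubectl exec busybox-deployment-6576988595-dbpq7 -- cat /sys/class/net/eth0/iflink
7
# On k8s-node1, the interface with that index is the host-side veth
$ ip -o link | grep '^7:'
7: veth8001ebf4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> ...
```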
_**2. veth8001ebf4 is bridged into cni0, so the packet is handed to cni0.**_

```sh
$ tcpdump -i cni0 -e -nnn -vvv
tcpdump: listening on cni0, link-type EN10MB (Ethernet), capture size 262144 bytes
01:32:29.522019 d6:10:b7:91:f0:ac > 0a:58:0a:f4:01:01, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 16442, offset 0, flags [DF], proto ICMP (1), length 84)
    10.244.1.12 > 10.244.2.14: ICMP echo request, id 1862, seq 89, length 64
```

_**3. From cni0 the host routing table (route -n) is consulted; the packet matches the 10.244.2.0 route via flannel.1.**_

```sh
[root@k8s-node1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.100.254  0.0.0.0         UG    100    0        0 ens192
10.244.0.0      10.244.0.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.1.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.2.0      10.244.2.0      255.255.255.0   UG    0      0        0 flannel.1   # this route matches
172.16.100.0    0.0.0.0         255.255.255.0   U     100    0        0 ens192
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
```

_**4. flannel.1 receives the frame from cni0 and: a. rewrites the inner frame's addresses (source MAC fe:90:c4:18:69:a7, the flannel.1 on k8s-node1; destination MAC b6:72:ae:36:7b:6c, the flannel.1 on k8s-node2); b. wraps it in a VXLAN header (VNI: 1); c. wraps that in a UDP header (destination port 8472); d. adds the outer node IP header.**_

![enter image description here](https://files.ynotes.cn/vxlan.png "enter image title here")

a. Rewrite the inner frame: the source MAC is that of flannel.1 on k8s-node1; the destination MAC is that of the 10.244.2.0 network's gateway, 10.244.2.0 (flannel.1 on k8s-node2):

```sh
arp -n | grep flannel.1
```

```sh
10.244.2.0    ether   b6:72:ae:36:7b:6c   CM    flannel.1   # MAC of the inner gateway 10.244.2.0
10.244.0.0    ether   6e:f8:85:d7:09:17   CM    flannel.1
```

b. Add the VXLAN header; the VNI is the vxlan id of the VTEP device.

c. Add the UDP header; the destination port is the VTEP's dstport.

Inspect the flannel.1 VTEP:

```sh
$ ip -d link show flannel.1
```

```sh
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether fe:90:c4:18:69:a7 brd ff:ff:ff:ff:ff:ff promiscuity 0
    vxlan id 1 local 172.16.100.101 dev ens192 srcport 0 0 dstport 8472 nolearning ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    # vxlan id is 1; the VTEP UDP dstport is 8472
```

d. Add the node IP header: the source IP is k8s-node1's, the destination IP is k8s-node2's. The destination node IP is found by looking up the VTEP MAC in the bridge forwarding database:

```sh
$ bridge fdb | grep flannel.1
```

```sh
e2:3f:07:99:cf:6f dev flannel.1 dst 172.16.100.102 self permanent
b6:72:ae:36:7b:6c dev flannel.1 dst 172.16.100.102 self permanent   # the VTEP MAC maps to node IP 172.16.100.102
6e:f8:85:d7:09:17 dev flannel.1 dst 172.16.100.100 self permanent
```

Capture on ens192 (after the VTEP has finished encapsulating) and analyze the packet contents:

```sh
tcpdump -l -nnnvvveXX -i ens192 'port 8472 and udp[8:2] = 0x0800 & 0x0800'
```

```sh
tcpdump: listening on ens192, link-type EN10MB (Ethernet), capture size 262144 bytes
02:09:45.801867 00:50:56:93:6a:3a > 00:50:56:93:63:3b, ethertype IPv4 (0x0800), length 148: (tos 0x0, ttl 64, id 30086, offset 0, flags [none], proto UDP (17), length 134)
    172.16.100.101.40592 > 172.16.100.102.8472: [no cksum] OTV, flags [I] (0x08), overlay 0, instance 1
fe:90:c4:18:69:a7 > b6:72:ae:36:7b:6c, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 8965, offset 0, flags [DF], proto ICMP (1), length 84)
    10.244.1.12 > 10.244.2.14: ICMP echo request, id 3143, seq 1102, length 64
0x0000:  0050 5693 633b 0050 5693 6a3a 0800 4500  # outer Ethernet header: 0050 5693 633b = dst MAC, 0050 5693 6a3a = src MAC, 0800 = IPv4; outer IP header: 4 = IPv4, 5 = header length in 32-bit words, 00 = TOS
0x0010:  0086 7586 0000 4011 e3f4 ac10 6465 ac10  # 0086 = total length, 7586 = identification, 0000 = flags/offset, 40 = TTL, 11 = protocol (UDP), e3f4 = checksum, ac10 6465 = src IP,
0x0020:  6466 9e90 2118 0072 0000 0800 0000 0000  # ac10 6466 = dst IP; UDP header: 9e90 = src port, 2118 = dst port (8472), 0072 = UDP length, 0000 = checksum; VXLAN header: 08 = flags, 00 0000 = reserved
0x0030:  0100 b672 ae36 7b6c fe90 c418 69a7 0800  # 0000 01 = VNI, 00 = reserved; inner Ethernet header: b672 ae36 7b6c = dst MAC, fe90 c418 69a7 = src MAC, 0800 = IPv4
0x0040:  4500 0054 2305 4000 3f01 ffa2 0af4 010c  # inner IP header: 4 = IPv4, 5 = header length, 00 = TOS, 0054 = total length, 2305 = identification, 4000 = flags/offset (DF), 3f = TTL, 01 = protocol (ICMP), ffa2 = checksum, 0af4 010c = src IP
0x0050:  0af4 020e 0800 847d 0c47 044e 0d1e 55cf  # 0af4 020e = dst IP; ICMP: 08 = echo request, 00 = code, 847d = checksum, 0c47 = identifier (3143), 044e = sequence (1102), 0d1e 55cf = start of payload
0x0060:  0000 0000 0000 0000 0000 0000 0000 0000  ................
0x0070:  0000 0000 0000 0000 0000 0000 0000 0000  ................
0x0080:  0000 0000 0000 0000 0000 0000 0000 0000  ................
0x0090:  0000 0000                                ....
```
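Decoding the hex by hand, as above, makes the encapsulation explicit; if tshark is available, the inner frame can also be dissected automatically. A sketch, assuming Wireshark's vxlan dissector is present (the explicit decode-as mapping is needed because flannel uses the Linux kernel default port 8472 rather than the IANA-assigned 4789):

```sh
# Decode UDP/8472 as VXLAN so the inner ICMP packet is shown dissected
tshark -i ens192 -f 'udp dst port 8472' -d 'udp.port==8472,vxlan'
```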

兜兜    2021-09-03 11:00:47    2022-01-25 09:30:51   

k8s kubernetes
#### Environment

`Kubernetes 1.18.8`

#### 1. Download kube-prometheus

```bash
git clone https://github.com/coreos/kube-prometheus.git
cd kube-prometheus
git checkout -b release-0.6 remotes/origin/release-0.6   # release-0.6 supports 1.18+
```

#### 2. Install kube-prometheus

##### Install the kube-prometheus resources and services

```bash
# Create the namespace and CRDs, and then wait for them to be available before creating the remaining resources
kubectl create -f manifests/setup
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
kubectl create -f manifests/
```

##### Uninstall the kube-prometheus resources and services

```bash
kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
```

#### 3. Configure access

Prometheus:

```bash
$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
```

http://localhost:9090

Grafana:

```bash
$ kubectl --namespace monitoring port-forward svc/grafana 3000
```

http://localhost:3000, credentials: admin:admin

AlertManager:

```bash
$ kubectl --namespace monitoring port-forward svc/alertmanager-main 9093
```

http://localhost:9093

#### 4. Add DingTalk alerting

4.1 Deploy webhook-dingtalk in k8s

```bash
$ cat dingtalk.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webhook-dingtalk
  name: webhook-dingtalk
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhook-dingtalk
  template:
    metadata:
      labels:
        app: webhook-dingtalk
    spec:
      containers:
      - image: billy98/webhook-dingtalk:latest
        name: webhook-dingtalk
        args:
        - "https://oapi.dingtalk.com/robot/send?access_token=xxxxxxxxxxx"   # replace access_token with your own
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 500Mi
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
          tcpSocket:
            port: 8080
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
          httpGet:
            port: 8080
            path: /
      imagePullSecrets:
      - name: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webhook-dingtalk
  name: webhook-dingtalk
  namespace: monitoring
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: webhook-dingtalk
  type: ClusterIP
```

```bash
$ kubectl create -f dingtalk.yaml
```

4.2 Fetch the alertmanager configuration

```bash
$ kubectl get secret alertmanager-main -n monitoring -o yaml   # the value of alertmanager.yaml is the config; decode it with base64 -d
apiVersion: v1
data:
  alertmanager.yaml: Imdsb2JhbCI6CiAgInJlc29sdmVfdGltZW91dCI6ICI1bSIKImluaGliaXRfcnVsZXMiOgotICJlcXVhbCI6CiAgLSAibmFtZXNwYWNlIgogIC0gImFsZXJ0bmFtZSIKICAic291cmNlX21hdGNoIjoKICAgICJzZXZlcml0eSI6ICJjcml0aWNhbCIKICAidGFyZ2V0X21hdGNoX3JlIjoKICAgICJzZXZlcml0eSI6ICJ3YXJuaW5nfGluZm8iCi0gImVxdWFsIjoKICAtICJuYW1lc3BhY2UiCiAgLSAiYWxlcnRuYW1lIgogICJzb3VyY2VfbWF0Y2giOgogICAgInNldmVyaXR5IjogIndhcm5pbmciCiAgInRhcmdldF9tYXRjaF9yZSI6CiAgICAic2V2ZXJpdHkiOiAiaW5mbyIKInJlY2VpdmVycyI6Ci0gIm5hbWUiOiAiRGVmYXVsdCIKLSAibmFtZSI6ICJXYXRjaGRvZyIKLSAibmFtZSI6ICJDcml0aWNhbCIKLSAibmFtZSI6ICJXZWJob29rIgogICJ3ZWJob29rX2NvbmZpZ3MiOgogIC0gInVybCI6ICJodHRwOi8vd2ViaG9vay1kaW5ndGFsay9kaW5ndGFsay9zZW5kLyIKICAgICJzZW5kX3Jlc29sdmVkIjogdHJ1ZQoicm91dGUiOgogICJncm91cF9ieSI6CiAgLSAibmFtZXNwYWNlIgogICJncm91cF9pbnRlcnZhbCI6ICI1bSIKICAiZ3JvdXBfd2FpdCI6ICIzMHMiCiAgInJlY2VpdmVyIjogIldlYmhvb2siCiAgInJlcGVhdF9pbnRlcnZhbCI6ICIxMmgiCiAgInJvdXRlcyI6CiAgLSAibWF0Y2giOgogICAgICAiYWxlcnRuYW1lIjogIldhdGNoZG9nIgogICAgInJlY2VpdmVyIjogIldhdGNoZG9nIgogIC0gIm1hdGNoIjoKICAgICAgInNldmVyaXR5IjogImNyaXRpY2FsIgogICAgInJlY2VpdmVyIjogIkNyaXRpY2FsIgo=
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"alertmanager.yaml":"Imdsb2JhbCI6CiAgInJlc29sdmVfdGltZW91dCI6ICI1bSIKImluaGliaXRfcnVsZXMiOgotICJlcXVhbCI6CiAgLSAibmFtZXNwYWNlIgogIC0gImFsZXJ0bmFtZSIKICAic291cmNlX21hdGNoIjoKICAgICJzZXZlcml0eSI6ICJjcml0aWNhbCIKICAidGFyZ2V0X21hdGNoX3JlIjoKICAgICJzZXZlcml0eSI6ICJ3YXJuaW5nfGluZm8iCi0gImVxdWFsIjoKICAtICJuYW1lc3BhY2UiCiAgLSAiYWxlcnRuYW1lIgogICJzb3VyY2VfbWF0Y2giOgogICAgInNldmVyaXR5IjogIndhcm5pbmciCiAgInRhcmdldF9tYXRjaF9yZSI6CiAgICAic2V2ZXJpdHkiOiAiaW5mbyIKInJlY2VpdmVycyI6Ci0gIm5hbWUiOiAiRGVmYXVsdCIKLSAibmFtZSI6ICJXYXRjaGRvZyIKLSAibmFtZSI6ICJDcml0aWNhbCIKLSAibmFtZSI6ICJXZWJob29rIgogICJ3ZWJob29rX2NvbmZpZ3MiOgogIC0gInVybCI6ICJodHRwOi8vd2ViaG9vay1kaW5ndGFsay9kaW5ndGFsay9zZW5kLyIKICAgICJzZW5kX3Jlc29sdmVkIjogdHJ1ZQoicm91dGUiOgogICJncm91cF9ieSI6CiAgLSAibmFtZXNwYWNlIgogICJncm91cF9pbnRlcnZhbCI6ICI1bSIKICAiZ3JvdXBfd2FpdCI6ICIzMHMiCiAgInJlY2VpdmVyIjogIldlYmhvb2siCiAgInJlcGVhdF9pbnRlcnZhbCI6ICIxMmgiCiAgInJvdXRlcyI6CiAgLSAibWF0Y2giOgogICAgICAiYWxlcnRuYW1lIjogIldhdGNoZG9nIgogICAgInJlY2VpdmVyIjogIldhdGNoZG9nIgogIC0gIm1hdGNoIjoKICAgICAgInNldmVyaXR5IjogImNyaXRpY2FsIgogICAgInJlY2VpdmVyIjogIkNyaXRpY2FsIgo="},"kind":"Secret","metadata":{"annotations":{},"creationTimestamp":"2021-09-02T15:58:11Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:alertmanager.yaml":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:type":{}},"manager":"kubectl","operation":"Update","time":"2021-09-02T18:21:15Z"}],"name":"alertmanager-main","namespace":"monitoring","resourceVersion":"455008700","selfLink":"/api/v1/namespaces/monitoring/secrets/alertmanager-main","uid":"a7339c20-1b65-49f7-8b2e-db0459f14155"},"type":"Opaque"}
  creationTimestamp: "2021-09-03T02:06:04Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:alertmanager.yaml: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:type: {}
    manager: kubectl
    operation: Update
    time: "2021-09-03T02:06:04Z"
  name: alertmanager-main
  namespace: monitoring
  resourceVersion: "457308386"
  selfLink: /api/v1/namespaces/monitoring/secrets/alertmanager-main
  uid: 6be76878-ac5d-4b6c-8f5a-e6928c1b4d67
type: Opaque
```

4.3 Edit the decoded configuration

```yaml
"global":
  "resolve_timeout": "5m"
"inhibit_rules":
- "equal":
  - "namespace"
  - "alertname"
  "source_match":
    "severity": "critical"
  "target_match_re":
    "severity": "warning|info"
- "equal":
  - "namespace"
  - "alertname"
  "source_match":
    "severity": "warning"
  "target_match_re":
    "severity": "info"
"receivers":
- "name": "Default"
- "name": "Watchdog"
- "name": "Critical"
- "name": "Webhook"   # the newly added webhook receiver
  "webhook_configs":
  - "url": "http://webhook-dingtalk/dingtalk/send/"   # the webhook-dingtalk service deployed above
    "send_resolved": true
"route":
  "group_by":
  - "namespace"
  "group_interval": "5m"
  "group_wait": "30s"
  "receiver": "Webhook"   # Webhook is now the default receiver
  "repeat_interval": "12h"
  "routes":
  - "match":
      "alertname": "Watchdog"
    "receiver": "Watchdog"
  - "match":
      "severity": "critical"
    "receiver": "Critical"
```

4.4 Update the configuration

```bash
$ kubectl edit secret alertmanager-main -n monitoring   # base64-encode the updated content above and replace the old value
```
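Instead of hand-editing base64 inside `kubectl edit`, the round trip can be scripted; a sketch (the local file name `alertmanager.yaml` is arbitrary):

```bash
# Decode the current config to a local file
$ kubectl -n monitoring get secret alertmanager-main \
    -o jsonpath='{.data.alertmanager\.yaml}' | base64 -d > alertmanager.yaml
# ... edit alertmanager.yaml as in step 4.3 ...
# Re-encode without line wrapping; paste the output over the old value in `kubectl edit`
$ base64 -w0 alertmanager.yaml
```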
4.5 Redeploy Alertmanager.

#### 5. Problems

`a. The node-exporter pods stay in Pending?`

_Fix: the logs report spec.template.spec.containers[1].ports[1].name: Duplicate value: "https" — a port conflict; change port 9100 to 19100:_

```bash
$ sed -i s/9100/19100/g manifests/node-exporter-daemonset.yaml
$ kubectl delete -f manifests/node-exporter-daemonset.yaml
$ kubectl create -f manifests/node-exporter-daemonset.yaml
```
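After recreating the DaemonSet, a quick check that the pods actually leave Pending (grep keeps it independent of the exact labels):

```bash
$ kubectl -n monitoring get pods -o wide | grep node-exporter
```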

兜兜    2021-08-26 14:51:18    2021-09-10 12:44:37   

k8s kubernetes
Taking a user request to the gateway service at https://gw.example.com as an example, the packet traverses the following path:

`CLIENT -> Alibaba Cloud SLB -> K8S NODE (IPVS/kube-proxy) -> INGRESS POD (nginx controller) -> GATEWAY POD (gateway service)`

```bash
CLIENT IP:      CLIENT_IP
SLB IP:         47.107.x.x
K8S NODE IP:    172.18.238.85
INGRESS POD IP: 10.151.0.78
GATEWAY POD IP: 10.151.0.107
```

#### 1. CLIENT --> Alibaba Cloud SLB

```bash
gw.example.com resolves to 47.107.x.x (the SLB public IP);
the packet reaches the Alibaba Cloud SLB (CLIENT_IP:RANDOM_PORT ----> 47.107.x.x:443)
```

#### 2. Alibaba Cloud SLB --> K8S NODE (IPVS/kube-proxy)

```bash
The SLB is configured with the backend virtual server: TCP:443 --> 172.18.238.85:30483
The packet reaches the K8S node (CLIENT_IP:RANDOM_PORT ----> 172.18.238.85:30483)
```

Capture on the K8S node:

```bash
$ tcpdump -i eth0 ip host CLIENT_IP -n
14:39:33.043508 IP CLIENT_IP.RANDOM_PORT > 172.18.238.85.30483: Flags [S], seq 1799504552, win 29200, options [mss 1460,sackOK,TS val 1092093183 ecr 0,nop,wscale 7], length 0
```

#### 3. K8S NODE (IPVS/kube-proxy) --> INGRESS POD (nginx controller)

IPVS backend configuration:

```bash
$ ipvsadm -L -n
TCP  172.18.238.85:30483 rr
  -> 10.151.0.78:443    Masq   1   2   40
  -> 10.151.0.83:443    Masq   1   8   42
```

```bash
The packet reaches the nginx ingress (CLIENT_IP:RANDOM_PORT ----> 10.151.0.78:443)
```

Capture the nginx ingress traffic on the K8S node ([how to capture a pod's packets](https://ynotes.cn/blog/article_detail/260)):

```bash
$ tcpdump -i vethfe247b7f -nnn | grep "\.443"   # vethfe247b7f is the ingress controller pod's interface
16:45:28.687578 IP CLIENT_IP.RANDOM_PORT > 10.151.0.78.443: Flags [S], seq 2547516746, win 29200, options [mss 1460,sackOK,TS val 1099648828 ecr 0,nop,wscale 7], length 0
```

#### 4. INGRESS POD (nginx controller) --> GATEWAY POD (gateway service)

```bash
$ kubectl get pods -o wide --all-namespaces | grep 10.151.0.78
kube-system   nginx-ingress-controller-8489c5b8c4-fccs5   1/1   Running   1   49d   10.151.0.78   cn-shenzhen.172.18.238.85   <none>   <none>
```

```bash
The packet reaches the gateway service (10.151.0.78:57270 ----> 10.151.0.107:18880)
```

Capture the gateway service traffic on the K8S node:

```bash
$ tcpdump -i veth553c1000 -nnn port 18880
17:05:58.463497 IP 10.151.0.78.57270 > 10.151.0.107.18880: Flags [S], seq 3538162899, win 65535, options [mss 1460,sackOK,TS val 878505289 ecr 0,nop,wscale 9], length 0
```
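The DNAT performed by IPVS in step 3 can also be read straight from its connection table on the node; a sketch using ipvsadm's connection listing:

```bash
# Each entry maps CLIENT_IP -> 172.18.238.85:30483 -> one of the ingress pod IPs
$ ipvsadm -L -n -c | grep 30483
```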

兜兜    2021-07-17 17:31:15    2021-07-17 17:31:15   

kubernetes