`Note: the cilium + kube-router combination is currently only in beta.`
#### Preparation
System environment

| Node type | IP | Hostname | OS | Kernel |
| --- | --- | --- | --- | --- |
| Master | 172.16.13.80 | shudoon101 | CentOS 7.9.2009 | 5.4.144-1.el7.elrepo.x86_64 |
| Node | 172.16.13.81 | shudoon102 | CentOS 7.9.2009 | 5.4.144-1.el7.elrepo.x86_64 |
| Node | 172.16.13.82 | shudoon103 | CentOS 7.9.2009 | 5.4.144-1.el7.elrepo.x86_64 |
Software environment
```sh
CentOS 7, kernel 5.4.144 (Cilium requires kernel 4.9.17 or newer)
Kubernetes 1.20
cilium (1.9.10): CNI plugin; also provides the kube-proxy replacement
kube-router: used only for its BGP routing feature
```
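Cilium's eBPF datapath depends on the kernel version, so confirm each node is actually running the upgraded kernel before installing:

```sh
$ uname -r
5.4.144-1.el7.elrepo.x86_64
```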
Mount the BPF filesystem
```sh
$ mount bpffs /sys/fs/bpf -t bpf
```
Make the mount persistent across reboots:
```sh
$ cat >> /etc/fstab <<EOF
bpffs /sys/fs/bpf bpf defaults 0 0
EOF
```
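Confirm the mount is in place; the output should look similar to this:

```sh
$ mount | grep /sys/fs/bpf
bpffs on /sys/fs/bpf type bpf (rw,relatime)
```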
#### Install Cilium
Quick install
```sh
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml
```
Check the status
```sh
$ kubectl -n kube-system get pods
```
```sh
NAME READY STATUS RESTARTS AGE
cilium-csknd 1/1 Running 0 87m
cilium-dwz2c 1/1 Running 0 87m
cilium-nq8r9 1/1 Running 1 87m
cilium-operator-59c9d5bc94-sh4c2 1/1 Running 0 87m
coredns-7f89b7bc75-bhgrp 1/1 Running 0 7m24s
coredns-7f89b7bc75-sfgvq 1/1 Running 0 7m24s
```
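The pod list only shows readiness; the agent's own health report is more direct. Querying one of the pods listed above:

```sh
$ kubectl -n kube-system exec cilium-csknd -- cilium status --brief
OK
```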
Deploy the connectivity test
```sh
$ wget https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml
$ sed -i 's/google.com/baidu.com/g' connectivity-check.yaml # point the external-traffic checks at baidu.com instead of google.com
$ kubectl apply -f connectivity-check.yaml
```
Check the pod status
```sh
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
echo-a-dc9bcfd8f-hgc64 1/1 Running 0 9m59s
echo-b-5884b7dc69-bl5px 1/1 Running 0 9m59s
echo-b-host-cfdd57978-dg6gw 1/1 Running 0 9m59s
host-to-b-multi-node-clusterip-c4ff7ff64-m9zwz 1/1 Running 0 9m58s
host-to-b-multi-node-headless-84d8f6f4c4-8b797 1/1 Running 1 9m57s
pod-to-a-5cdfd4754d-jgmnt 1/1 Running 0 9m59s
pod-to-a-allowed-cnp-7d7c8f9f9b-f9lpc 1/1 Running 0 9m58s
pod-to-a-denied-cnp-75cb89dfd-jsjd4 1/1 Running 0 9m59s
pod-to-b-intra-node-nodeport-c6d79965d-w98jx 1/1 Running 1 9m57s
pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5 1/1 Running 0 9m58s
pod-to-b-multi-node-headless-6696c5f8cd-fvcsl 1/1 Running 1 9m58s
pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns 1/1 Running 0 9m57s
pod-to-external-1111-d5c7bb4c4-sflfc 1/1 Running 0 9m59s
pod-to-external-fqdn-allow-google-cnp-dcb4d867d-dxqzx 1/1 Running 0 7m16s
```
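Once everything reports `Running`, the test workloads can be removed whenever you are done with them (this walkthrough keeps them running for later output):

```sh
$ kubectl delete -f connectivity-check.yaml
```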
#### Install Hubble
##### Install the Hubble UI
Set the environment variable
```sh
$ export CILIUM_NAMESPACE=kube-system
```
Quick install
`Cilium 1.9.2+`
```sh
$ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-hubble-install.yaml
```
Helm install
`Cilium <1.9.2`
```sh
# Set this to your installed Cilium version
$ export CILIUM_VERSION=1.9.10
# Please set any custom Helm values you may need for Cilium,
# such as for example `--set operator.replicas=1` on single-cluster nodes.
$ helm template cilium cilium/cilium --version $CILIUM_VERSION \
  --namespace $CILIUM_NAMESPACE \
  --set hubble.tls.auto.method="cronJob" \
  --set hubble.listenAddress=":4244" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true > cilium-with-hubble.yaml
# This will modify your existing Cilium DaemonSet and ConfigMap
$ kubectl apply -f cilium-with-hubble.yaml
```
Set up port forwarding
```sh
$ kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80
```
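`kubectl port-forward` runs in the foreground; to keep the UI reachable while you continue in the same shell, one option is to background it (a sketch, log path arbitrary):

```sh
$ nohup kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui \
    --address 0.0.0.0 --address :: 12000:80 >/tmp/hubble-ui.log 2>&1 &
```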
##### Install the Hubble CLI
Download the Hubble CLI
```sh
$ export CILIUM_NAMESPACE=kube-system
$ export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
$ curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz"
$ tar zxf hubble-linux-amd64.tar.gz
$ mv hubble /usr/local/bin
```
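Verify the binary is on the PATH:

```sh
$ hubble version
```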
Start port forwarding for Hubble Relay
```sh
$ kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80
```
Check the node status
```sh
$ hubble --server localhost:4245 status
```
```sh
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 12288/12288 (100.00%)
Flows/s: 22.71
Connected Nodes: 3/3
```
Observe Cilium flow events
```sh
$ hubble --server localhost:4245 observe
```
```sh
Sep 11 06:52:06.119: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:06.122: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:06.122: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:06.123: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:06.123: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:06.793: default/pod-to-a-5cdfd4754d-jgmnt:54735 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:06.793: default/pod-to-a-5cdfd4754d-jgmnt:54735 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:06.793: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-a-5cdfd4754d-jgmnt:54735 to-overlay FORWARDED (UDP)
Sep 11 06:52:06.793: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-a-5cdfd4754d-jgmnt:54735 to-overlay FORWARDED (UDP)
Sep 11 06:52:06.903: kube-system/coredns-7f89b7bc75-bhgrp:32928 -> 172.16.13.80:6443 to-stack FORWARDED (TCP Flags: ACK)
Sep 11 06:52:06.903: kube-system/coredns-7f89b7bc75-bhgrp:32928 <- 172.16.13.80:6443 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.043: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.043: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.043: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.043: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.044: kube-system/hubble-relay-74b76459f9-d66h9:60564 -> 172.16.13.81:4244 to-stack FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.046: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.046: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.047: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.047: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.050: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.051: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.051: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.051: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.095: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <> kube-system/coredns-7f89b7bc75-bhgrp:53 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <> kube-system/coredns-7f89b7bc75-bhgrp:53 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 to-overlay FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <- kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <- kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP)
Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.099: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN)
Sep 11 06:52:07.099: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: SYN, ACK)
Sep 11 06:52:07.099: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Sep 11 06:52:07.100: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.101: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.101: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.102: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.103: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: ACK, PSH)
Sep 11 06:52:07.104: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Sep 11 06:52:07.104: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: ACK, FIN)
```
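The unfiltered stream is hard to read; `hubble observe` accepts filters, for example limiting output to one namespace and following it live:

```sh
$ hubble --server localhost:4245 observe --namespace default --follow
```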
Open http://127.0.0.1:12000/ in a browser to reach the Hubble UI.
#### Replace kube-proxy with Cilium (`the default is Probe mode, which coexists with kube-proxy`)
Remove kube-proxy
```sh
$ kubectl -n kube-system delete ds kube-proxy # delete the kube-proxy DaemonSet
# Delete the ConfigMap as well to avoid kube-proxy being reinstalled during a kubeadm upgrade (works only for K8s 1.19 and newer)
$ kubectl -n kube-system delete cm kube-proxy
# Run on every node: flush the iptables rules kube-proxy left behind
$ iptables-restore <(iptables-save | grep -v KUBE)
```
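A quick sanity check that kube-proxy is really gone:

```sh
$ kubectl -n kube-system get pods -l k8s-app=kube-proxy
No resources found in kube-system namespace.
```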
Uninstall the quick-install Cilium
```sh
$ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml
```
Re-install Cilium with Helm 3; first add the chart repo
```sh
$ helm repo add cilium https://helm.cilium.io/
```
```sh
$ export REPLACE_WITH_API_SERVER_IP=172.16.13.80
$ export REPLACE_WITH_API_SERVER_PORT=6443
$ helm upgrade --install cilium cilium/cilium --version 1.9.10 \
--namespace kube-system \
--set kubeProxyReplacement=strict \
--set k8sServiceHost=$REPLACE_WITH_API_SERVER_IP \
--set k8sServicePort=$REPLACE_WITH_API_SERVER_PORT
```
Check the replacement mode
```sh
$ kubectl -n kube-system get pods -l k8s-app=cilium
NAME READY STATUS RESTARTS AGE
cilium-9szpn 1/1 Running 0 45s
cilium-fllk6 1/1 Running 0 45s
cilium-sn2q5 1/1 Running 0 45s
$ kubectl exec -it -n kube-system cilium-9szpn -- cilium status | grep KubeProxyReplacement
KubeProxyReplacement: Strict [ens192 (Direct Routing)] # Strict = full kube-proxy replacement
```
Check Cilium's detailed status
```sh
$ kubectl exec -ti -n kube-system cilium-9szpn -- cilium status --verbose
```
```sh
KVStore: Ok Disabled
Kubernetes: Ok 1.20 (v1.20.10) [linux/amd64]
Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Strict [ens192 (Direct Routing)]
Cilium: Ok 1.9.10 (v1.9.10-4e26039)
NodeMonitor: Listening for events on 8 CPUs with 64x4096 of shared memory
Cilium health daemon: Ok
IPAM: IPv4: 6/255 allocated from 10.244.1.0/24,
Allocated addresses:
10.244.1.164 (health)
10.244.1.169 (default/pod-to-b-intra-node-nodeport-c6d79965d-w98jx [restored])
10.244.1.211 (default/busybox-deployment-99db9cf4d-dch94 [restored])
10.244.1.217 (default/echo-b-5884b7dc69-bl5px [restored])
10.244.1.221 (router)
10.244.1.6 (default/busybox-deployment-99db9cf4d-pgxzz [restored])
BandwidthManager: Disabled
Host Routing: Legacy
Masquerading: BPF [ens192] 10.244.1.0/24
Clock Source for BPF: ktime
Controller Status: 38/38 healthy
Name Last success Last error Count Message
cilium-health-ep 42s ago never 0 no error
dns-garbage-collector-job 53s ago never 0 no error
endpoint-1383-regeneration-recovery never never 0 no error
endpoint-1746-regeneration-recovery never never 0 no error
endpoint-1780-regeneration-recovery never never 0 no error
endpoint-3499-regeneration-recovery never never 0 no error
endpoint-546-regeneration-recovery never never 0 no error
endpoint-777-regeneration-recovery never never 0 no error
k8s-heartbeat 23s ago never 0 no error
mark-k8s-node-as-available 4m43s ago never 0 no error
metricsmap-bpf-prom-sync 8s ago never 0 no error
neighbor-table-refresh 4m43s ago never 0 no error
resolve-identity-777 4m42s ago never 0 no error
restoring-ep-identity (1383) 4m43s ago never 0 no error
restoring-ep-identity (1746) 4m43s ago never 0 no error
restoring-ep-identity (1780) 4m43s ago never 0 no error
restoring-ep-identity (3499) 4m43s ago never 0 no error
restoring-ep-identity (546) 4m43s ago never 0 no error
sync-endpoints-and-host-ips 43s ago never 0 no error
sync-lb-maps-with-k8s-services 4m43s ago never 0 no error
sync-policymap-1383 41s ago never 0 no error
sync-policymap-1746 41s ago never 0 no error
sync-policymap-1780 41s ago never 0 no error
sync-policymap-3499 41s ago never 0 no error
sync-policymap-546 40s ago never 0 no error
sync-policymap-777 41s ago never 0 no error
sync-to-k8s-ciliumendpoint (1383) 3s ago never 0 no error
sync-to-k8s-ciliumendpoint (1746) 3s ago never 0 no error
sync-to-k8s-ciliumendpoint (1780) 3s ago never 0 no error
sync-to-k8s-ciliumendpoint (3499) 3s ago never 0 no error
sync-to-k8s-ciliumendpoint (546) 3s ago never 0 no error
sync-to-k8s-ciliumendpoint (777) 12s ago never 0 no error
template-dir-watcher never never 0 no error
update-k8s-node-annotations 4m51s ago never 0 no error
waiting-initial-global-identities-ep (1383) 4m43s ago never 0 no error
waiting-initial-global-identities-ep (1746) 4m43s ago never 0 no error
waiting-initial-global-identities-ep (1780) 4m43s ago never 0 no error
waiting-initial-global-identities-ep (3499) 4m43s ago never 0 no error
Proxy Status: OK, ip 10.244.1.221, 0 redirects active on ports 10000-20000
Hubble: Ok Current/Max Flows: 3086/4096 (75.34%), Flows/s: 11.02 Metrics: Disabled
KubeProxyReplacement Details:
Status: Strict
Protocols: TCP, UDP
Devices: ens192 (Direct Routing)
Mode: SNAT
Backend Selection: Random
Session Affinity: Enabled
XDP Acceleration: Disabled
Services: # Enabled below marks the service types Cilium handles
- ClusterIP: Enabled
- NodePort: Enabled (Range: 30000-32767)
- LoadBalancer: Enabled
- externalIPs: Enabled
- HostPort: Enabled
BPF Maps: dynamic sizing: on (ratio: 0.002500)
Name Size
Non-TCP connection tracking 73653
TCP connection tracking 147306
Endpoint policy 65535
Events 8
IP cache 512000
IP masquerading agent 16384
IPv4 fragmentation 8192
IPv4 service 65536
IPv6 service 65536
IPv4 service backend 65536
IPv6 service backend 65536
IPv4 service reverse NAT 65536
IPv6 service reverse NAT 65536
Metrics 1024
NAT 147306
Neighbor table 147306
Global policy 16384
Per endpoint policy 65536
Session affinity 65536
Signal 8
Sockmap 65535
Sock reverse NAT 73653
Tunnel 65536
Cluster health: 3/3 reachable (2021-09-11T08:08:45Z)
Name IP Node Endpoints
shudoon102 (localhost) 172.16.13.81 reachable reachable
shudoon101 172.16.13.80 reachable reachable
shudoon103 172.16.13.82 reachable reachable
```
Create an nginx service
The nginx Deployment:
```sh
$ cat nginx-deployment.yaml
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
```
Apply the deployment
```sh
$ kubectl apply -f nginx-deployment.yaml
```
Check the nginx pods
```sh
$ kubectl get pods -l run=my-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-nginx-5b56ccd65f-kdrhn 1/1 Running 0 3m36s 10.244.2.157 shudoon103 <none> <none>
my-nginx-5b56ccd65f-ltxc6 1/1 Running 0 3m36s 10.244.1.9 shudoon102 <none> <none>
```
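The next command inspects a Service named `my-nginx` whose creation is not shown above; presumably the Deployment was exposed along these lines (a sketch, the NodePort 30074 seen below is auto-assigned):

```sh
$ kubectl expose deployment my-nginx --type=NodePort --port=80
```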
```sh
$ kubectl get svc my-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx NodePort 10.97.255.81 <none> 80:30074/TCP 4h
```
Inspect the service entries Cilium programs into its eBPF maps
```sh
$ kubectl exec -it -n kube-system cilium-9szpn -- cilium service list
ID Frontend Service Type Backend
1 10.111.52.74:80 ClusterIP 1 => 10.244.2.249:8081
2 10.96.0.1:443 ClusterIP 1 => 172.16.13.80:6443
3 10.96.0.10:53 ClusterIP 1 => 10.244.0.37:53
2 => 10.244.0.232:53
4 10.96.0.10:9153 ClusterIP 1 => 10.244.0.37:9153
2 => 10.244.0.232:9153
5 10.102.133.211:8080 ClusterIP 1 => 10.244.2.129:8080
6 10.100.95.181:8080 ClusterIP 1 => 10.244.1.217:8080
7 0.0.0.0:31414 NodePort 1 => 10.244.1.217:8080
8 172.16.13.81:31414 NodePort 1 => 10.244.1.217:8080
9 10.97.255.81:80 ClusterIP 1 => 10.244.2.157:80 # ClusterIP 10.97.255.81 maps to the pod backends
2 => 10.244.1.9:80
10 0.0.0.0:30074 NodePort 1 => 10.244.2.157:80 #NodePort
2 => 10.244.1.9:80
11 172.16.13.81:30074 NodePort 1 => 10.244.2.157:80
2 => 10.244.1.9:80
12 10.98.83.147:80 ClusterIP 1 => 10.244.2.144:4245
13 172.16.13.81:40000 HostPort 1 => 10.244.1.217:8080
14 0.0.0.0:40000 HostPort 1 => 10.244.1.217:8080
```
Verify that no kube-proxy iptables service rules remain
```sh
$ iptables-save | grep KUBE-SVC
# (no output: service load balancing is handled entirely in eBPF)
```
Test the nginx service
```sh
$ curl http://10.97.255.81
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
```
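The NodePort path can be exercised the same way, using a node IP and the port shown by `kubectl get svc` above:

```sh
$ curl http://172.16.13.81:30074
```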
#### Integrate kube-router
Download the kube-router manifest (BGP route advertisement only):
```sh
$ curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/v0.4.0/daemonset/generic-kuberouter-only-advertise-routes.yaml
```
Modify the configuration file; the relevant arguments:
```sh
$ cat generic-kuberouter-only-advertise-routes.yaml
```
```yaml
...
- "--run-router=true"
- "--run-firewall=false"
- "--run-service-proxy=false"
- "--enable-cni=false"
- "--enable-pod-egress=false"
- "--enable-ibgp=true"
- "--enable-overlay=true"
# - "--peer-router-ips=<CHANGE ME>"
# - "--peer-router-asns=<CHANGE ME>"
# - "--cluster-asn=<CHANGE ME>"
- "--advertise-cluster-ip=true"
- "--advertise-external-ip=true"
- "--advertise-loadbalancer-ip=true"
...
```
```sh
$ kubectl apply -f generic-kuberouter-only-advertise-routes.yaml
```
```sh
$ kubectl -n kube-system get pods -l k8s-app=kube-router
NAME READY STATUS RESTARTS AGE
kube-router-5q2xw 1/1 Running 0 72m
kube-router-cl9nc 1/1 Running 0 72m
kube-router-tmdnt 1/1 Running 0 72m
```
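Before relying on the advertised routes, confirm that the kube-router pods have established their iBGP peerings; scanning the logs is one way (a sketch):

```sh
$ kubectl -n kube-system logs ds/kube-router | grep -i bgp
```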
Re-install Cilium (native routing, tunnel disabled)
```sh
$ helm delete cilium -n kube-system # remove the previous Helm release
$ export REPLACE_WITH_API_SERVER_IP=172.16.13.80
$ export REPLACE_WITH_API_SERVER_PORT=6443
$ helm install cilium cilium/cilium --version 1.9.10 \
--namespace kube-system \
--set ipam.mode=kubernetes \
--set tunnel=disabled \
--set nativeRoutingCIDR=172.16.13.0/24 \
--set kubeProxyReplacement=strict \
--set k8sServiceHost=$REPLACE_WITH_API_SERVER_IP \
--set k8sServicePort=$REPLACE_WITH_API_SERVER_PORT
```
```sh
kubectl -n kube-system get pods -l k8s-app=cilium
NAME READY STATUS RESTARTS AGE
cilium-f22c2 1/1 Running 0 5m58s
cilium-ftg8n 1/1 Running 0 5m58s
cilium-s4mng 1/1 Running 0 5m58s
```
```sh
$ kubectl -n kube-system exec -ti cilium-s4mng -- ip route list scope global
default via 172.16.13.254 dev ens192 proto static metric 100
10.244.0.0/24 via 10.244.0.1 dev cilium_host src 10.244.0.1 #Local PodCIDR
10.244.1.0/24 via 172.16.13.81 dev ens192 proto 17 #BGP route
10.244.2.0/24 via 172.16.13.82 dev ens192 proto 17 #BGP route
```
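With the tunnel disabled, cross-node pod traffic now follows these BGP-learned routes; a quick smoke test between pods on different nodes (names and IPs taken from the earlier listings; adjust to your cluster, since pods may have been re-addressed after the re-install):

```sh
$ kubectl exec -it busybox-deployment-99db9cf4d-dch94 -- ping -c 3 10.244.2.157
```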
Verify the installation: download the cilium CLI
```sh
$ curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
$ tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
```
Verify the Cilium installation
```sh
$ cilium status --wait
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Hubble: OK
\__/¯¯\__/ ClusterMesh: disabled
\__/
Deployment hubble-relay Desired: 1, Ready: 1/1, Available: 1/1
Deployment hubble-ui Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet cilium Desired: 3, Ready: 3/3, Available: 3/3
Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2
Containers: cilium-operator Running: 2
hubble-relay Running: 1
hubble-ui Running: 1
cilium Running: 3
Cluster Pods: 20/20 managed by Cilium
Image versions cilium quay.io/cilium/cilium:v1.9.10: 3
cilium-operator quay.io/cilium/operator-generic:v1.9.10: 2
hubble-relay quay.io/cilium/hubble-relay:v1.9.10@sha256:f15bc1a1127be143c957158651141443c9fa14683426ef8789cf688fb94cae55: 1
hubble-ui quay.io/cilium/hubble-ui:v0.7.3: 1
hubble-ui quay.io/cilium/hubble-ui-backend:v0.7.3: 1
hubble-ui docker.io/envoyproxy/envoy:v1.14.5: 1
```
Run the end-to-end connectivity test
```sh
$ cilium connectivity test
```
Reference: https://docs.cilium.io/en/v1.9/gettingstarted/k8s-install-default/