兜兜    2021-09-15 15:56:38    2021-10-27 15:14:16   

Keepalived LVS ingress
```sh
The firewall maps ports 80/443 to ports 80/443 on the LVS VIP, and LVS load-balances ports 80/443 of the K8S node IPs. The ingress-nginx-controller service is exposed via HostNetwork on 80/443.
Resulting traffic path: 119.x.x.x:80/443 --> 172.16.100.99:80/443 (LVS VIP) --> 172.16.100.100:80/443, 172.16.100.101:80/443, 172.16.100.102:80/443
```

Planned layout

```sh
+----------------+----------------+--------+--------------------------+
| Host           | IP             | Port   | Software                 |
+----------------+----------------+--------+--------------------------+
| LVS01          | 172.16.100.27  | 80/443 | LVS,Keepalived           |
| LVS02          | 172.16.100.28  | 80/443 | LVS,Keepalived           |
| RS/k8s-master1 | 172.16.100.100 | 80/443 | ingress-nginx-controller |
| RS/k8s-master2 | 172.16.100.101 | 80/443 | ingress-nginx-controller |
| RS/k8s-node1   | 172.16.100.102 | 80/443 | ingress-nginx-controller |
| VIP            | 172.16.100.99  | 80/443 | /                        |
+----------------+----------------+--------+--------------------------+
```

Install LVS and Keepalived (`172.16.100.27/172.16.100.28`)

```sh
$ yum install ipvsadm keepalived -y
$ systemctl enable keepalived
```

Configure keepalived (`172.16.100.27`)

```sh
$ cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
   router_id LVS_27          # router id
}

vrrp_instance VI_1 {
    state MASTER              # master node
    interface ens192          # NIC name
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.100.99
    }
}

virtual_server 172.16.100.99 443 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 172.16.100.100 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.101 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.102 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 172.16.100.99 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 172.16.100.100 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.101 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.102 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
EOF
```

Configure keepalived (`172.16.100.28`)

```sh
$ cat /etc/keepalived/keepalived.conf
...
global_defs {
   router_id LVS_28          # router id, must differ between the two directors
}

vrrp_instance VI_1 {
    state BACKUP              # backup node
    interface ens192          # NIC name
    virtual_router_id 51
    priority 99               # priority
...
```

Configure the RS nodes (`172.16.100.100/172.16.100.101/172.16.100.102`)

```sh
$ cat >/etc/init.d/lvs_rs.sh <<'EOF'
#!/bin/bash
vip=172.16.100.99
mask='255.255.255.255'
dev=lo:1

case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $dev $vip netmask $mask #broadcast $vip up
    echo "The RS Server is Ready!"
    ;;
stop)
    ifconfig $dev down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "The RS Server is Canceled!"
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
EOF
```

Run the script

```sh
$ chmod +x /etc/init.d/lvs_rs.sh
$ /etc/init.d/lvs_rs.sh start
```

```sh
$ ip a    # check that the VIP is bound on lo:1
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.16.100.99/32 scope global lo:1
       valid_lft forever preferred_lft forever
```

Start keepalived (`172.16.100.27/172.16.100.28`)

```sh
$ systemctl start keepalived
```

Check that the VIP is bound

```sh
$ ip a
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:93:ed:a4 brd ff:ff:ff:ff:ff:ff
    inet 172.16.100.27/24 brd 172.16.100.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 172.16.100.99/32 scope global ens192
       valid_lft forever preferred_lft forever
```

Check the LVS status

```sh
$ ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.100.99:80 rr persistent 50
  -> 172.16.100.100:80            Route   1      0          0
  -> 172.16.100.101:80            Route   1      0          0
  -> 172.16.100.102:80            Route   1      0          0
TCP  172.16.100.99:443 rr persistent 50
  -> 172.16.100.100:443           Route   1      0          0
  -> 172.16.100.101:443           Route   1      0          0
  -> 172.16.100.102:443           Route   1      0          0
```
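To confirm the whole chain before pointing the firewall at the VIP, it helps to hit the VIP from a client on the LAN and then force a failover. The sketch below is only an illustration; `demo.example.com` is a placeholder for whichever host your Ingress rules actually serve.

```sh
# From a client in 172.16.100.0/24: request through the VIP (DR mode, so the reply comes straight from the RS)
$ curl -I -H 'Host: demo.example.com' http://172.16.100.99/

# On the active director: the round-robin counters should move on the real servers
$ ipvsadm -L -n --stats

# Simulate a director failure on LVS01 and confirm the VIP floats to LVS02
$ systemctl stop keepalived                 # run on 172.16.100.27
$ ip a show ens192 | grep 172.16.100.99     # run on 172.16.100.28; the VIP should now be bound here
```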

兜兜    2021-09-15 15:55:33    2021-10-27 15:13:48   

kubernetes
部署生产环境Kubernetes集群方式 ```sh kubeadm Kubeadm是一个K8s部署工具,提供kubeadm init和kubeadm join,用于快速部署Kubernetes集群。 二进制包 从github下载发行版的二进制包,手动部署每个组件,组成Kubernetes集群。 ``` 本次测试采用kubeadm搭建集群 架构图 ```sh hostport/nodeport vip:172.16.100.99 +-----------------+ +-----------------+ | keepalived | |K8S-Master1 | | | | | 119.x.x.x | +---------+ | +------->| | | | | | | | | | | | | | | | +--------+ +--------+--> lvs27 | | | +-----------------+ | | | | | | | | 172.16.100.100 +---------+ | | | | +---------+ | | | | | | | | | | +-----------------+ | Client +--------+firewall+---+ | 172.16.100.27 | | |K8S-Master2 | | | | | | | | | | | +---------+ | | | | +---------+ +---------+------->| | | | | | | | | | | | | | | | | | | | | | +--------+ +--------+--> lvs28 | | | +-----------------+ | | | | | 172.16.100.101 | +---------+ | | +-----------------+ | | | |K8S-Node1 | | 172.16.100.2 | | | | | | | | | | | +------->| | +-----------------+ | | +-----------------+ 172.16.100.102 ``` #### 部署高可用高可用负载均衡器 ```sh Kubernetes作为容器集群系统,通过健康检查+重启策略实现了Pod故障自我修复能力,通过调度算法实现将Pod分布式部署,并保持预期副本数,根据Node失效状态自动在其他Node拉起Pod,实现了应用层的高可用性。 针对Kubernetes集群,高可用性还应包含以下两个层面的考虑:Etcd数据库的高可用性和Kubernetes Master组件的高可用性。 而kubeadm搭建的K8s集群,Etcd只起了一个,存在单点,所以我们这里会独立搭建一个Etcd集群。 Master节点扮演着总控中心的角色,通过不断与工作节点上的Kubelet和kube-proxy进行通信来维护整个集群的健康工作状态。如果Master节点故障,将无法使用kubectl工具或者API做任何集群管理。 Master节点主要有三个服务kube-apiserver、kube-controller-manager和kube-scheduler,其中kube-controller-manager和kube-scheduler组件自身通过选择机制已经实现了高可用,所以Master高可用主要针对kube-apiserver组件,而该组件是以HTTP API提供服务,因此对他高可用与Web服务器类似,增加负载均衡器对其负载均衡即可,并且可水平扩容。 ``` [查看安装文档](https://ynotes.cn/blog/article_detail/280) #### 部署Etcd集群 准备cfssl证书生成工具 cfssl是一个开源的证书管理工具,使用json文件生成证书,相比openssl更方便使用。 找任意一台服务器操作,这里用Master节点。 ```sh $ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 $ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 $ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 $ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64 $ mv cfssl_linux-amd64 /usr/local/bin/cfssl $ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson $ mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo ``` 生成Etcd证书 ```sh $ mkdir -p ~/etcd_tls $ cd ~/etcd_tls ``` ```sh $ cat > ca-config.json << EOF { "signing": { "default": { "expiry": "87600h" }, "profiles": { "www": { "expiry": "87600h", "usages": [ "signing", "key encipherment", "server auth", "client auth" ] } } } } EOF ``` ```sh $ cat > ca-csr.json << EOF { "CN": "etcd CA", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Guangdong", "ST": "Guangzhou" } ] } EOF ``` 生成CA证书 ```sh $ cfssl gencert -initca ca-csr.json | cfssljson -bare ca - ``` 使用自签CA签发Etcd HTTPS证书 创建证书申请文件 ```sh $ cat > server-csr.json << EOF { "CN": "etcd", "hosts": [ "172.16.100.100", "172.16.100.101", "172.16.100.102", "172.16.100.103", "172.16.100.104" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Guangdong", "ST": "Guangzhou" } ] } EOF ``` 生成证书 ```sh $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | $ cfssljson -bare server ``` 下载二进制文件 ```sh $ wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz ``` ```sh $ mkdir /opt/etcd/{bin,cfg,ssl} -p $ tar zxvf etcd-v3.4.9-linux-amd64.tar.gz $ mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/ ``` 创建etcd配置文件 ```sh cat > /opt/etcd/cfg/etcd.conf << EOF #[Member] ETCD_NAME="etcd-1" ETCD_DATA_DIR="/var/lib/etcd/default.etcd" ETCD_LISTEN_PEER_URLS="https://172.16.100.100:2380" 
ETCD_LISTEN_CLIENT_URLS="https://172.16.100.100:2379" #[Clustering] ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.100.100:2380" ETCD_ADVERTISE_CLIENT_URLS="https://172.16.100.100:2379" ETCD_INITIAL_CLUSTER="etcd-1=https://172.16.100.100:2380,etcd-2=https://172.16.100.101:2380,etcd-3=https://172.16.100.102:2380" ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" ETCD_INITIAL_CLUSTER_STATE="new" EOF ``` systemd管理etcd ```sh cat > /usr/lib/systemd/system/etcd.service << EOF [Unit] Description=Etcd Server After=network.target After=network-online.target Wants=network-online.target [Service] Type=notify EnvironmentFile=/opt/etcd/cfg/etcd.conf ExecStart=/opt/etcd/bin/etcd --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem --logger=zap Restart=on-failure LimitNOFILE=65536 [Install] WantedBy=multi-user.target EOF ``` 拷贝刚才生成的证书 ```sh $ cp ~/etcd_tls/ca*pem ~/etcd_tls/server*pem /opt/etcd/ssl/ ``` 将上面节点1所有生成的文件拷贝到节点2和节点3 ```sh $ scp -r /opt/etcd/ root@172.16.100.101:/opt/ $ scp -r /opt/etcd/ root@172.16.100.102:/opt/ $ scp -r /usr/lib/systemd/system/etcd.service root@172.16.100.102:/usr/lib/systemd/system/ $ scp -r /usr/lib/systemd/system/etcd.service root@172.16.100.101:/usr/lib/systemd/system/ ``` 三个节点同时启动 ```sh $ systemctl daemon-reload $ systemctl start etcd $ systemctl enable etcd ``` ```sh $ ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.100.100:2379,https://172.16.100.101:2379,https://172.16.100.102:2379" endpoint health --write-out=table ``` ```sh +-----------------------------+--------+-------------+-------+ | ENDPOINT | HEALTH | TOOK | ERROR | +-----------------------------+--------+-------------+-------+ | https://172.16.100.100:2379 | true | 12.812954ms | | | https://172.16.100.102:2379 | true | 13.596982ms | | | https://172.16.100.101:2379 | true | 14.607151ms | | +-----------------------------+--------+-------------+-------+ ``` #### 安装Docker [查看Docker安装部分](https://ynotes.cn/blog/article_detail/271) #### 安装kubeadm、kubelet、kubectl ```sh $ cat > /etc/yum.repos.d/kubernetes.repo <<EOF [kubernetes] name=Kubernetes baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/ enabled=1 gpgcheck=0 repo_gpgcheck=0 gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg EOF ``` 安装kubeadm、kubelet、kubectl ```sh $ yum install -y kubelet-1.18.20 kubeadm-1.18.20 kubectl-1.18.20 $ systemctl enable kubelet ``` 集群配置文件 ```sh $ cat > kubeadm-config.yaml <<EOF apiVersion: kubeadm.k8s.io/v1beta2 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token token: abcdef.0123456789abcdef ttl: 24h0m0s usages: - signing - authentication kind: InitConfiguration localAPIEndpoint: advertiseAddress: 172.16.100.100 bindPort: 6443 nodeRegistration: criSocket: /var/run/dockershim.sock name: k8s-master1 taints: - effect: NoSchedule key: node-role.kubernetes.io/master --- apiServer: certSANs: - k8s-master1 - k8s-master2 - k8s-master3 - 172.16.100.100 - 172.16.100.101 - 172.16.100.102 - 172.16.100.111 - 127.0.0.1 extraArgs: authorization-mode: Node,RBAC timeoutForControlPlane: 4m0s apiVersion: kubeadm.k8s.io/v1beta2 certificatesDir: /etc/kubernetes/pki clusterName: kubernetes controlPlaneEndpoint: 
172.16.100.111:16443 controllerManager: {} dns: type: CoreDNS etcd: external: endpoints: - https://172.16.100.100:2379 - https://172.16.100.101:2379 - https://172.16.100.102:2379 caFile: /opt/etcd/ssl/ca.pem certFile: /opt/etcd/ssl/server.pem keyFile: /opt/etcd/ssl/server-key.pem imageRepository: registry.aliyuncs.com/google_containers kind: ClusterConfiguration kubernetesVersion: v1.18.20 networking: dnsDomain: cluster.local podSubnet: 10.244.0.0/16 serviceSubnet: 10.96.0.0/12 scheduler: {} EOF ``` 集群初始化(`k8s-master1`) ```sh $ kubeadm init --config kubeadm-config.yaml ``` ```sh Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root: kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \ --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde \ --control-plane Then you can join any number of worker nodes by running the following on each as root: kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \ --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde ``` 配置访问集群的配置 ```sh $ mkdir -p $HOME/.kube $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config $ sudo chown $(id -u):$(id -g) $HOME/.kube/config ``` 加入集群(`k8s-master2`) 拷贝k8s-master1证书 ```sh $ scp -r 172.16.100.100:/etc/kubernetes/pki/ /etc/kubernetes/ ``` ```sh $ kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \ --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde \ --control-plane ``` 加入集群(`k8s-node1`) ```sh $ kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \ --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde ``` 修改主节点可部署pod ```sh $ kubectl taint node k8s-master1 node-role.kubernetes.io/master- $ kubectl taint node k8s-master2 node-role.kubernetes.io/master- ``` #### 部署网络 `方式一:calico(BGP/IPIP)`:比较主流的网络组件,BGP模式性能最好,缺点是pod无法和非BGP节点通信,IPIP无网络限制,相比BGP性能略差。 [参考:k8s配置calico网络](https://ynotes.cn/blog/article_detail/274) `方式二:flannel(vxlan/host-gw)`:host-gw性能最好,缺点是pod无法和非集群节点通信。vxlan可跨网络,相比host-gw性能略差。 [参考:K8S-kubeadm部署k8s集群(一) flannel部分](https://ynotes.cn/blog/article_detail/271) `方式三:cilium(eBFP)+kube-router(BGP路由)`:三种方式性能最好,缺点pod和非BGP节点无法通信,目前是Beta阶段,生产不推荐。 [参考:k8s配置cilium网络](https://ynotes.cn/blog/article_detail/286) #### 重置K8S集群 ##### kubeadm重置节点 ```sh $ kubeadm reset #kubeadm reset 负责从使用 kubeadm init 或 kubeadm join 命令创建的文件中清除节点本地文件系统。对于控制平面节点,reset 还从 etcd 集群中删除该节点的本地 etcd 堆成员,还从 kubeadm ClusterStatus 对象中删除该节点的信息。 ClusterStatus 是一个 kubeadm 管理的 Kubernetes API 对象,该对象包含 kube-apiserver 端点列表。 #kubeadm reset phase 可用于执行上述工作流程的各个阶段。 要跳过阶段列表,您可以使用 --skip-phases 参数,该参数的工作方式类似于 kubeadm join 和 kubeadm init 阶段运行器。 ``` ##### 清理网络配置(flannel) ```sh $ systemctl stop kubelet $ systemctl stop docker $ rm /etc/cni/net.d/* -rf $ ifconfig cni0 down $ ifconfig flannel.1 down $ ifconfig docker0 down $ ip link delete cni0 $ ip link delete 
flannel.1 $ systemctl start docker ``` ##### 清理iptables ```sh $ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X ``` ##### 清理ipvs ```sh $ ipvsadm --clear ``` ##### 清理外部 etcd _`如果使用了外部 etcd,kubeadm reset 将不会删除任何 etcd 中的数据。这意味着,如果再次使用相同的 etcd 端点运行 kubeadm init,您将看到先前集群的状态。`_ 要清理 etcd 中的数据,建议您使用 etcdctl 这样的客户端,例如: ```sh $ etcdctl del "" --prefix ```
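Because the etcd used here is external and served over TLS, the bare `etcdctl del` above will typically fail against this cluster (by default it targets a plaintext localhost endpoint). A minimal sketch of wiping the keyspace and then confirming it is empty, reusing the TLS flags from the earlier health check:

```sh
$ ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
    --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem \
    --endpoints="https://172.16.100.100:2379,https://172.16.100.101:2379,https://172.16.100.102:2379" \
    del "" --prefix

# Kubernetes stores its objects under /registry, so an empty keyspace prints nothing here
$ ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
    --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem \
    --endpoints="https://172.16.100.100:2379" \
    get /registry --prefix --keys-only | head
```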

兜兜    2021-09-15 15:38:16    2021-10-27 15:14:05   

nginx Keepalived
```sh
Provide an HA endpoint for the K8S kube-apiserver: nginx's stream module does L4 load balancing, and keepalived makes nginx itself highly available.
Resulting traffic path: 172.16.100.111:16443 --> 172.16.100.100:6443 / 172.16.100.101:6443
```

Planned layout

```sh
+-------------+----------------+-------+----------------------------+
| Host        | IP             | Port  | Software                   |
+-------------+----------------+-------+----------------------------+
| k8s-master1 | 172.16.100.100 | 6443  | Nginx,Keepalived,ApiServer |
| k8s-master2 | 172.16.100.101 | 6443  | Nginx,Keepalived,ApiServer |
| VIP         | 172.16.100.111 | 16443 | /                          |
+-------------+----------------+-------+----------------------------+
```

Install nginx and keepalived on both nodes

```sh
$ yum install nginx nginx-mod-stream keepalived -y   # nginx-mod-stream provides the L4 stream module
```

Configure nginx on both nodes

```sh
$ cat /etc/nginx/nginx.conf
...
events {
    worker_connections 1024;
}

# L4 load balancing for the two Master kube-apiserver instances
stream {

    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
       server 172.16.100.100:6443;   # Master1 APISERVER IP:PORT
       server 172.16.100.101:6443;   # Master2 APISERVER IP:PORT
    }

    server {
       listen 16443;  # nginx runs on the master nodes themselves, so it cannot listen on 6443 or it would conflict with the apiserver
       proxy_pass k8s-apiserver;
    }
}
...
```

Configure keepalived on the master node

```sh
$ cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
   router_id keepalived_100
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192          # change to the actual NIC name
    virtual_router_id 51      # VRRP router ID, unique per instance
    priority 100              # priority, set 90 on the backup
    advert_int 1              # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        172.16.100.111/24
    }
    track_script {
        check_nginx
    }
}
EOF
```

Nginx check script (create it on both nodes)

```sh
$ cat >/etc/keepalived/check_nginx.sh <<'EOF'
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
$ chmod +x /etc/keepalived/check_nginx.sh
```

Start nginx and keepalived on the master node

```sh
systemctl restart nginx
systemctl enable nginx
systemctl restart keepalived
systemctl enable keepalived
```

Configure keepalived on the backup node

```sh
$ cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
   router_id keepalived_101
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192          # change to the actual NIC name
    virtual_router_id 51      # VRRP router ID, unique per instance
    priority 90               # priority, set 90 on the backup
    advert_int 1              # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        172.16.100.111/24
    }
    track_script {
        check_nginx
    }
}
EOF
```

Start nginx and keepalived on both nodes

```sh
systemctl restart nginx
systemctl enable nginx
systemctl restart keepalived
systemctl enable keepalived
```

Check that the VIP is bound

```sh
$ ip a
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:93:6a:3a brd ff:ff:ff:ff:ff:ff
    inet 172.16.100.101/24 brd 172.16.100.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 172.16.100.111/32 scope global ens192
```

Test

```sh
$ curl http://172.16.100.111
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
```
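The 404 above only proves that some nginx answered on the VIP. A more meaningful check, plus the failover test, looks roughly like this (`-k` skips certificate verification because we connect by IP):

```sh
# Through the VIP and the stream proxy to a kube-apiserver
$ curl -k https://172.16.100.111:16443/version     # should return the apiserver version JSON (or at least a TLS response from it)

# Kill nginx on the current MASTER and watch keepalived move the VIP
$ systemctl stop nginx                             # on 172.16.100.100
$ ip a show ens192 | grep 172.16.100.111           # on 172.16.100.101: the VIP should appear within a few seconds
$ curl -k https://172.16.100.111:16443/version     # requests keep working through the backup
```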

兜兜    2021-09-11 14:22:25    2021-10-27 15:15:46   

kubernetes CNI cilium
`注意:cilium+kube-router目前仅beta阶段` #### 准备阶段 系统环境 ```sh 节点类型 IP 主机名 系统 内核版本 Master 172.16.13.80 shudoon101 CentOS7.9.2009 5.4.144-1.el7.elrepo.x86_64 Node 172.16.13.81 shudoon102 CentOS7.9.2009 5.4.144-1.el7.elrepo.x86_64 Node 172.16.13.82 shudoon103 CentOS7.9.2009 5.4.144-1.el7.elrepo.x86_64 ``` 软件环境 ```sh CentOS7 kernel 5.4.144 (cilium 内核要求4.9.17+) kubernets 1.20 cilium(1.9.10): CNI网络插件,包含kube-proxy代理功能 kube-router: 仅使用它的BGP路由功能 ``` 挂载bpf文件系统 ```sh $ mount bpffs /sys/fs/bpf -t bpf ``` ```sh $ cat >> /etc/fstab <<E0F bpffs /sys/fs/bpf bpf defaults 0 0 EOF ``` #### 安装cilium 快速安装 ```sh kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml ``` 查看状态 ```sh $ kubectl -n kube-system get pods ``` ```sh NAME READY STATUS RESTARTS AGE cilium-csknd 1/1 Running 0 87m cilium-dwz2c 1/1 Running 0 87m cilium-nq8r9 1/1 Running 1 87m cilium-operator-59c9d5bc94-sh4c2 1/1 Running 0 87m coredns-7f89b7bc75-bhgrp 1/1 Running 0 7m24s coredns-7f89b7bc75-sfgvq 1/1 Running 0 7m24s ``` 部署connectivity test ```sh $ wget https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml $ sed -i 's/google.com/baidu.com/g' connectivity-check.yaml #测试外网的地址改成baidu.com $ kubectl apply -f connectivity-check.yaml ``` 查看pod的状态 ```sh kubectl get pods NAME READY STATUS RESTARTS AGE echo-a-dc9bcfd8f-hgc64 1/1 Running 0 9m59s echo-b-5884b7dc69-bl5px 1/1 Running 0 9m59s echo-b-host-cfdd57978-dg6gw 1/1 Running 0 9m59s host-to-b-multi-node-clusterip-c4ff7ff64-m9zwz 1/1 Running 0 9m58s host-to-b-multi-node-headless-84d8f6f4c4-8b797 1/1 Running 1 9m57s pod-to-a-5cdfd4754d-jgmnt 1/1 Running 0 9m59s pod-to-a-allowed-cnp-7d7c8f9f9b-f9lpc 1/1 Running 0 9m58s pod-to-a-denied-cnp-75cb89dfd-jsjd4 1/1 Running 0 9m59s pod-to-b-intra-node-nodeport-c6d79965d-w98jx 1/1 Running 1 9m57s pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5 1/1 Running 0 9m58s pod-to-b-multi-node-headless-6696c5f8cd-fvcsl 1/1 Running 1 9m58s pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns 1/1 Running 0 9m57s pod-to-external-1111-d5c7bb4c4-sflfc 1/1 Running 0 9m59s pod-to-external-fqdn-allow-google-cnp-dcb4d867d-dxqzx 1/1 Running 0 7m16s ``` #### 安装Hubble ##### 安装Hubble UI 配置环境变量 ```sh $ export CILIUM_NAMESPACE=kube-system ``` 快速安装 `Cilium 1.9.2+` ```sh $ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-hubble-install.yaml ``` helm安装 `Cilium <1.9.2` ```sh # Set this to your installed Cilium version $ export CILIUM_VERSION=1.9.10 # Please set any custom Helm values you may need for Cilium, # such as for example `--set operator.replicas=1` on single-cluster nodes. 
$ helm template cilium cilium/cilium --version $CILIUM_VERSION \\ --namespace $CILIUM_NAMESPACE \\ --set hubble.tls.auto.method="cronJob" \\ --set hubble.listenAddress=":4244" \\ --set hubble.relay.enabled=true \\ --set hubble.ui.enabled=true > cilium-with-hubble.yaml # This will modify your existing Cilium DaemonSet and ConfigMap $ kubectl apply -f cilium-with-hubble.yaml ``` 配置端口转发 ```sh $ kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80 ``` ##### 安装Hubble CLI 下载hubble CLI ```sh $ export CILIUM_NAMESPACE=kube-system $ export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt) $ curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz" $ tar zxf hubble-linux-amd64.tar.gz $ mv hubble /usr/local/bin ``` 开启端口转发 ```sh $ kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80 ``` 查看节点状态 ```sh hubble --server localhost:4245 status ``` ```sh Healthcheck (via localhost:4245): Ok Current/Max Flows: 12288/12288 (100.00%) Flows/s: 22.71 Connected Nodes: 3/3 ``` 观察cilium信息 ```sh hubble --server localhost:4245 observe ``` ```sh Sep 11 06:52:06.119: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH) Sep 11 06:52:06.122: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 to-overlay FORWARDED (TCP Flags: ACK, PSH) Sep 11 06:52:06.122: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN) Sep 11 06:52:06.123: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 to-overlay FORWARDED (TCP Flags: ACK, FIN) Sep 11 06:52:06.123: default/pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5:46762 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK) Sep 11 06:52:06.793: default/pod-to-a-5cdfd4754d-jgmnt:54735 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP) Sep 11 06:52:06.793: default/pod-to-a-5cdfd4754d-jgmnt:54735 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP) Sep 11 06:52:06.793: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-a-5cdfd4754d-jgmnt:54735 to-overlay FORWARDED (UDP) Sep 11 06:52:06.793: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-a-5cdfd4754d-jgmnt:54735 to-overlay FORWARDED (UDP) Sep 11 06:52:06.903: kube-system/coredns-7f89b7bc75-bhgrp:32928 -> 172.16.13.80:6443 to-stack FORWARDED (TCP Flags: ACK) Sep 11 06:52:06.903: kube-system/coredns-7f89b7bc75-bhgrp:32928 <- 172.16.13.80:6443 to-endpoint FORWARDED (TCP Flags: ACK) Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: SYN) Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: SYN, ACK) Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK) Sep 11 06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK, PSH) Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: ACK, PSH) Sep 11 06:52:07.003: 10.244.0.1:36858 <- kube-system/coredns-7f89b7bc75-bhgrp:8181 to-stack FORWARDED (TCP Flags: ACK, FIN) Sep 11 
06:52:07.003: 10.244.0.1:36858 -> kube-system/coredns-7f89b7bc75-bhgrp:8181 to-endpoint FORWARDED (TCP Flags: ACK, FIN) Sep 11 06:52:07.043: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP) Sep 11 06:52:07.043: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP) Sep 11 06:52:07.043: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 to-overlay FORWARDED (UDP) Sep 11 06:52:07.043: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:49051 to-overlay FORWARDED (UDP) Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: SYN) Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN, ACK) Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK) Sep 11 06:52:07.044: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, PSH) Sep 11 06:52:07.044: kube-system/hubble-relay-74b76459f9-d66h9:60564 -> 172.16.13.81:4244 to-stack FORWARDED (TCP Flags: ACK) Sep 11 06:52:07.046: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN) Sep 11 06:52:07.046: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: SYN, ACK) Sep 11 06:52:07.047: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK) Sep 11 06:52:07.047: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH) Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH) Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, FIN) Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN) Sep 11 06:52:07.048: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK) Sep 11 06:52:07.050: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: ACK, PSH) Sep 11 06:52:07.051: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN) Sep 11 06:52:07.051: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 to-overlay FORWARDED (TCP Flags: ACK, FIN) Sep 11 06:52:07.051: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57234 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK) Sep 11 06:52:07.095: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <> 
kube-system/coredns-7f89b7bc75-bhgrp:53 to-overlay FORWARDED (UDP) Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <> kube-system/coredns-7f89b7bc75-bhgrp:53 to-overlay FORWARDED (UDP) Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP) Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 -> kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP) Sep 11 06:52:07.096: kube-system/coredns-7f89b7bc75-bhgrp:53 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 to-overlay FORWARDED (UDP) Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <- kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP) Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:43239 <- kube-system/coredns-7f89b7bc75-bhgrp:53 to-endpoint FORWARDED (UDP) Sep 11 06:52:07.096: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: SYN) Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN, ACK) Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK) Sep 11 06:52:07.097: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, PSH) Sep 11 06:52:07.099: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: SYN) Sep 11 06:52:07.099: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: SYN, ACK) Sep 11 06:52:07.099: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK) Sep 11 06:52:07.100: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH) Sep 11 06:52:07.101: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH) Sep 11 06:52:07.101: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <> default/echo-b-5884b7dc69-bl5px:8080 to-overlay FORWARDED (TCP Flags: ACK, FIN) Sep 11 06:52:07.102: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 <- default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN) Sep 11 06:52:07.103: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: ACK, PSH) Sep 11 06:52:07.104: default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 -> default/echo-b-5884b7dc69-bl5px:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN) Sep 11 06:52:07.104: default/echo-b-5884b7dc69-bl5px:8080 <> default/pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns:57236 to-overlay FORWARDED (TCP Flags: ACK, FIN) ``` 浏览器访问 http://127.0.0.1:12000/ #### Cilium替换kube-proxy(`默认为共存模式Probe`) 删除kube-proxy ```sh $ kubectl -n kube-system delete ds kube-proxy #删除kube-proxy DaemonSet # Delete the configmap as well to avoid kube-proxy being reinstalled during a kubeadm upgrade (works only for K8s 1.19 and newer) $ 
kubectl -n kube-system delete cm kube-proxy # 所有节点执行,清除kube-proxy相关的规则 $ iptables-restore <(iptables-save | grep -v KUBE) ``` 卸载快速安装的cilium ```sh $ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml ``` helm3 重新安装cilium库 ```sh $ helm repo add cilium https://helm.cilium.io/ ``` ```sh $ export REPLACE_WITH_API_SERVER_IP=172.16.13.80 $ export REPLACE_WITH_API_SERVER_PORT=6443 $ helm upgrade --install cilium cilium/cilium --version 1.9.10 \ --namespace kube-system \ --set kubeProxyReplacement=strict \ --set k8sServiceHost=REPLACE_WITH_API_SERVER_IP \ --set k8sServicePort=REPLACE_WITH_API_SERVER_PORT ``` 查看部署的模式 ```sh $ kubectl -n kube-system get pods -l k8s-app=cilium NAME READY STATUS RESTARTS AGE cilium-9szpn 1/1 Running 0 45s cilium-fllk6 1/1 Running 0 45s cilium-sn2q5 1/1 Running 0 45s $ kubectl exec -it -n kube-system cilium-9szpn -- cilium status | grep KubeProxyReplacement KubeProxyReplacement: Strict [ens192 (Direct Routing)] #Strict为严格模式 ``` 查看cilium的状态 ```sh $ kubectl exec -ti -n kube-system cilium-9szpn -- cilium status --verbose ``` ```sh KVStore: Ok Disabled Kubernetes: Ok 1.20 (v1.20.10) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Strict [ens192 (Direct Routing)] Cilium: Ok 1.9.10 (v1.9.10-4e26039) NodeMonitor: Listening for events on 8 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 6/255 allocated from 10.244.1.0/24, Allocated addresses: 10.244.1.164 (health) 10.244.1.169 (default/pod-to-b-intra-node-nodeport-c6d79965d-w98jx [restored]) 10.244.1.211 (default/busybox-deployment-99db9cf4d-dch94 [restored]) 10.244.1.217 (default/echo-b-5884b7dc69-bl5px [restored]) 10.244.1.221 (router) 10.244.1.6 (default/busybox-deployment-99db9cf4d-pgxzz [restored]) BandwidthManager: Disabled Host Routing: Legacy Masquerading: BPF [ens192] 10.244.1.0/24 Clock Source for BPF: ktime Controller Status: 38/38 healthy Name Last success Last error Count Message cilium-health-ep 42s ago never 0 no error dns-garbage-collector-job 53s ago never 0 no error endpoint-1383-regeneration-recovery never never 0 no error endpoint-1746-regeneration-recovery never never 0 no error endpoint-1780-regeneration-recovery never never 0 no error endpoint-3499-regeneration-recovery never never 0 no error endpoint-546-regeneration-recovery never never 0 no error endpoint-777-regeneration-recovery never never 0 no error k8s-heartbeat 23s ago never 0 no error mark-k8s-node-as-available 4m43s ago never 0 no error metricsmap-bpf-prom-sync 8s ago never 0 no error neighbor-table-refresh 4m43s ago never 0 no error resolve-identity-777 4m42s ago never 0 no error restoring-ep-identity (1383) 4m43s ago never 0 no error restoring-ep-identity (1746) 4m43s ago never 0 no error restoring-ep-identity (1780) 4m43s ago never 0 no error restoring-ep-identity (3499) 4m43s ago never 0 no error restoring-ep-identity (546) 4m43s ago never 0 no error sync-endpoints-and-host-ips 43s ago never 0 no error sync-lb-maps-with-k8s-services 4m43s ago never 0 no error sync-policymap-1383 41s ago never 0 no error sync-policymap-1746 41s ago never 0 no error sync-policymap-1780 41s ago never 0 no error sync-policymap-3499 41s ago never 0 no error sync-policymap-546 40s ago never 0 no 
error sync-policymap-777 41s ago never 0 no error sync-to-k8s-ciliumendpoint (1383) 3s ago never 0 no error sync-to-k8s-ciliumendpoint (1746) 3s ago never 0 no error sync-to-k8s-ciliumendpoint (1780) 3s ago never 0 no error sync-to-k8s-ciliumendpoint (3499) 3s ago never 0 no error sync-to-k8s-ciliumendpoint (546) 3s ago never 0 no error sync-to-k8s-ciliumendpoint (777) 12s ago never 0 no error template-dir-watcher never never 0 no error update-k8s-node-annotations 4m51s ago never 0 no error waiting-initial-global-identities-ep (1383) 4m43s ago never 0 no error waiting-initial-global-identities-ep (1746) 4m43s ago never 0 no error waiting-initial-global-identities-ep (1780) 4m43s ago never 0 no error waiting-initial-global-identities-ep (3499) 4m43s ago never 0 no error Proxy Status: OK, ip 10.244.1.221, 0 redirects active on ports 10000-20000 Hubble: Ok Current/Max Flows: 3086/4096 (75.34%), Flows/s: 11.02 Metrics: Disabled KubeProxyReplacement Details: Status: Strict Protocols: TCP, UDP Devices: ens192 (Direct Routing) Mode: SNAT Backend Selection: Random Session Affinity: Enabled XDP Acceleration: Disabled Services: #下面Enabled表示支持的类型 - ClusterIP: Enabled - NodePort: Enabled (Range: 30000-32767) - LoadBalancer: Enabled - externalIPs: Enabled - HostPort: Enabled BPF Maps: dynamic sizing: on (ratio: 0.002500) Name Size Non-TCP connection tracking 73653 TCP connection tracking 147306 Endpoint policy 65535 Events 8 IP cache 512000 IP masquerading agent 16384 IPv4 fragmentation 8192 IPv4 service 65536 IPv6 service 65536 IPv4 service backend 65536 IPv6 service backend 65536 IPv4 service reverse NAT 65536 IPv6 service reverse NAT 65536 Metrics 1024 NAT 147306 Neighbor table 147306 Global policy 16384 Per endpoint policy 65536 Session affinity 65536 Signal 8 Sockmap 65535 Sock reverse NAT 73653 Tunnel 65536 Cluster health: 3/3 reachable (2021-09-11T08:08:45Z) Name IP Node Endpoints shudoon102 (localhost) 172.16.13.81 reachable reachable shudoon101 172.16.13.80 reachable reachable shudoon103 172.16.13.82 reachable reachable ``` 创建nginx服务 nginx的deployment ```sh cat nginx-deployment.yaml ``` ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-nginx spec: selector: matchLabels: run: my-nginx replicas: 2 template: metadata: labels: run: my-nginx spec: containers: - name: my-nginx image: nginx ports: - containerPort: 80 ``` 执行部署 ```sh $ kubectl apply -f nginx-deployment.yaml ``` 获取nginx的pod运行状态 ```sh $ kubectl get pods -l run=my-nginx -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES my-nginx-5b56ccd65f-kdrhn 1/1 Running 0 3m36s 10.244.2.157 shudoon103 <none> <none> my-nginx-5b56ccd65f-ltxc6 1/1 Running 0 3m36s 10.244.1.9 shudoon102 <none> <none> ``` ```sh $ kubectl get svc my-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx NodePort 10.97.255.81 <none> 80:30074/TCP 4h ``` 查看Cilium eBPF生成的服务信息 ```sh kubectl exec -it -n kube-system cilium-9szpn -- cilium service list ID Frontend Service Type Backend 1 10.111.52.74:80 ClusterIP 1 => 10.244.2.249:8081 2 10.96.0.1:443 ClusterIP 1 => 172.16.13.80:6443 3 10.96.0.10:53 ClusterIP 1 => 10.244.0.37:53 2 => 10.244.0.232:53 4 10.96.0.10:9153 ClusterIP 1 => 10.244.0.37:9153 2 => 10.244.0.232:9153 5 10.102.133.211:8080 ClusterIP 1 => 10.244.2.129:8080 6 10.100.95.181:8080 ClusterIP 1 => 10.244.1.217:8080 7 0.0.0.0:31414 NodePort 1 => 10.244.1.217:8080 8 172.16.13.81:31414 NodePort 1 => 10.244.1.217:8080 9 10.97.255.81:80 ClusterIP 1 => 10.244.2.157:80 #service的10.97.255.81对应pod后端 2 => 10.244.1.9:80 10 0.0.0.0:30074 
NodePort 1 => 10.244.2.157:80 #NodePort 2 => 10.244.1.9:80 11 172.16.13.81:30074 NodePort 1 => 10.244.2.157:80 2 => 10.244.1.9:80 12 10.98.83.147:80 ClusterIP 1 => 10.244.2.144:4245 13 172.16.13.81:40000 HostPort 1 => 10.244.1.217:8080 14 0.0.0.0:40000 HostPort 1 => 10.244.1.217:8080 ``` 查看iptables是否生成规则 ```sh $ iptables-save | grep KUBE-SVC [ empty line ] ``` 测试nginx服务 ```sh $curl http://10.97.255.81 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> ... ``` #### 整合kube-router ```sh $ curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/v0.4.0/daemonset/generic-kuberouter-only-advertise-routes.yaml ``` 修改配置文件 ```sh $ cat generic-kuberouter-only-advertise-routes.yaml ``` ```yaml ... - "--run-router=true" - "--run-firewall=false" - "--run-service-proxy=false" - "--enable-cni=false" - "--enable-pod-egress=false" - "--enable-ibgp=true" - "--enable-overlay=true" # - "--peer-router-ips=<CHANGE ME>" # - "--peer-router-asns=<CHANGE ME>" # - "--cluster-asn=<CHANGE ME>" - "--advertise-cluster-ip=true" - "--advertise-external-ip=true" - "--advertise-loadbalancer-ip=true" ... ``` ```sh $ kubectl apply -f generic-kuberouter-only-advertise-routes.yaml ``` ```sh $ kubectl -n kube-system get pods -l k8s-app=kube-router NAME READY STATUS RESTARTS AGE kube-router-5q2xw 1/1 Running 0 72m kube-router-cl9nc 1/1 Running 0 72m kube-router-tmdnt 1/1 Running 0 72m ``` 重新安装 ```sh $ helm delete cilium -n kube-system #删除旧的helm $ export REPLACE_WITH_API_SERVER_IP=172.16.13.80 $ export REPLACE_WITH_API_SERVER_PORT=6443 $ helm install cilium cilium/cilium --version 1.9.10 \ --namespace kube-system \ --set ipam.mode=kubernetes \ --set tunnel=disabled \ --set nativeRoutingCIDR=172.16.13.0/24 \ --set kubeProxyReplacement=strict \ --set k8sServiceHost=$REPLACE_WITH_API_SERVER_IP \ --set k8sServicePort=$REPLACE_WITH_API_SERVER_PORT ``` ```sh kubectl -n kube-system get pods -l k8s-app=cilium NAME READY STATUS RESTARTS AGE cilium-f22c2 1/1 Running 0 5m58s cilium-ftg8n 1/1 Running 0 5m58s cilium-s4mng 1/1 Running 0 5m58s ``` ```sh $ kubectl -n kube-system exec -ti cilium-s4mng -- ip route list scope global default via 172.16.13.254 dev ens192 proto static metric 100 10.244.0.0/24 via 10.244.0.1 dev cilium_host src 10.244.0.1 #Local PodCIDR 10.244.1.0/24 via 172.16.13.81 dev ens192 proto 17 #BGP route 10.244.2.0/24 via 172.16.13.82 dev ens192 proto 17 #BGP route ``` 验证安装 ```sh $ curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz $ tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin ``` 验证cilium安装 ```sh $ cilium status --wait /¯¯\ /¯¯\__/¯¯\ Cilium: OK \__/¯¯\__/ Operator: OK /¯¯\__/¯¯\ Hubble: OK \__/¯¯\__/ ClusterMesh: disabled \__/ Deployment hubble-relay Desired: 1, Ready: 1/1, Available: 1/1 Deployment hubble-ui Desired: 1, Ready: 1/1, Available: 1/1 DaemonSet cilium Desired: 3, Ready: 3/3, Available: 3/3 Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2 Containers: cilium-operator Running: 2 hubble-relay Running: 1 hubble-ui Running: 1 cilium Running: 3 Cluster Pods: 20/20 managed by Cilium Image versions cilium quay.io/cilium/cilium:v1.9.10: 3 cilium-operator quay.io/cilium/operator-generic:v1.9.10: 2 hubble-relay quay.io/cilium/hubble-relay:v1.9.10@sha256:f15bc1a1127be143c957158651141443c9fa14683426ef8789cf688fb94cae55: 1 hubble-ui quay.io/cilium/hubble-ui:v0.7.3: 1 hubble-ui quay.io/cilium/hubble-ui-backend:v0.7.3: 1 hubble-ui docker.io/envoyproxy/envoy:v1.14.5: 1 ``` 验证k8s的cilium ```sh $ cilium connectivity test 
```

Reference: https://docs.cilium.io/en/v1.9/gettingstarted/k8s-install-default/
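As a final check that service traffic really is handled by Cilium's eBPF datapath on every node, and not by leftover iptables rules, the `my-nginx` service created earlier can be probed from both directions. A minimal sketch:

```sh
# ClusterIP and NodePort of the my-nginx service created above
$ curl -s http://10.97.255.81 | head -n 4
$ curl -s -o /dev/null -w '%{http_code}\n' http://172.16.13.80:30074

# Every cilium agent should have programmed backends for the NodePort
$ for p in $(kubectl -n kube-system get pods -l k8s-app=cilium -o name); do
    kubectl -n kube-system exec $p -- cilium service list | grep 30074
  done

# And no kube-proxy service rules should remain
$ iptables-save | grep -c KUBE-SVC     # expect 0
```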

兜兜    2021-09-08 14:00:24    2021-10-27 15:15:32   

kubernetes calico flannel
Calico网络 ```sh Calico主要由三个部分组成: Felix:以DaemonSet方式部署,运行在每一个Node节点上,主要负责维护宿主机上路由规则以及ACL规则。 BGP Client(BIRD):主要负责把 Felix 写入 Kernel 的路由信息分发到集群 Calico 网络。 Etcd:分布式键值存储,保存Calico的策略和网络配置状态。 calicoctl:允许您从简单的命令行界面实现高级策略和网络。 ``` #### 一、卸载flannel 1.1 k8s删除flannel pod ```sh kubectl delete -f kube-flanneld.yaml ``` 1.2 删除flannel网卡 ```sh $ ip link delete cni0 $ ip link delete flannel.1 #删除flannel网卡,如果是udp模式,则网卡为:flannel.0 ``` 1.3 查看路由 ```sh ip route ``` ```sh default via 172.16.13.254 dev ens192 proto static metric 100 10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 10.244.1.0/24 via 172.16.13.81 dev ens192 10.244.2.0/24 via 172.16.13.82 dev ens192 172.16.13.0/24 dev ens192 proto kernel scope link src 172.16.13.80 metric 100 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 ``` 1.4 删除路由 ```sh ip route delete 10.244.1.0/24 via 172.16.13.81 dev ens192 ip route delete 10.244.2.0/24 via 172.16.13.82 dev ens192 ``` `注意:不要清除防火墙规则` #### 二、安装calico ##### 2.1 下载calico安装文件 ```sh wget https://docs.projectcalico.org/manifests/calico-etcd.yaml ``` ##### 2.2 修改etcd证书 获取证书和秘钥的base64编码结果 ```sh $ cat /etc/kubernetes/pki/etcd/ca.crt |base64 -w 0 $ cat /etc/kubernetes/pki/etcd/server.crt |base64 -w 0 $ cat /etc/kubernetes/pki/etcd/server.key |base64 -w 0 ``` 修改calico-etcd.yaml ```sh $ vim calico-etcd.yaml ``` ```yaml --- apiVersion: v1 kind: Secret type: Opaque metadata: name: calico-etcd-secrets namespace: kube-system data: etcd-ca: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0VENDQWNtZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkbGRHTmsKTFdOaE1CNFhEVEl4TURrd05EQXl..." #cat /etc/kubernetes/pki/ca.crt |base64 -w 0 输出的结果 etcd-cert: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURQVENDQWlXZ0F3SUJBZ0lJUmsrNkR4Szdrb0V3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSFpYUmpaQzFqWVRBZUZ3M" #cat /etc/kubernetes/pki/server.crt |base64 -w 0 输出的结果 etcd-key: "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBdFNTTHBDMUxyWjdTcTBCTmh5UjlTYi83OThXTHJxNHNoZUUzc2RKQVA2UzJpR0VxCnBtUVh" #cat /etc/kubernetes/pki/server.crt |base64 -w 0 输出的结果 --- kind: ConfigMap apiVersion: v1 metadata: name: calico-config namespace: kube-system data: etcd_endpoints: "https://172.16.13.80:2379" #etc的地址 etcd_ca: "/calico-secrets/etcd-ca" etcd_cert: "/calico-secrets/etcd-cert" etcd_key: "/calico-secrets/etcd-key" ... - name: IP_AUTODETECTION_METHOD value: "interface=ens.*" #修改查找节点网卡名的匹配规则 # Auto-detect the BGP IP address. - name: IP value: "autodetect" # Enable IPIP - name: CALICO_IPV4POOL_IPIP value: "Never" #Always表示IPIP模式,修改为Never,启用BGP - name: CALICO_IPV4POOL_CIDR value: "10.244.0.0/16" #修改CIDR的子网 ... 
``` 部署calico ```sh $ kubectl apply -f calico-etcd.yaml ``` 查看calico的pod状态 ```sh $ kubectl get pods -n kube-system ``` ```sh kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-5499fb6db5-w4b4z 1/1 Running 0 15h calico-node-569mh 1/1 Running 0 15h calico-node-g6m6j 1/1 Running 0 15h calico-node-g7p7w 1/1 Running 0 15h coredns-7f89b7bc75-gdzxn 0/1 pending 13 4d3h coredns-7f89b7bc75-s5shx 0/1 pending 13 4d3h etcd-shudoon101 1/1 Running 1 4d3h kube-apiserver-shudoon101 1/1 Running 1 4d3h kube-controller-manager-shudoon101 1/1 Running 1 4d3h kube-proxy-dpvzs 1/1 Running 0 4d2h kube-proxy-svckb 1/1 Running 1 4d3h kube-proxy-xlqvh 1/1 Running 0 4d2h kube-scheduler-shudoon101 1/1 Running 2 4d3h ``` 重建coredns(网络不通) ```sh $ kubectl get deployment -n kube-system -o yaml >coredns.yaml $ kubectl delete -f coredns.yaml $ kubectl apply -f coredns.yaml ``` 查看coredns网络信息 ```sh $ kubectl get pods -n kube-system -o wide ``` ```sh NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES calico-kube-controllers-5499fb6db5-79g4k 1/1 Running 0 3m6s 172.16.13.80 shudoon101 <none> <none> calico-node-hr46s 1/1 Running 0 90m 172.16.13.81 shudoon102 <none> <none> calico-node-n5h78 1/1 Running 0 90m 172.16.13.82 shudoon103 <none> <none> calico-node-vmrbq 1/1 Running 0 90m 172.16.13.80 shudoon101 <none> <none> coredns-7f89b7bc75-c874x 1/1 Running 0 3m6s 10.244.236.192 shudoon101 <none> <none> coredns-7f89b7bc75-ssv86 1/1 Running 0 3m6s 10.244.236.193 shudoon101 <none> <none> etcd-shudoon101 1/1 Running 0 125m 172.16.13.80 shudoon101 <none> <none> kube-apiserver-shudoon101 1/1 Running 0 125m 172.16.13.80 shudoon101 <none> <none> kube-controller-manager-shudoon101 1/1 Running 0 125m 172.16.13.80 shudoon101 <none> <none> kube-proxy-fbbkw 1/1 Running 0 124m 172.16.13.81 shudoon102 <none> <none> kube-proxy-mrghg 1/1 Running 0 125m 172.16.13.80 shudoon101 <none> <none> kube-proxy-t7555 1/1 Running 0 124m 172.16.13.82 shudoon103 <none> <none> kube-scheduler-shudoon101 1/1 Running 0 125m 172.16.13.80 shudoon101 <none> <none> ``` ```sh $ ip route ``` ```sh default via 172.16.13.254 dev ens192 proto static metric 100 10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 10.244.1.0/24 via 172.16.13.81 dev ens192 proto bird 10.244.2.0/24 via 172.16.13.82 dev ens192 proto bird 10.244.193.128/26 via 172.16.13.82 dev ens192 proto bird 10.244.202.0/26 via 172.16.13.81 dev ens192 proto bird 10.244.236.192 dev calif9ec1619c50 scope link blackhole 10.244.236.192/26 proto bird 10.244.236.193 dev califa55073ed4c scope link 172.16.13.0/24 dev ens192 proto kernel scope link src 172.16.13.80 metric 100 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 ``` #### 三、安装calicoctl ```sh $ wget -O /usr/local/bin/calicoctl https://github.com/projectcalico/calicoctl/releases/download/v3.11.1/calicoctl $ chmod +x /usr/local/bin/calicoctl ``` 创建配置文件 ```sh $ mkdir /etc/calico $ vim /etc/calico/calicoctl.cfg ``` ```yaml apiVersion: projectcalico.org/v3 kind: CalicoAPIConfig metadata: spec: datastoreType: "etcdv3" etcdEndpoints: "https://172.16.13.80:2379" etcdKeyFile: "/etc/kubernetes/pki/etcd/server.key" etcdCertFile: "/etc/kubernetes/pki/etcd/server.crt" etcdCACertFile: "/etc/kubernetes/pki/etcd/ca.crt" ``` ```sh $ calicoctl node status ``` ```sh Calico process is running. 
IPv4 BGP status +--------------+-------------------+-------+----------+-------------+ | PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO | +--------------+-------------------+-------+----------+-------------+ | 172.16.13.81 | node-to-node mesh | up | 14:33:24 | Established | | 172.16.13.82 | node-to-node mesh | up | 14:33:25 | Established | +--------------+-------------------+-------+----------+-------------+ IPv6 BGP status No IPv6 peers found. ``` #### 四、配置Route Reflector模式 ```sh Calico 维护的网络在默认是(Node-to-Node Mesh)全互联模式,Calico集群中的节点之间都会相互建立连接,用于路由交换。但是随着集群规模的扩大,mesh模式将形成一个巨大服务网格,连接数成倍增加。 这时就需要使用 Route Reflector(路由器反射)模式解决这个问题。 确定一个或多个Calico节点充当路由反射器,让其他节点从这个RR节点获取路由信息。 在BGP中可以通过calicoctl node status看到启动是node-to-node mesh网格的形式,这种形式是一个全互联的模式,默认的BGP在k8s的每个节点担任了一个BGP的一个喇叭,一直吆喝着扩散到其他节点,随着集群节点的数量的增加,那么上百台节点就要构建上百台链接,就是全互联的方式,都要来回建立连接来保证网络的互通性,那么增加一个节点就要成倍的增加这种链接保证网络的互通性,这样的话就会使用大量的网络消耗,所以这时就需要使用Route reflector,也就是找几个大的节点,让他们去这个大的节点建立连接,也叫RR,也就是公司的员工没有微信群的时候,找每个人沟通都很麻烦,那么建个群,里面的人都能收到,所以要找节点或着多个节点充当路由反射器,建议至少是2到3个,一个做备用,一个在维护的时候不影响其他的使用。 ``` ##### 4.1 关闭 node-to-node BGP网格 添加default BGP配置,调整nodeToNodeMeshEnabled和asNumber: ```sh $ cat bgp.yaml ``` ```yaml apiVersion: projectcalico.org/v3 kind: BGPConfiguration metadata: name: default spec: logSeverityScreen: Info nodeToNodeMeshEnabled: false #禁用node-to-node mesh asNumber: 64512 #calicoctl get nodes --output=wide 获取 ``` ##### 4.2 查看bgp配置,MESHENABLED为false ```sh $ calicoctl get bgpconfig ``` ```sh NAME LOGSEVERITY MESHENABLED ASNUMBER default Info false 64512 ``` ##### 4.3 配置指定节点充当路由反射器 为方便让BGPPeer轻松选择节点,通过标签选择器匹配,也就是可以去调用k8s里面的标签进行关联,我们可以给哪个节点作为路由发射器打个标签 给路由器反射器节点打标签,我这将shudoon102打上标签 ```sh $ kubectl label node shudoon102 route-reflector=true ``` ##### 4.4 配置路由器反射器节点,配置集群ID ```sh $ calicoctl get node shudoon102 -o yaml > node.yaml ``` ```yml apiVersion: projectcalico.org/v3 kind: Node metadata: annotations: projectcalico.org/kube-labels: '{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"shudoon102","kubernetes.io/os":"linux","route-reflector":"true"}' creationTimestamp: "2021-09-09T03:54:27Z" labels: beta.kubernetes.io/arch: amd64 beta.kubernetes.io/os: linux kubernetes.io/arch: amd64 kubernetes.io/hostname: shudoon102 kubernetes.io/os: linux route-reflector: "true" name: shudoon102 resourceVersion: "27093" uid: 3c1f56e8-4c35-46d7-9f46-ef20984fec41 spec: bgp: ipv4Address: 172.16.13.81/24 routeReflectorClusterID: 244.0.0.1 #集群ID orchRefs: - nodeName: shudoon102 orchestrator: k8s ``` 应用配置 ```sh $ calicoctl apply -f node.yaml ``` ##### 4.5 配置其他节点连接反射器 ```sh $ cat bgp1.yaml ``` ```yml apiVersion: projectcalico.org/v3 kind: BGPPeer metadata: name: peer-with-route-reflectors spec: nodeSelector: all() #所有节点 peerSelector: route-reflector == 'true' ``` ```sh $ calicoctl apply -f bgp1.yaml Successfully applied 1 'BGPPeer' resource(s) ``` ```sh $ calicoctl get bgppeer NAME PEERIP NODE ASN peer-with-route-reflectors all() 0 ``` ```sh $ calicoctl node status ``` ```sh Calico process is running. 
IPv4 BGP status +--------------+---------------+-------+----------+-------------+ | PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO | +--------------+---------------+-------+----------+-------------+ | 172.16.13.81 | node specific | up | 08:50:34 | Established | +--------------+---------------+-------+----------+-------------+ ``` ##### 4.6 添加多个路由反射器 现在进行对路由反射器添加多个,100个节点以内建议2-3个路由反射器 ```sh $ kubectl label node shudoon103 route-reflector=true ``` ```sh $ calicoctl get node shudoon103 -o yaml > node2.yaml ``` ```yml apiVersion: projectcalico.org/v3 kind: Node metadata: annotations: projectcalico.org/kube-labels: '{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"shudoon103","kubernetes.io/os":"linux","route-reflector":"true"}' creationTimestamp: "2021-09-09T03:54:28Z" labels: beta.kubernetes.io/arch: amd64 beta.kubernetes.io/os: linux kubernetes.io/arch: amd64 kubernetes.io/hostname: shudoon103 kubernetes.io/os: linux route-reflector: "true" name: shudoon103 resourceVersion: "29289" uid: da510109-75bf-4e92-9074-409f1de496b9 spec: bgp: ipv4Address: 172.16.13.82/24 routeReflectorClusterID: 244.0.0.1 #添加集群ID orchRefs: - nodeName: shudoon103 orchestrator: k8s ``` 应用配置 ```sh $ calicoctl apply -f node.yaml ``` 查看calico节点状态 ```sh $calicoctl node status ``` ```sh Calico process is running. IPv4 BGP status +--------------+---------------+-------+----------+-------------+ | PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO | +--------------+---------------+-------+----------+-------------+ | 172.16.13.81 | node specific | up | 08:50:34 | Established | | 172.16.13.82 | node specific | up | 08:54:47 | Established | +--------------+---------------+-------+----------+-------------+ ``` #### 问题: `a.如何卸载calico网络?` ```sh $ kubectl delete -f calico-etcd.yaml ``` 参考: https://blog.51cto.com/u_14143894/2463392
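One more check worth doing after moving to route reflectors: confirm that the IP pool really is running pure BGP and that pods on different nodes still reach each other. A small sketch, where `pinger` is a hypothetical test pod name and the target IP is taken from `kubectl get pods -o wide`:

```sh
$ calicoctl get ippool -o wide                      # IPIPMODE should be Never when pure BGP is in use
$ kubectl run pinger --image=busybox --restart=Never -- sleep 3600
$ kubectl get pods -o wide                          # note the pod IPs and the nodes they landed on
$ kubectl exec pinger -- ping -c 3 <pod-ip-on-another-node>
$ ip route | grep bird                              # on any node: pod subnets should still be learned via bird
```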

兜兜    2021-09-06 00:30:47    2021-10-19 14:32:09   

kubernetes vxlan
K8s集群pod信息 ```bash $ kubectl get pod -o wide ``` ```bash NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES busybox-deployment-6576988595-dbpq7 1/1 Running 4 33h 10.244.1.12 k8s-node1 <none> <none> busybox-deployment-6576988595-l5w7r 1/1 Running 4 33h 10.244.2.14 k8s-node2 <none> <none> busybox-deployment-6576988595-wfvn2 1/1 Running 4 33h 10.244.2.13 k8s-node2 <none> <none> ``` #### _**实验 `"pod-10.244.1.12(k8s-node1)"` ping `"pod-10.244.2.14(k8s-node2)"`,跟踪数据包的传输过程。**_ _**1. "10.244.1.12" ping "10.244.2.14" 匹配默认路由0.0.0.0走容器eth0,到达veth pair的另一端veth8001ebf4**_ kubectl连接到pod-10.244.1.12 ping 10.244.2.14 ```sh $ kubectl exec -ti busybox-deployment-6576988595-dbpq7 sh ``` ```html / # ping 10.244.2.14 PING 10.244.2.14 (10.244.2.14): 56 data bytes 64 bytes from 10.244.2.14: seq=0 ttl=62 time=0.828 ms ``` kubectl连接到pod-10.244.1.12查看路由信息 ```sh $ kubectl exec -ti busybox-deployment-6576988595-dbpq7 sh ``` ```html / # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 10.244.1.1 0.0.0.0 UG 0 0 0 eth0 #数据包会匹配这条路由 10.244.0.0 10.244.1.1 255.255.0.0 UG 0 0 0 eth0 10.244.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 ``` 查看k8s-node1的pod-10.244.1.12 eth0对应veth pair另一端为veth8001ebf4 [(如何查看容器对应的veth网卡)](https://ynotes.cn/blog/article_detail/260) k8s-node1抓取veth8001ebf4网卡的数据包 ```sh tcpdump -i veth8001ebf4 ``` ```sh tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on veth8001ebf4, link-type EN10MB (Ethernet), capture size 262144 bytes 00:30:00.124500 IP 10.244.1.12 > 10.244.2.14: ICMP echo request, id 1336, seq 495, length 64 ``` _**2.veth8001ebf4桥接到cni0,数据包发送到cni0**_ ```sh $ tcpdump -i cni0 -e -nnn -vvv ``` ```sh tcpdump: listening on cni0, link-type EN10MB (Ethernet), capture size 262144 bytes 01:32:29.522019 d6:10:b7:91:f0:ac > 0a:58:0a:f4:01:01, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 16442, offset 0, flags [DF], proto ICMP (1), length 84) 10.244.1.12 > 10.244.2.14: ICMP echo request, id 1862, seq 89, length 64 ``` _**3.cni0查看路由表route -n,会路由匹配10.244.2.0-flannel.1**_ ```sh [root@k8s-node1 ~]# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 172.16.100.254 0.0.0.0 UG 100 0 0 ens192 10.244.0.0 10.244.0.0 255.255.255.0 UG 0 0 0 flannel.1 10.244.1.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0 10.244.2.0 10.244.2.0 255.255.255.0 UG 0 0 0 flannel.1 #会匹配到这条路由 172.16.100.0 0.0.0.0 255.255.255.0 U 100 0 0 ens192 172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0 ``` _**4.flannel.1收到cni0的数据帧。a.修改内部数据帧的地址(MAC source:FE:90:C4:18:69:A7[k8s-node1 flannel.1的MAC],MAC dest:B6:72:AE:36:7B:6C[k8s-node2 flannel.1的MAC]),b.封装vxlan头(VNI:1),c.再封装UDP头部(UDP dest port:8472),d.封装节点ip头。**_ ![enter image description here](https://files.ynotes.cn/vxlan.png "enter image title here") a.修改内部数据帧,源MAC地址为k8s-node1 flannel.1的MAC地址。目的MAC地址为10.244.2.0网络的网关10.244.2.0(k8s-node2 flannel.1)的MAC地址 ```sh arp -n|grep flannel.1 ``` ```sh 10.244.2.0 ether b6:72:Ae:36:7b:6c CM flannel.1 #内部网络网关10.244.2.0的MAC地址 10.244.0.0 ether 6e:f8:85:d7:09:17 CM flannel.1 ``` b.封装vxlan头,VNI为vetp设备的vxlan id c.封装UDP头部,dest port为vetp设备的dstport 查看flannel.1 vetp信息 ```sh $ ip -d link show flannel.1 ``` ```sh 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default link/ether fe:90:c4:18:69:a7 brd ff:ff:ff:ff:ff:ff promiscuity 0 vxlan id 1 local 172.16.100.101 dev ens192 srcport 0 0 dstport 8472 nolearning ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 
numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 #vxlan id 为1,vetp UDP dstport 8472 ``` d.封装节点ip头,源ip为k8s-node1的ip,目的ip k8s-node2的ip。目的ip通过查看bridge fdb 对应的vetp MAC获取节点ip ```sh $ bridge fdb|grep flannel.1 ``` ```sh e2:3f:07:99:Cf:6f dev flannel.1 dst 172.16.100.102 self permanent b6:72:Ae:36:7b:6c dev flannel.1 dst 172.16.100.102 self permanent #通过vetp MAC地址找到节点ip:172.16.100.102 6e:f8:85:d7:09:17 dev flannel.1 dst 172.16.100.100 self permanent ``` 查看ens192的数据包(vetp已封包完成),分析数据包内容 ```sh tcpdump -l -nnnvvveXX -i ens192 'port 8472 and udp[8:2] = 0x0800 & 0x0800' ``` ```sh tcpdump: listening on ens192, link-type EN10MB (Ethernet), capture size 262144 bytes 02:09:45.801867 00:50:56:93:6a:3a > 00:50:56:93:63:3b, ethertype IPv4 (0x0800), length 148: (tos 0x0, ttl 64, id 30086, offset 0, flags [none], proto UDP (17), length 134) 172.16.100.101.40592 > 172.16.100.102.8472: [no cksum] OTV, flags [I] (0x08), overlay 0, instance 1 fe:90:c4:18:69:a7 > b6:72:Ae:36:7b:6c, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 8965, offset 0, flags [DF], proto ICMP (1), length 84) 10.244.1.12 > 10.244.2.14: ICMP echo request, id 3143, seq 1102, length 64 0x0000: 0050 5693 633b 0050 5693 6a3a 0800 4500 #帧头:0050 5693 633b^DEST_MAC,0050 5693 6a3a^SRC_MAC,0800^IPV4。IP头:4^ipv4,5^四字节个数,00^TOS 0x0010: 0086 7586 0000 4011 e3f4 ac10 6465 ac10 #0086^总长度,7586^标识符,0000^偏移量,40^生存周期,11^上层协议,e3f4^校验和,ac10 6465^SRC_IP, 0x0020: 6466 9e90 2118 0072 0000 0800 0000 0000 #ac10 6466^DST_IP。UDP头:9e90^SRC_PORT,2118^DST_PORT,0072^UDP长度,0000^校验和。VXLAN头:08^标志位,00 0000^保留字段 0x0030: 0100 b672 ae36 7b6c fe90 c418 69a7 0800 #0000 01^VNID,00^保留字段。内部帧头:b672 ae36 7b6c^DST_MAC,fe90 c418 69a7^SRC_MAC,0800^IPV4。 0x0040: 4500 0054 2305 4000 3f01 ffa2 0af4 010c #内部IP头:IP头:4^ipv4,5^四字节个,00^TOS,#0054^总长度,2305^标识符,4000^偏移量,3f^生存周期,01^上层协议,ffa2^校验和,0af4 010c^SRC_IP 0x0050: 0af4 020e 0800 847d 0c47 044e 0d1e 55cf #0af4 020e^DST_IP。ICMP协议:08^请求报文,00^代码,内部数据帧:847d^校验和,0c47 044e^内部CFS,0d1e 55cf^外部CFS。 0x0060: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 0x0070: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 0x0080: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 0x0090: 0000 0000 .... ```
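The same mapping that the capture above demonstrates can also be read straight from the kernel tables. The sketch below is not part of the original walkthrough; it only reuses the destination pod IP `10.244.2.14` from this experiment, everything else is standard iproute2. Run on k8s-node1, it follows the flannel.1 forwarding chain step by step: route lookup → ARP entry for the remote subnet gateway (the peer VTEP MAC) → FDB entry that maps that MAC to the remote node IP used as the outer destination of the VXLAN/UDP packet.

```sh
# Follow the VXLAN forwarding chain for a destination pod IP (run on k8s-node1).
DST=10.244.2.14

ip route get "$DST"                                   # expect: via 10.244.2.0 dev flannel.1
GW=$(ip route get "$DST" | awk '/via/ {print $3}')    # 10.244.2.0 = remote flannel.1 gateway

ip neigh show dev flannel.1                           # gateway IP -> remote VTEP MAC
MAC=$(ip neigh show dev flannel.1 | awk -v gw="$GW" '$1 == gw {print $3}')

bridge fdb show dev flannel.1 | grep -i "$MAC"        # VTEP MAC -> remote node IP (outer dst, here 172.16.100.102)
```

These ARP and FDB entries show up as PERMANENT because flanneld programs them from the Kubernetes node list instead of letting the kernel learn them from traffic, which is also why `nolearning` appears on the flannel.1 link above.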

兜兜    2021-09-03 11:27:17    2021-10-27 15:13:57   

k8s
#### 软件版本 ```bash docker 19.03 kubernets 1.20.0 flannel ``` ### 准备工作 #### 更新系统包 ```bash $ yum update -y ``` #### 配置hostname ```bash 172.16.13.80 k8s-master 172.16.13.81 k8s-node1 172.16.13.82 k8s-node2 ``` #### 关闭swap ```bash $ swapoff -a $ cat /etc/fstab #修改去掉swap ``` ```sh ... #/dev/mapper/centos-swap swap swap defaults 0 0 ``` #### 关闭selinux ```bash $ setenforce 0 $ cat /etc/selinux/config #关闭selinux ``` ```sh ... SELINUX=disabled ... ``` #### 配置iptables ```sh $ systemctl stop firewalld&&systemctl disable firewalld $ systemctl stop iptables&&systemctl disable iptables $ cat <<EOF > /etc/sysctl.d/k8s.conf net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 EOF $ sysctl --system ``` #### 开启时间同步 ##### 安装chrony: ```sh $ yum install -y chrony ``` ##### 注释默认ntp服务器 ```sh $ sed -i 's/^server/#&/' /etc/chrony.conf ``` ##### 指定上游公共 ntp 服务器,并允许其他节点同步时间 ```sh $ cat >> /etc/chrony.conf << EOF server 0.asia.pool.ntp.org iburst server 1.asia.pool.ntp.org iburst server 2.asia.pool.ntp.org iburst server 3.asia.pool.ntp.org iburst allow all EOF ``` ##### 重启chronyd服务并设为开机启动: ```sh $ systemctl enable chronyd && systemctl restart chronyd ``` ##### 开启网络时间同步功能 ```sh $ timedatectl set-ntp true ``` ### 安装docker 安装docker相关包 ```bash $ yum install -y yum-utils $ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo ``` 查看docker包版本 ```bash $ yum list docker-ce --showduplicates ``` ```sh Installed Packages docker-ce.x86_64 3:19.03.0-3.el7 @docker-ce-stable Available Packages docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable docker-ce.x86_64 3:18.09.9-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.0-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.1-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.2-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.3-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.4-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.5-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.6-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.7-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.8-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.9-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.10-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.11-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.12-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.13-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.14-3.el7 docker-ce-stable docker-ce.x86_64 3:19.03.15-3.el7 docker-ce-stable docker-ce.x86_64 3:20.10.0-3.el7 docker-ce-stable docker-ce.x86_64 3:20.10.1-3.el7 docker-ce-stable docker-ce.x86_64 3:20.10.2-3.el7 docker-ce-stable docker-ce.x86_64 3:20.10.3-3.el7 docker-ce-stable docker-ce.x86_64 3:20.10.4-3.el7 docker-ce-stable docker-ce.x86_64 3:20.10.5-3.el7 docker-ce-stable docker-ce.x86_64 3:20.10.6-3.el7 docker-ce-stable docker-ce.x86_64 3:20.10.7-3.el7 docker-ce-stable ``` 安装 ```bash $ yum install -y docker-ce-19.03.0 docker-ce-cli-19.03.0 containerd.io ``` 修改Cgroup Driver为systemd ```bash $ cat /usr/lib/systemd/system/docker.service ... 
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd #添加 --exec-opt native.cgroupdriver=systemd ... ``` ```sh $ systemctl daemon-reload $ docker info|grep Cgroup Cgroup Driver: systemd ``` 启动docker ```bash $ systemctl start docker $ systemctl enable docker ``` 查看docker版本 ```bash $ docker version ``` ```sh Client: Docker Engine - Community Version: 19.03.0 API version: 1.40 Go version: go1.12.5 Git commit: aeac9490dc Built: Wed Jul 17 18:15:40 2019 OS/Arch: linux/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 19.03.0 API version: 1.40 (minimum version 1.12) Go version: go1.12.5 Git commit: aeac9490dc Built: Wed Jul 17 18:14:16 2019 OS/Arch: linux/amd64 Experimental: false containerd: Version: 1.4.9 GitCommit: e25210fe30a0a703442421b0f60afac609f950a3 runc: Version: 1.0.1 GitCommit: v1.0.1-0-g4144b63 docker-init: Version: 0.18.0 GitCommit: fec3683 ``` ### 安装Kubernetes ```bash cat <<EOF > /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=1 repo_gpgcheck=1 gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg EOF ``` ```bash $ yum check-update $ yum list kubelet --showduplicates ``` ```sh Installed Packages kubelet.x86_64 1.20.0-0 @kubernetes Available Packages kubelet.x86_64 1.18.0-0 kubernetes kubelet.x86_64 1.18.1-0 kubernetes kubelet.x86_64 1.18.2-0 kubernetes kubelet.x86_64 1.18.3-0 kubernetes kubelet.x86_64 1.18.4-0 kubernetes kubelet.x86_64 1.18.4-1 kubernetes kubelet.x86_64 1.18.5-0 kubernetes kubelet.x86_64 1.18.6-0 kubernetes kubelet.x86_64 1.18.8-0 kubernetes kubelet.x86_64 1.18.9-0 kubernetes kubelet.x86_64 1.18.10-0 kubernetes kubelet.x86_64 1.18.12-0 kubernetes kubelet.x86_64 1.18.13-0 kubernetes kubelet.x86_64 1.18.14-0 kubernetes kubelet.x86_64 1.18.15-0 kubernetes kubelet.x86_64 1.18.16-0 kubernetes kubelet.x86_64 1.18.17-0 kubernetes kubelet.x86_64 1.18.18-0 kubernetes kubelet.x86_64 1.18.19-0 kubernetes kubelet.x86_64 1.18.20-0 kubernetes kubelet.x86_64 1.19.0-0 kubernetes kubelet.x86_64 1.19.1-0 kubernetes kubelet.x86_64 1.19.2-0 kubernetes kubelet.x86_64 1.19.3-0 kubernetes kubelet.x86_64 1.19.4-0 kubernetes kubelet.x86_64 1.19.5-0 kubernetes kubelet.x86_64 1.19.6-0 kubernetes kubelet.x86_64 1.19.7-0 kubernetes kubelet.x86_64 1.19.8-0 kubernetes kubelet.x86_64 1.19.9-0 kubernetes kubelet.x86_64 1.19.10-0 kubernetes kubelet.x86_64 1.19.11-0 kubernetes kubelet.x86_64 1.19.12-0 kubernetes kubelet.x86_64 1.19.13-0 kubernetes kubelet.x86_64 1.19.14-0 kubernetes kubelet.x86_64 1.20.0-0 kubernetes kubelet.x86_64 1.20.1-0 kubernetes kubelet.x86_64 1.20.2-0 kubernetes kubelet.x86_64 1.20.4-0 kubernetes kubelet.x86_64 1.20.5-0 kubernetes kubelet.x86_64 1.20.6-0 kubernetes kubelet.x86_64 1.20.7-0 kubernetes ``` ```sh $ yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0 ``` ```bash $ systemctl enable kubelet ``` 初始化K8S(k8s-master节点) ```bash $ kubeadm init \ --kubernetes-version=v1.20.10 \ --pod-network-cidr=10.244.0.0/16 \ --image-repository registry.aliyuncs.com/google_containers \ --apiserver-advertise-address 172.16.13.80 \ --v=6 ``` ```sh I0904 10:39:55.512878 18003 initconfiguration.go:104] detected and using CRI socket: /var/run/dockershim.sock [init] Using Kubernetes version: v1.20.10 [preflight] Running pre-flight checks I0904 10:39:55.609411 18003 checks.go:577] validating Kubernetes and 
kubeadm version I0904 10:39:55.609436 18003 checks.go:166] validating if the firewall is enabled and active I0904 10:39:55.615977 18003 checks.go:201] validating availability of port 6443 I0904 10:39:55.616145 18003 checks.go:201] validating availability of port 10259 I0904 10:39:55.616175 18003 checks.go:201] validating availability of port 10257 I0904 10:39:55.616202 18003 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml I0904 10:39:55.616218 18003 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml I0904 10:39:55.616225 18003 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml I0904 10:39:55.616231 18003 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml I0904 10:39:55.616243 18003 checks.go:432] validating if the connectivity type is via proxy or direct I0904 10:39:55.616278 18003 checks.go:471] validating http connectivity to first IP address in the CIDR I0904 10:39:55.616300 18003 checks.go:471] validating http connectivity to first IP address in the CIDR I0904 10:39:55.616311 18003 checks.go:102] validating the container runtime I0904 10:39:55.710933 18003 checks.go:128] validating if the "docker" service is enabled and active I0904 10:39:55.812851 18003 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables I0904 10:39:55.812907 18003 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward I0904 10:39:55.812930 18003 checks.go:649] validating whether swap is enabled or not I0904 10:39:55.812975 18003 checks.go:376] validating the presence of executable conntrack I0904 10:39:55.812999 18003 checks.go:376] validating the presence of executable ip I0904 10:39:55.813017 18003 checks.go:376] validating the presence of executable iptables I0904 10:39:55.813037 18003 checks.go:376] validating the presence of executable mount I0904 10:39:55.813051 18003 checks.go:376] validating the presence of executable nsenter I0904 10:39:55.813063 18003 checks.go:376] validating the presence of executable ebtables I0904 10:39:55.813074 18003 checks.go:376] validating the presence of executable ethtool I0904 10:39:55.813085 18003 checks.go:376] validating the presence of executable socat I0904 10:39:55.813099 18003 checks.go:376] validating the presence of executable tc I0904 10:39:55.813109 18003 checks.go:376] validating the presence of executable touch I0904 10:39:55.813123 18003 checks.go:520] running all checks I0904 10:39:55.915575 18003 checks.go:406] checking whether the given node name is reachable using net.LookupHost I0904 10:39:55.915792 18003 checks.go:618] validating kubelet version I0904 10:39:55.985451 18003 checks.go:128] validating if the "kubelet" service is enabled and active I0904 10:39:55.994819 18003 checks.go:201] validating availability of port 10250 I0904 10:39:55.994889 18003 checks.go:201] validating availability of port 2379 I0904 10:39:55.994913 18003 checks.go:201] validating availability of port 2380 I0904 10:39:55.994936 18003 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I0904 10:39:56.043119 18003 checks.go:839] image exists: 
registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.10 I0904 10:39:56.095120 18003 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.10 I0904 10:39:56.159069 18003 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.10 I0904 10:39:56.212530 18003 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-proxy:v1.20.10 I0904 10:39:56.265125 18003 checks.go:839] image exists: registry.aliyuncs.com/google_containers/pause:3.2 I0904 10:39:56.320004 18003 checks.go:839] image exists: registry.aliyuncs.com/google_containers/etcd:3.4.13-0 I0904 10:39:56.371299 18003 checks.go:839] image exists: registry.aliyuncs.com/google_containers/coredns:1.7.0 [certs] Using certificateDir folder "/etc/kubernetes/pki" I0904 10:39:56.371382 18003 certs.go:110] creating a new certificate authority for ca [certs] Generating "ca" certificate and key I0904 10:39:56.729903 18003 certs.go:474] validating certificate period for ca certificate [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local shudoon101] and IPs [10.96.0.1 172.16.13.80] [certs] Generating "apiserver-kubelet-client" certificate and key I0904 10:39:57.334553 18003 certs.go:110] creating a new certificate authority for front-proxy-ca [certs] Generating "front-proxy-ca" certificate and key I0904 10:39:57.486574 18003 certs.go:474] validating certificate period for front-proxy-ca certificate [certs] Generating "front-proxy-client" certificate and key I0904 10:39:57.694560 18003 certs.go:110] creating a new certificate authority for etcd-ca [certs] Generating "etcd/ca" certificate and key I0904 10:39:57.821367 18003 certs.go:474] validating certificate period for etcd/ca certificate [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [localhost shudoon101] and IPs [172.16.13.80 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [localhost shudoon101] and IPs [172.16.13.80 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key I0904 10:39:58.861298 18003 certs.go:76] creating new public/private key files for signing service account users [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" I0904 10:39:59.035771 18003 kubeconfig.go:101] creating kubeconfig file for admin.conf [kubeconfig] Writing "admin.conf" kubeconfig file I0904 10:39:59.330053 18003 kubeconfig.go:101] creating kubeconfig file for kubelet.conf [kubeconfig] Writing "kubelet.conf" kubeconfig file I0904 10:39:59.481405 18003 kubeconfig.go:101] creating kubeconfig file for controller-manager.conf [kubeconfig] Writing "controller-manager.conf" kubeconfig file I0904 10:39:59.645125 18003 kubeconfig.go:101] creating kubeconfig file for scheduler.conf [kubeconfig] Writing "scheduler.conf" kubeconfig file I0904 10:40:00.334922 18003 kubelet.go:63] Stopping the kubelet [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod 
manifest for "kube-apiserver" I0904 10:40:00.420357 18003 manifests.go:96] [control-plane] getting StaticPodSpecs I0904 10:40:00.420779 18003 certs.go:474] validating certificate period for CA certificate I0904 10:40:00.420895 18003 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver" I0904 10:40:00.420908 18003 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-apiserver" I0904 10:40:00.420916 18003 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver" I0904 10:40:00.428795 18003 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml" [control-plane] Creating static Pod manifest for "kube-controller-manager" I0904 10:40:00.428822 18003 manifests.go:96] [control-plane] getting StaticPodSpecs I0904 10:40:00.429308 18003 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager" I0904 10:40:00.429323 18003 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-controller-manager" I0904 10:40:00.429331 18003 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager" I0904 10:40:00.429337 18003 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager" I0904 10:40:00.429341 18003 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager" I0904 10:40:00.431212 18003 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [control-plane] Creating static Pod manifest for "kube-scheduler" I0904 10:40:00.431233 18003 manifests.go:96] [control-plane] getting StaticPodSpecs I0904 10:40:00.431917 18003 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler" I0904 10:40:00.432442 18003 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" I0904 10:40:00.433542 18003 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml" I0904 10:40:00.433568 18003 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy I0904 10:40:00.435037 18003 loader.go:379] Config loaded from file: /etc/kubernetes/admin.conf [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s I0904 10:40:00.436681 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:00.938722 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 1 milliseconds I0904 10:40:01.437273 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:01.937162 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:02.437215 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:02.937090 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:03.437168 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:03.937151 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:04.437369 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:04.937187 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:05.437078 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:05.937120 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:06.437218 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:06.937134 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:07.437199 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:07.937158 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:08.437692 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds I0904 10:40:12.453695 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s 500 Internal Server Error in 3516 milliseconds I0904 10:40:12.938805 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds I0904 10:40:13.438240 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds I0904 10:40:13.938725 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s 200 OK in 1 milliseconds [apiclient] All control plane components are healthy after 13.502539 seconds I0904 10:40:13.938847 18003 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace I0904 10:40:13.943583 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 2 milliseconds I0904 10:40:13.946914 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 2 milliseconds I0904 10:40:13.949757 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 2 milliseconds I0904 10:40:13.950284 18003 uploadconfig.go:122] [upload-config] Uploading the kubelet 
component config to a ConfigMap [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster I0904 10:40:13.952552 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 1 milliseconds I0904 10:40:13.954630 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 1 milliseconds I0904 10:40:13.956733 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 1 milliseconds I0904 10:40:13.956848 18003 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node I0904 10:40:13.956861 18003 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "shudoon101" as an annotation I0904 10:40:14.460485 18003 round_trippers.go:445] GET https://172.16.13.80:6443/api/v1/nodes/shudoon101?timeout=10s 200 OK in 3 milliseconds I0904 10:40:14.467558 18003 round_trippers.go:445] PATCH https://172.16.13.80:6443/api/v1/nodes/shudoon101?timeout=10s 200 OK in 4 milliseconds [upload-certs] Skipping phase. Please see --upload-certs [mark-control-plane] Marking the node shudoon101 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)" [mark-control-plane] Marking the node shudoon101 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] I0904 10:40:14.969784 18003 round_trippers.go:445] GET https://172.16.13.80:6443/api/v1/nodes/shudoon101?timeout=10s 200 OK in 1 milliseconds I0904 10:40:14.976503 18003 round_trippers.go:445] PATCH https://172.16.13.80:6443/api/v1/nodes/shudoon101?timeout=10s 200 OK in 4 milliseconds [bootstrap-token] Using token: vqlfov.pkv1r7fsucnvijix [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles I0904 10:40:14.979889 18003 round_trippers.go:445] GET https://172.16.13.80:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-vqlfov?timeout=10s 404 Not Found in 2 milliseconds I0904 10:40:14.983266 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/secrets?timeout=10s 201 Created in 2 milliseconds [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes I0904 10:40:14.986413 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 2 milliseconds I0904 10:40:14.989537 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 milliseconds [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials I0904 10:40:14.991819 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 milliseconds [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token I0904 10:40:14.993446 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 milliseconds [bootstrap-token] configured RBAC rules to 
allow certificate rotation for all node client certificates in the cluster I0904 10:40:14.995203 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 milliseconds [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace I0904 10:40:14.995311 18003 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig I0904 10:40:14.995908 18003 loader.go:379] Config loaded from file: /etc/kubernetes/admin.conf I0904 10:40:14.995924 18003 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig I0904 10:40:14.996407 18003 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace I0904 10:40:14.999040 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-public/configmaps?timeout=10s 201 Created in 2 milliseconds I0904 10:40:14.999190 18003 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace I0904 10:40:15.001641 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s 201 Created in 2 milliseconds I0904 10:40:15.003854 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s 201 Created in 1 milliseconds I0904 10:40:15.004058 18003 kubeletfinalize.go:88] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem" [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key I0904 10:40:15.004638 18003 loader.go:379] Config loaded from file: /etc/kubernetes/kubelet.conf I0904 10:40:15.005181 18003 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation I0904 10:40:15.086465 18003 round_trippers.go:445] GET https://172.16.13.80:6443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns 200 OK in 5 milliseconds I0904 10:40:15.092852 18003 round_trippers.go:445] GET https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps/kube-dns?timeout=10s 404 Not Found in 1 milliseconds I0904 10:40:15.094538 18003 round_trippers.go:445] GET https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps/coredns?timeout=10s 404 Not Found in 1 milliseconds I0904 10:40:15.097004 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 2 milliseconds I0904 10:40:15.099782 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 2 milliseconds I0904 10:40:15.104903 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 4 milliseconds I0904 10:40:15.109540 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 3 milliseconds I0904 10:40:15.132165 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/apps/v1/namespaces/kube-system/deployments?timeout=10s 201 Created in 11 milliseconds I0904 10:40:15.143679 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/services?timeout=10s 201 Created in 9 milliseconds 
[addons] Applied essential addon: CoreDNS I0904 10:40:15.170722 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 2 milliseconds I0904 10:40:15.368235 18003 request.go:591] Throttling request took 195.771994ms, request: POST:https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s I0904 10:40:15.371885 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 3 milliseconds I0904 10:40:15.389809 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/apps/v1/namespaces/kube-system/daemonsets?timeout=10s 201 Created in 10 milliseconds I0904 10:40:15.392633 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 milliseconds I0904 10:40:15.395548 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 2 milliseconds I0904 10:40:15.398242 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 2 milliseconds [addons] Applied essential addon: kube-proxy I0904 10:40:15.399040 18003 loader.go:379] Config loaded from file: /etc/kubernetes/admin.conf I0904 10:40:15.399615 18003 loader.go:379] Config loaded from file: /etc/kubernetes/admin.conf Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Alternatively, if you are the root user, you can run: export KUBECONFIG=/etc/kubernetes/admin.conf You should now deploy a pod network to the cluster. 
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ Then you can join any number of worker nodes by running the following on each as root: kubeadm join 172.16.13.80:6443 --token vqlfov.pkv1r7fsucnvijix \ --discovery-token-ca-cert-hash sha256:93f10f90ee14d64eaa3f5d6f7086673a7264ac9d00674853d39bf34fce4a5622 ``` 查看节点 ```bash $ kubectl get nodes NAME STATUS ROLES AGE VERSION shudoon101 NotReady control-plane,master 12m v1.20.0 ``` ### 安装Flannel(所有节点) #### flannel通信原理图 ![enter image description here](https://files.ynotes.cn/flannel.png "enter image title here") 下载kube-flannel.yml ```bash $ wget https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml $ cat kube-flannel.yml ``` ```yaml --- apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: psp.flannel.unprivileged annotations: seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default spec: privileged: false volumes: - configMap - secret - emptyDir - hostPath allowedHostPaths: - pathPrefix: "/etc/cni/net.d" - pathPrefix: "/etc/kube-flannel" - pathPrefix: "/run/flannel" readOnlyRootFilesystem: false # Users and groups runAsUser: rule: RunAsAny supplementalGroups: rule: RunAsAny fsGroup: rule: RunAsAny # Privilege Escalation allowPrivilegeEscalation: false defaultAllowPrivilegeEscalation: false # Capabilities allowedCapabilities: ['NET_ADMIN', 'NET_RAW'] defaultAddCapabilities: [] requiredDropCapabilities: [] # Host namespaces hostPID: false hostIPC: false hostNetwork: true hostPorts: - min: 0 max: 65535 # SELinux seLinux: # SELinux is unused in CaaSP rule: 'RunAsAny' --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: flannel rules: - apiGroups: ['extensions'] resources: ['podsecuritypolicies'] verbs: ['use'] resourceNames: ['psp.flannel.unprivileged'] - apiGroups: - "" resources: - pods verbs: - get - apiGroups: - "" resources: - nodes verbs: - list - watch - apiGroups: - "" resources: - nodes/status verbs: - patch --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: flannel roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: flannel subjects: - kind: ServiceAccount name: flannel namespace: kube-system --- apiVersion: v1 kind: ServiceAccount metadata: name: flannel namespace: kube-system --- kind: ConfigMap apiVersion: v1 metadata: name: kube-flannel-cfg namespace: kube-system labels: tier: node app: flannel data: cni-conf.json: | { "name": "cbr0", "cniVersion": "0.3.1", "plugins": [ { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } }, { "type": "portmap", "capabilities": { "portMappings": true } } ] } net-conf.json: | { "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } } --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/os operator: In values: - linux hostNetwork: true priorityClassName: system-node-critical tolerations: - operator: Exists effect: NoSchedule 
serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.14.0 command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.14.0 command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN", "NET_RAW"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg ``` 替换镜像地址 ```bash $ sed -i 's#quay.io/coreos/flannel#quay.mirrors.ustc.edu.cn/coreos/flannel#' kube-flannel.yml ``` ```bash $ kubectl apply -f kube-flannel.yml ``` #### 加入其他节点 ##### 加入其他节点 ```bash $ kubeadm join 172.16.13.80:6443 --token vqlfov.pkv1r7fsucnvijix \ --discovery-token-ca-cert-hash sha256:93f10f90ee14d64eaa3f5d6f7086673a7264ac9d00674853d39bf34fce4a5622 ``` ##### 查看集群节点信息 ```sh $ kubectl get node -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME shudoon101 Ready control-plane,master 66m v1.20.0 172.16.13.80 <none> CentOS Linux 7 (Core) 3.10.0-1160.41.1.el7.x86_64 docker://19.3.0 shudoon102 Ready <none> 7m51s v1.20.0 172.16.13.81 <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://19.3.0 shudoon103 NotReady <none> 4m33s v1.20.0 172.16.13.82 <none> CentOS Linux 7 (Core) 3.10.0-1160.41.1.el7.x86_64 docker://19.3.0 ``` #### 问题 ##### 1.Flannel三种模式区别? ```sh udp模式:flanneld进程udp封装上层的数据包,用户空间到内核空间上下文切换,性能最差,已放弃。 vxlan模式:vetp添加vxlan头封装在udp包中实现虚拟局域网,通过隧道通信,性能较好,支持100node左右。 host-gw模式:节点添加容器子网路由,性能最好,支持130node左右。 ``` ##### 2.修改Flannel VxLAN为Direct routing模式? `VxLAN为Direct routing模式,节点同网段使用host-gw模式,不同网段使用vxlan模式` ```sh $ vim kube-flannel.yml ``` ```yaml ...... net-conf.json: | { "Network": "10.244.0.0/16", #默认网段 "Backend": { "Type": "vxlan", "Directrouting": true #增加 } } ...... 
``` ```sh $ kubectl apply -f kube-flannel.yml ``` ```sh clusterrole.rbac.authorization.k8s.io/flannel configured clusterrolebinding.rbac.authorization.k8s.io/flannel configured serviceaccount/flannel unchanged configmap/kube-flannel-cfg configured daemonset.extensions/kube-flannel-ds-amd64 created daemonset.extensions/kube-flannel-ds-arm64 created daemonset.extensions/kube-flannel-ds-arm created daemonset.extensions/kube-flannel-ds-ppc64le created daemonset.extensions/kube-flannel-ds-s390x created ``` ```sh $ route -n ``` ```sh Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 172.16.13.254 0.0.0.0 UG 100 0 0 ens192 10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0 10.244.1.0 172.16.13.81 255.255.255.0 UG 0 0 0 ens192 #同网段host-gw,直接路由 10.244.2.0 172.16.13.82 255.255.255.0 UG 0 0 0 ens192 #同网段host-gw,直接路由 10.244.4.0 10.244.4.0 255.255.255.0 UG 0 0 0 flannel.1 #不同网段vxlan 172.16.13.0 0.0.0.0 255.255.255.0 U 100 0 0 ens192 172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0 ``` ##### 3.卸载flannel网络配置 在master节点删除flannel ```sh $ kubectl delete -f kube-flannel.yml ``` 在node节点清理flannel网络留下的文件 ```sh $ ifconfig flannel.1 down $ ip link delete flannel.1 $ rm -rf /var/lib/cni/ $ rm -f /etc/cni/net.d/* $ systemctl restart kubelet ``` ##### 4.节点退出k8s集群? k8s master节点操作 ```sh $ kubectl delete node node-01 ``` k8s 需要退出集群的节点操作 ```sh $ systemctl stop kubelet $ rm -rf /etc/kubernetes/* $ kubeadm reset $ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X ``` ##### 5.加入进集群的token过期,如何创建新token? `方法一`: ```sh $ kubeadm token create --ttl 0 --print-join-command ``` `方法二`: ```sh $ kubeadm token create #重新生成token ``` ```sh $ kubeadm token list | awk -F" " '{print $1}' |tail -n 1 #列出token ``` ```sh hucls9.zea52rjxsmt0ze0b ``` ```sh $ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^ .* //' #获取hash值 ``` ```sh (stdin)= 93f10f90ee14d64eaa3f5d6f7086673a7264ac9d00674853d39bf34fce4a5622 ``` ```sh $ kubeadm join 172.16.13.80:6443 --token hucls9.zea52rjxsmt0ze0b --discovery-token-ca-cert-hash sha256:93f10f90ee14d64eaa3f5d6f7086673a7264ac9d00674853d39bf34fce4a5622 #从节点加入集群 ```
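Once the flannel DaemonSet is running on every node, a quick cross-node ping between two throwaway pods confirms the overlay before any real workload is deployed. This is a minimal sketch, not part of the original notes: the pod names and busybox tag are arbitrary, and the check is only meaningful when the two pods are scheduled on different nodes (verify the NODE column and recreate one of them if they land on the same node).

```sh
# Minimal cross-node pod connectivity check (illustrative names/image).
kubectl run net-a --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl run net-b --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl get pod net-a net-b -o wide                      # make sure the NODE column differs

IP_B=$(kubectl get pod net-b -o jsonpath='{.status.podIP}')
kubectl exec net-a -- ping -c 3 "$IP_B"                  # replies => the flannel overlay works

kubectl delete pod net-a net-b                           # clean up
```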
