#### Software versions
```bash
docker 19.03
kubernetes 1.20.0
flannel
```
### Preparation
#### Update system packages
```bash
$ yum update -y
```
#### Configure hostnames
Add the following entries to /etc/hosts on every node, and set each node's hostname to match:
```bash
172.16.13.80 k8s-master
172.16.13.81 k8s-node1
172.16.13.82 k8s-node2
```
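The hostname itself still has to be set on each machine; for example with hostnamectl, run the matching command on the corresponding node:
```bash
$ hostnamectl set-hostname k8s-master   # on 172.16.13.80
$ hostnamectl set-hostname k8s-node1    # on 172.16.13.81
$ hostnamectl set-hostname k8s-node2    # on 172.16.13.82
```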
#### Disable swap
```bash
$ swapoff -a
$ cat /etc/fstab # edit the file and comment out the swap entry
```
```sh
...
#/dev/mapper/centos-swap swap swap defaults 0 0
```
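If you would rather not edit /etc/fstab by hand, a one-line sed achieves the same result; it comments out every line that mentions swap, so check the file afterwards:
```bash
$ sed -ri 's/.*swap.*/#&/' /etc/fstab
```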
#### Disable SELinux
```bash
$ setenforce 0
$ cat /etc/selinux/config # set SELINUX=disabled to persist the change
```
```sh
...
SELINUX=disabled
...
```
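`setenforce 0` only lasts until the next reboot; the persistent setting shown above can also be applied with sed:
```bash
$ sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```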
#### Disable the firewall and set the bridge iptables parameters
```sh
$ systemctl stop firewalld && systemctl disable firewalld
$ systemctl stop iptables && systemctl disable iptables
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
```
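On a minimal CentOS 7 installation the net.bridge.* sysctl keys only exist once the br_netfilter kernel module has been loaded, so it is safest to load it (and persist it across reboots) before running `sysctl --system`:
```sh
$ modprobe br_netfilter
$ cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
```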
#### Enable time synchronization
##### Install chrony:
```sh
$ yum install -y chrony
```
##### Comment out the default NTP servers
```sh
$ sed -i 's/^server/#&/' /etc/chrony.conf
```
##### Specify an upstream public NTP server and allow other nodes to sync time from this host
```sh
$ cat >> /etc/chrony.conf << EOF
server ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow all
local stratum 10
logdir /var/log/chrony
EOF
```
##### Restart chronyd and enable it at boot:
```sh
$ systemctl enable chronyd && systemctl restart chronyd
```
##### List the time sources
```bash
$ chronyc sources -v
```
##### Check the status of the time sources
```bash
$ chronyc sourcestats -v
```
##### Force an immediate time synchronization
```sh
$ chronyc -a 'burst 4/4'
$ chronyc -a makestep
```
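To confirm that the node actually synchronized, the current reference source and offset can be checked as well:
```bash
$ chronyc tracking
```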
### Install Docker
Install yum-utils and add the Docker repository
```bash
$ yum install -y yum-utils
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
```
List the available docker-ce versions
```bash
$ yum list docker-ce --showduplicates
```
```sh
Installed Packages
docker-ce.x86_64 3:19.03.0-3.el7 @docker-ce-stable
Available Packages
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.10-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.11-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.12-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.13-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.14-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.15-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.7-3.el7 docker-ce-stable
```
Install docker-ce 19.03.0
```bash
$ yum install -y docker-ce-19.03.0 docker-ce-cli-19.03.0 containerd.io
```
Change the cgroup driver to systemd by appending `--exec-opt native.cgroupdriver=systemd` to the `ExecStart` line of the Docker unit file
```bash
$ cat /usr/lib/systemd/system/docker.service
...
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd # --exec-opt native.cgroupdriver=systemd appended
...
```
```sh
$ systemctl daemon-reload && systemctl restart docker
$ docker info | grep Cgroup
Cgroup Driver: systemd
```
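Editing the unit file works, but an equivalent and upgrade-safe alternative is to set the cgroup driver in /etc/docker/daemon.json instead. A minimal sketch (merge it with any daemon.json options you already use):
```bash
$ cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
$ systemctl daemon-reload && systemctl restart docker
```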
Start Docker
```bash
$ systemctl start docker
$ systemctl enable docker
```
Check the Docker version
```bash
$ docker version
```
```sh
Client: Docker Engine - Community
 Version:           19.03.0
 API version:       1.40
 Go version:        go1.12.5
 Git commit:        aeac9490dc
 Built:             Wed Jul 17 18:15:40 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.5
  Git commit:       aeac9490dc
  Built:            Wed Jul 17 18:14:16 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
```
### Install Kubernetes
Configure the Kubernetes yum repository (Aliyun mirror):
```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
```bash
$ yum check-update
$ yum list kubelet --showduplicates
```
```sh
Installed Packages
kubelet.x86_64 1.20.0-0 @kubernetes
Available Packages
kubelet.x86_64 1.18.0-0 kubernetes
kubelet.x86_64 1.18.1-0 kubernetes
kubelet.x86_64 1.18.2-0 kubernetes
kubelet.x86_64 1.18.3-0 kubernetes
kubelet.x86_64 1.18.4-0 kubernetes
kubelet.x86_64 1.18.4-1 kubernetes
kubelet.x86_64 1.18.5-0 kubernetes
kubelet.x86_64 1.18.6-0 kubernetes
kubelet.x86_64 1.18.8-0 kubernetes
kubelet.x86_64 1.18.9-0 kubernetes
kubelet.x86_64 1.18.10-0 kubernetes
kubelet.x86_64 1.18.12-0 kubernetes
kubelet.x86_64 1.18.13-0 kubernetes
kubelet.x86_64 1.18.14-0 kubernetes
kubelet.x86_64 1.18.15-0 kubernetes
kubelet.x86_64 1.18.16-0 kubernetes
kubelet.x86_64 1.18.17-0 kubernetes
kubelet.x86_64 1.18.18-0 kubernetes
kubelet.x86_64 1.18.19-0 kubernetes
kubelet.x86_64 1.18.20-0 kubernetes
kubelet.x86_64 1.19.0-0 kubernetes
kubelet.x86_64 1.19.1-0 kubernetes
kubelet.x86_64 1.19.2-0 kubernetes
kubelet.x86_64 1.19.3-0 kubernetes
kubelet.x86_64 1.19.4-0 kubernetes
kubelet.x86_64 1.19.5-0 kubernetes
kubelet.x86_64 1.19.6-0 kubernetes
kubelet.x86_64 1.19.7-0 kubernetes
kubelet.x86_64 1.19.8-0 kubernetes
kubelet.x86_64 1.19.9-0 kubernetes
kubelet.x86_64 1.19.10-0 kubernetes
kubelet.x86_64 1.19.11-0 kubernetes
kubelet.x86_64 1.19.12-0 kubernetes
kubelet.x86_64 1.19.13-0 kubernetes
kubelet.x86_64 1.19.14-0 kubernetes
kubelet.x86_64 1.20.0-0 kubernetes
kubelet.x86_64 1.20.1-0 kubernetes
kubelet.x86_64 1.20.2-0 kubernetes
kubelet.x86_64 1.20.4-0 kubernetes
kubelet.x86_64 1.20.5-0 kubernetes
kubelet.x86_64 1.20.6-0 kubernetes
kubelet.x86_64 1.20.7-0 kubernetes
```
```sh
$ yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
```
```bash
$ systemctl enable kubelet
```
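The control-plane images are pulled during `kubeadm init`; pre-pulling them first (as the init log below also suggests) keeps the initialization itself short:
```bash
$ kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.20.10
```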
Initialize Kubernetes (on the k8s-master node)
```bash
$ kubeadm init \
--kubernetes-version=v1.20.10 \
--pod-network-cidr=10.244.0.0/16 \
--image-repository registry.aliyuncs.com/google_containers \
--apiserver-advertise-address 172.16.13.80 \
--v=6
```
```sh
I0904 10:39:55.512878 18003 initconfiguration.go:104] detected and using CRI socket: /var/run/dockershim.sock
[init] Using Kubernetes version: v1.20.10
[preflight] Running pre-flight checks
I0904 10:39:55.609411 18003 checks.go:577] validating Kubernetes and kubeadm version
I0904 10:39:55.609436 18003 checks.go:166] validating if the firewall is enabled and active
I0904 10:39:55.615977 18003 checks.go:201] validating availability of port 6443
I0904 10:39:55.616145 18003 checks.go:201] validating availability of port 10259
I0904 10:39:55.616175 18003 checks.go:201] validating availability of port 10257
I0904 10:39:55.616202 18003 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0904 10:39:55.616218 18003 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0904 10:39:55.616225 18003 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0904 10:39:55.616231 18003 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0904 10:39:55.616243 18003 checks.go:432] validating if the connectivity type is via proxy or direct
I0904 10:39:55.616278 18003 checks.go:471] validating http connectivity to first IP address in the CIDR
I0904 10:39:55.616300 18003 checks.go:471] validating http connectivity to first IP address in the CIDR
I0904 10:39:55.616311 18003 checks.go:102] validating the container runtime
I0904 10:39:55.710933 18003 checks.go:128] validating if the "docker" service is enabled and active
I0904 10:39:55.812851 18003 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0904 10:39:55.812907 18003 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0904 10:39:55.812930 18003 checks.go:649] validating whether swap is enabled or not
I0904 10:39:55.812975 18003 checks.go:376] validating the presence of executable conntrack
I0904 10:39:55.812999 18003 checks.go:376] validating the presence of executable ip
I0904 10:39:55.813017 18003 checks.go:376] validating the presence of executable iptables
I0904 10:39:55.813037 18003 checks.go:376] validating the presence of executable mount
I0904 10:39:55.813051 18003 checks.go:376] validating the presence of executable nsenter
I0904 10:39:55.813063 18003 checks.go:376] validating the presence of executable ebtables
I0904 10:39:55.813074 18003 checks.go:376] validating the presence of executable ethtool
I0904 10:39:55.813085 18003 checks.go:376] validating the presence of executable socat
I0904 10:39:55.813099 18003 checks.go:376] validating the presence of executable tc
I0904 10:39:55.813109 18003 checks.go:376] validating the presence of executable touch
I0904 10:39:55.813123 18003 checks.go:520] running all checks
I0904 10:39:55.915575 18003 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0904 10:39:55.915792 18003 checks.go:618] validating kubelet version
I0904 10:39:55.985451 18003 checks.go:128] validating if the "kubelet" service is enabled and active
I0904 10:39:55.994819 18003 checks.go:201] validating availability of port 10250
I0904 10:39:55.994889 18003 checks.go:201] validating availability of port 2379
I0904 10:39:55.994913 18003 checks.go:201] validating availability of port 2380
I0904 10:39:55.994936 18003 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0904 10:39:56.043119 18003 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.10
I0904 10:39:56.095120 18003 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.10
I0904 10:39:56.159069 18003 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.10
I0904 10:39:56.212530 18003 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-proxy:v1.20.10
I0904 10:39:56.265125 18003 checks.go:839] image exists: registry.aliyuncs.com/google_containers/pause:3.2
I0904 10:39:56.320004 18003 checks.go:839] image exists: registry.aliyuncs.com/google_containers/etcd:3.4.13-0
I0904 10:39:56.371299 18003 checks.go:839] image exists: registry.aliyuncs.com/google_containers/coredns:1.7.0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0904 10:39:56.371382 18003 certs.go:110] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0904 10:39:56.729903 18003 certs.go:474] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local shudoon101] and IPs [10.96.0.1 172.16.13.80]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0904 10:39:57.334553 18003 certs.go:110] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0904 10:39:57.486574 18003 certs.go:474] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0904 10:39:57.694560 18003 certs.go:110] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0904 10:39:57.821367 18003 certs.go:474] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost shudoon101] and IPs [172.16.13.80 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost shudoon101] and IPs [172.16.13.80 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0904 10:39:58.861298 18003 certs.go:76] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0904 10:39:59.035771 18003 kubeconfig.go:101] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0904 10:39:59.330053 18003 kubeconfig.go:101] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0904 10:39:59.481405 18003 kubeconfig.go:101] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0904 10:39:59.645125 18003 kubeconfig.go:101] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0904 10:40:00.334922 18003 kubelet.go:63] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0904 10:40:00.420357 18003 manifests.go:96] [control-plane] getting StaticPodSpecs
I0904 10:40:00.420779 18003 certs.go:474] validating certificate period for CA certificate
I0904 10:40:00.420895 18003 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0904 10:40:00.420908 18003 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0904 10:40:00.420916 18003 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0904 10:40:00.428795 18003 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0904 10:40:00.428822 18003 manifests.go:96] [control-plane] getting StaticPodSpecs
I0904 10:40:00.429308 18003 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0904 10:40:00.429323 18003 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0904 10:40:00.429331 18003 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0904 10:40:00.429337 18003 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0904 10:40:00.429341 18003 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0904 10:40:00.431212 18003 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0904 10:40:00.431233 18003 manifests.go:96] [control-plane] getting StaticPodSpecs
I0904 10:40:00.431917 18003 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0904 10:40:00.432442 18003 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0904 10:40:00.433542 18003 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0904 10:40:00.433568 18003 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
I0904 10:40:00.435037 18003 loader.go:379] Config loaded from file: /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0904 10:40:00.436681 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:00.938722 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 1 milliseconds
I0904 10:40:01.437273 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:01.937162 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:02.437215 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:02.937090 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:03.437168 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:03.937151 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:04.437369 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:04.937187 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:05.437078 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:05.937120 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:06.437218 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:06.937134 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:07.437199 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:07.937158 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:08.437692 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s in 0 milliseconds
I0904 10:40:12.453695 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s 500 Internal Server Error in 3516 milliseconds
I0904 10:40:12.938805 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I0904 10:40:13.438240 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I0904 10:40:13.938725 18003 round_trippers.go:445] GET https://172.16.13.80:6443/healthz?timeout=10s 200 OK in 1 milliseconds
[apiclient] All control plane components are healthy after 13.502539 seconds
I0904 10:40:13.938847 18003 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0904 10:40:13.943583 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 2 milliseconds
I0904 10:40:13.946914 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 2 milliseconds
I0904 10:40:13.949757 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 2 milliseconds
I0904 10:40:13.950284 18003 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
I0904 10:40:13.952552 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 1 milliseconds
I0904 10:40:13.954630 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 1 milliseconds
I0904 10:40:13.956733 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 1 milliseconds
I0904 10:40:13.956848 18003 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node
I0904 10:40:13.956861 18003 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "shudoon101" as an annotation
I0904 10:40:14.460485 18003 round_trippers.go:445] GET https://172.16.13.80:6443/api/v1/nodes/shudoon101?timeout=10s 200 OK in 3 milliseconds
I0904 10:40:14.467558 18003 round_trippers.go:445] PATCH https://172.16.13.80:6443/api/v1/nodes/shudoon101?timeout=10s 200 OK in 4 milliseconds
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node shudoon101 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node shudoon101 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
I0904 10:40:14.969784 18003 round_trippers.go:445] GET https://172.16.13.80:6443/api/v1/nodes/shudoon101?timeout=10s 200 OK in 1 milliseconds
I0904 10:40:14.976503 18003 round_trippers.go:445] PATCH https://172.16.13.80:6443/api/v1/nodes/shudoon101?timeout=10s 200 OK in 4 milliseconds
[bootstrap-token] Using token: vqlfov.pkv1r7fsucnvijix
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0904 10:40:14.979889 18003 round_trippers.go:445] GET https://172.16.13.80:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-vqlfov?timeout=10s 404 Not Found in 2 milliseconds
I0904 10:40:14.983266 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/secrets?timeout=10s 201 Created in 2 milliseconds
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0904 10:40:14.986413 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 2 milliseconds
I0904 10:40:14.989537 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 milliseconds
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0904 10:40:14.991819 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 milliseconds
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0904 10:40:14.993446 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 milliseconds
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0904 10:40:14.995203 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 milliseconds
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0904 10:40:14.995311 18003 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig
I0904 10:40:14.995908 18003 loader.go:379] Config loaded from file: /etc/kubernetes/admin.conf
I0904 10:40:14.995924 18003 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0904 10:40:14.996407 18003 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0904 10:40:14.999040 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-public/configmaps?timeout=10s 201 Created in 2 milliseconds
I0904 10:40:14.999190 18003 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0904 10:40:15.001641 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s 201 Created in 2 milliseconds
I0904 10:40:15.003854 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s 201 Created in 1 milliseconds
I0904 10:40:15.004058 18003 kubeletfinalize.go:88] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0904 10:40:15.004638 18003 loader.go:379] Config loaded from file: /etc/kubernetes/kubelet.conf
I0904 10:40:15.005181 18003 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
I0904 10:40:15.086465 18003 round_trippers.go:445] GET https://172.16.13.80:6443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns 200 OK in 5 milliseconds
I0904 10:40:15.092852 18003 round_trippers.go:445] GET https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps/kube-dns?timeout=10s 404 Not Found in 1 milliseconds
I0904 10:40:15.094538 18003 round_trippers.go:445] GET https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps/coredns?timeout=10s 404 Not Found in 1 milliseconds
I0904 10:40:15.097004 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 2 milliseconds
I0904 10:40:15.099782 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 2 milliseconds
I0904 10:40:15.104903 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 4 milliseconds
I0904 10:40:15.109540 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 3 milliseconds
I0904 10:40:15.132165 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/apps/v1/namespaces/kube-system/deployments?timeout=10s 201 Created in 11 milliseconds
I0904 10:40:15.143679 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/services?timeout=10s 201 Created in 9 milliseconds
[addons] Applied essential addon: CoreDNS
I0904 10:40:15.170722 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 2 milliseconds
I0904 10:40:15.368235 18003 request.go:591] Throttling request took 195.771994ms, request: POST:https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
I0904 10:40:15.371885 18003 round_trippers.go:445] POST https://172.16.13.80:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 3 milliseconds
I0904 10:40:15.389809 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/apps/v1/namespaces/kube-system/daemonsets?timeout=10s 201 Created in 10 milliseconds
I0904 10:40:15.392633 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 milliseconds
I0904 10:40:15.395548 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 2 milliseconds
I0904 10:40:15.398242 18003 round_trippers.go:445] POST https://172.16.13.80:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 2 milliseconds
[addons] Applied essential addon: kube-proxy
I0904 10:40:15.399040 18003 loader.go:379] Config loaded from file: /etc/kubernetes/admin.conf
I0904 10:40:15.399615 18003 loader.go:379] Config loaded from file: /etc/kubernetes/admin.conf
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.13.80:6443 --token vqlfov.pkv1r7fsucnvijix \
--discovery-token-ca-cert-hash sha256:93f10f90ee14d64eaa3f5d6f7086673a7264ac9d00674853d39bf34fce4a5622
```
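Before kubectl can talk to the new cluster, copy the admin kubeconfig exactly as the init output above instructs:
```bash
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```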
Check the nodes
```bash
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
shudoon101 NotReady control-plane,master 12m v1.20.0
```
### Install Flannel (all nodes)
#### Flannel communication diagram

Download kube-flannel.yml
```bash
$ wget https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
$ cat kube-flannel.yml
```
```yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
```
Replace the image registry with a domestic mirror
```bash
$ sed -i 's#quay.io/coreos/flannel#quay.mirrors.ustc.edu.cn/coreos/flannel#' kube-flannel.yml
```
```bash
$ kubectl apply -f kube-flannel.yml
```
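After applying the manifest, check that the flannel DaemonSet pods reach the Running state on every node:
```bash
$ kubectl get pods -n kube-system -l app=flannel -o wide
```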
#### Join the worker nodes
##### Run the join command printed by kubeadm init on each worker node
```bash
$ kubeadm join 172.16.13.80:6443 --token vqlfov.pkv1r7fsucnvijix \
--discovery-token-ca-cert-hash sha256:93f10f90ee14d64eaa3f5d6f7086673a7264ac9d00674853d39bf34fce4a5622
```
##### View the cluster nodes
```sh
$ kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
shudoon101 Ready control-plane,master 66m v1.20.0 172.16.13.80 <none> CentOS Linux 7 (Core) 3.10.0-1160.41.1.el7.x86_64 docker://19.3.0
shudoon102 Ready <none> 7m51s v1.20.0 172.16.13.81 <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://19.3.0
shudoon103 NotReady <none> 4m33s v1.20.0 172.16.13.82 <none> CentOS Linux 7 (Core) 3.10.0-1160.41.1.el7.x86_64 docker://19.3.0
```
#### FAQ
##### 1. What is the difference between Flannel's three backend modes?
- udp mode: the flanneld process encapsulates packets in UDP in user space, which forces user-space/kernel-space context switches; worst performance, now abandoned.
- vxlan mode: the VTEP device adds a VXLAN header and tunnels the traffic inside UDP packets to build an overlay network; good performance, suitable for roughly 100 nodes.
- host-gw mode: each node adds direct routes for the other nodes' container subnets; best performance, suitable for roughly 130 nodes.
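To see which backend a running cluster is actually using, inspect the net-conf.json key of the ConfigMap created by kube-flannel.yml (the dot in the key name has to be escaped in the JSONPath expression):
```sh
$ kubectl get cm kube-flannel-cfg -n kube-system -o jsonpath='{.data.net-conf\.json}'
```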
##### 2. How to switch the Flannel VXLAN backend to Direct Routing mode?
`With Direct Routing enabled, nodes on the same subnet exchange pod traffic via host-gw style direct routes, while nodes on different subnets fall back to vxlan.` Add `"DirectRouting": true` to the `Backend` section of `net-conf.json` (the default pod network is 10.244.0.0/16):
```sh
$ vim kube-flannel.yml
```
```yaml
......
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "DirectRouting": true
      }
    }
......
```
```sh
$ kubectl apply -f kube-flannel.yml
```
```sh
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel configured
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
```
```sh
$ route -n
```
```sh
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.16.13.254 0.0.0.0 UG 100 0 0 ens192
10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.1.0 172.16.13.81 255.255.255.0 UG 0 0 0 ens192 # same subnet: host-gw direct route
10.244.2.0 172.16.13.82 255.255.255.0 UG 0 0 0 ens192 # same subnet: host-gw direct route
10.244.4.0 10.244.4.0 255.255.255.0 UG 0 0 0 flannel.1 # different subnet: vxlan
172.16.13.0 0.0.0.0 255.255.255.0 U 100 0 0 ens192
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
```
##### 3. How to remove the Flannel network configuration?
Delete Flannel on the master node
```sh
$ kubectl delete -f kube-flannel.yml
```
Clean up the files Flannel leaves behind on each node
```sh
$ ifconfig flannel.1 down
$ ip link delete flannel.1
$ rm -rf /var/lib/cni/
$ rm -f /etc/cni/net.d/*
$ systemctl restart kubelet
```
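If a cni0 bridge was created on the node it can keep a stale address as well; removing it too is a common extra cleanup step (a new bridge is created when the next CNI plugin is installed):
```sh
$ ip link set cni0 down
$ ip link delete cni0
```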
##### 4. How does a node leave the Kubernetes cluster?
On the k8s master node
```sh
$ kubectl delete node node-01
```
On the node that is leaving the cluster
```sh
$ systemctl stop kubelet
$ rm -rf /etc/kubernetes/*
$ kubeadm reset
$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```
##### 5. The join token has expired; how do I create a new one?
`Option 1`:
```sh
$ kubeadm token create --ttl 0 --print-join-command
```
`Option 2`:
```sh
$ kubeadm token create # generate a new token
```
```sh
$ kubeadm token list | awk -F" " '{print $1}' |tail -n 1 # list the tokens and keep the newest one
```
```sh
hucls9.zea52rjxsmt0ze0b
```
```sh
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^ .* //' # compute the CA certificate hash
```
```sh
(stdin)= 93f10f90ee14d64eaa3f5d6f7086673a7264ac9d00674853d39bf34fce4a5622
```
```sh
$ kubeadm join 172.16.13.80:6443 --token hucls9.zea52rjxsmt0ze0b --discovery-token-ca-cert-hash sha256:93f10f90ee14d64eaa3f5d6f7086673a7264ac9d00674853d39bf34fce4a5622 # run on the worker node to join the cluster
```