
兜兜    2021-09-15 15:56:38    2022-01-25 09:20:20   

Keepalived LVS ingress
```sh
The firewall maps ports 80/443 on the public address to 80/443 on the LVS VIP, and LVS load-balances ports 80/443 across the K8S node IPs.
The ingress-nginx-controller service is exposed via HostNetwork on 80/443.
Resulting path: 119.x.x.x:80/443 --> 172.16.100.99:80/443 (LVS VIP) --> 172.16.100.100:80/443, 172.16.100.101:80/443, 172.16.100.102:80/443
```
Configuration plan
```sh
+----------------+----------------+--------+--------------------------+
| Host           | IP             | Port   | SoftWare                 |
+----------------+----------------+--------+--------------------------+
| LVS01          | 172.16.100.27  | 80/443 | LVS,Keepalived           |
| LVS02          | 172.16.100.28  | 80/443 | LVS,Keepalived           |
| RS/k8s-master1 | 172.16.100.100 | 80/443 | ingress-nginx-controller |
| RS/k8s-master2 | 172.16.100.101 | 80/443 | ingress-nginx-controller |
| RS/k8s-node1   | 172.16.100.102 | 80/443 | ingress-nginx-controller |
| VIP            | 172.16.100.99  | 80/443 | /                        |
+----------------+----------------+--------+--------------------------+
```
Install LVS and keepalived (`172.16.100.27/172.16.100.28`)
```sh
$ yum install ipvsadm keepalived -y
$ systemctl enable keepalived
```
Configure keepalived (`172.16.100.27`)
```sh
$ cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
   router_id LVS_27             # router id
}

vrrp_instance VI_1 {
    state MASTER                # master node
    interface ens192            # NIC name
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.100.99
    }
}

virtual_server 172.16.100.99 443 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 172.16.100.100 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.101 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.102 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 172.16.100.99 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 172.16.100.100 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.101 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.102 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
EOF
```
Configure keepalived (`172.16.100.28`) -- only the parts that differ from `172.16.100.27`:
```sh
$ cat /etc/keepalived/keepalived.conf
...
global_defs {
   router_id LVS_28             # router id, must differ between the two nodes
}

vrrp_instance VI_1 {
    state BACKUP                # backup node
    interface ens192            # NIC name
    virtual_router_id 51
    priority 99                 # priority, lower than the master
...
```
Configure the RS nodes (`172.16.100.100/172.16.100.101/172.16.100.102`)
```sh
$ cat >/etc/init.d/lvs_rs.sh <<'EOF'
#!/bin/bash
vip=172.16.100.99
mask='255.255.255.255'
dev=lo:1

case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $dev $vip netmask $mask #broadcast $vip up
    echo "The RS Server is Ready!"
    ;;
stop)
    ifconfig $dev down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "The RS Server is Canceled!"
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
EOF
```
Run the script
```sh
$ chmod +x /etc/init.d/lvs_rs.sh
$ /etc/init.d/lvs_rs.sh start
```
```sh
$ ip a    # check that the VIP is bound to lo:1
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.16.100.99/32 scope global lo:1
       valid_lft forever preferred_lft forever
```
Start keepalived (`172.16.100.27/172.16.100.28`)
```sh
$ systemctl start keepalived
```
Check whether the VIP is bound
```sh
$ ip a
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:93:ed:a4 brd ff:ff:ff:ff:ff:ff
    inet 172.16.100.27/24 brd 172.16.100.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 172.16.100.99/32 scope global ens192
       valid_lft forever preferred_lft forever
```
Check the LVS state
```sh
$ ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.100.99:80 rr persistent 50
  -> 172.16.100.100:80            Route   1      0          0
  -> 172.16.100.101:80            Route   1      0          0
  -> 172.16.100.102:80            Route   1      0          0
TCP  172.16.100.99:443 rr persistent 50
  -> 172.16.100.100:443           Route   1      0          0
  -> 172.16.100.101:443           Route   1      0          0
  -> 172.16.100.102:443           Route   1      0          0
```
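A quick end-to-end check, run from a host on the internal network (a minimal sketch; `demo.example.com` is a hypothetical Ingress host, not from the original setup): send a request to the VIP and confirm it shows up in the ipvsadm counters. Note that with `persistence_timeout 50`, repeated requests from the same source IP stick to the same real server.
```sh
# Hit the VIP directly; the Host header must match an existing Ingress rule
# (demo.example.com is a placeholder hostname).
$ curl -s -o /dev/null -w "%{http_code}\n" -H "Host: demo.example.com" http://172.16.100.99/

# On the active LVS node, the per-real-server counters should increase.
$ ipvsadm -L -n --stats
```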

兜兜    2021-09-15 15:55:33    2022-01-25 09:21:07   

kubernetes
Ways to deploy a production Kubernetes cluster
```sh
kubeadm
    Kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.

Binary packages
    Download the release binaries from GitHub and deploy each component by hand to form a Kubernetes cluster.
```
This walkthrough uses kubeadm to build the cluster.

Architecture diagram
```sh
                                                     hostport/nodeport
                            vip:172.16.100.99
                          +---------------------+
                          |     keepalived      |        +----------------------------+
                          |  +---------------+  |   +--->| K8S-Master1 172.16.100.100 |
                          |  | lvs27         |  |   |    +----------------------------+
             119.x.x.x    |  | 172.16.100.27 |  |   |
+--------+   +--------+   |  +---------------+  |   |    +----------------------------+
| Client +-->|firewall+-->|                     +---+--->| K8S-Master2 172.16.100.101 |
+--------+   +--------+   |  +---------------+  |   |    +----------------------------+
                          |  | lvs28         |  |   |
                          |  | 172.16.100.28 |  |   |    +----------------------------+
                          |  +---------------+  |   +--->| K8S-Node1   172.16.100.102 |
                          +---------------------+        +----------------------------+
```
#### Deploy a highly available load balancer
```sh
As a container cluster system, Kubernetes already provides application-level high availability: health checks plus
restart policies give Pods self-healing, the scheduler spreads Pods across nodes and keeps the desired replica count,
and Pods are re-created on other nodes when a node fails.

For the cluster itself, high availability has two more layers to consider: the etcd database and the Kubernetes
Master components. A cluster built with kubeadm runs only a single etcd instance by default, which is a single point
of failure, so a standalone etcd cluster is built here instead.

The Master node acts as the control center, keeping the whole cluster healthy by constantly talking to the kubelet
and kube-proxy on the worker nodes. If the Master fails, the cluster can no longer be managed with kubectl or the API.

The Master runs three main services: kube-apiserver, kube-controller-manager and kube-scheduler. kube-controller-manager
and kube-scheduler already achieve high availability through leader election, so Master HA mainly concerns
kube-apiserver. Since kube-apiserver exposes an HTTP API, making it highly available works much like any web server:
put a load balancer in front of it, and it can also be scaled horizontally.
```
[See the installation doc](https://ynotes.cn/blog/article_detail/280)

#### Deploy the Etcd cluster
Prepare the cfssl certificate tool. cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to use than openssl. Any server will do; here the Master node is used.
```sh
$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
$ mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
$ mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
```
Generate the etcd certificates
```sh
$ mkdir -p ~/etcd_tls
$ cd ~/etcd_tls
```
```sh
$ cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```
```sh
$ cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangdong",
      "ST": "Guangzhou"
    }
  ]
}
EOF
```
Generate the CA certificate
```sh
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```
Issue the etcd HTTPS certificate with the self-signed CA. Create the certificate signing request:
```sh
$ cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "172.16.100.100",
    "172.16.100.101",
    "172.16.100.102",
    "172.16.100.103",
    "172.16.100.104"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangdong",
      "ST": "Guangzhou"
    }
  ]
}
EOF
```
Generate the certificate
```sh
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
```
Download the binaries
```sh
$ wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
```
```sh
$ mkdir /opt/etcd/{bin,cfg,ssl} -p
$ tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
$ mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
```
Create the etcd configuration file
```sh
$ cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.100.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.100.100:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.100.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.100.100:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.16.100.100:2380,etcd-2=https://172.16.100.101:2380,etcd-3=https://172.16.100.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```
Manage etcd with systemd
```sh
$ cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem --logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```
Copy the certificates generated above
```sh
$ cp ~/etcd_tls/ca*pem ~/etcd_tls/server*pem /opt/etcd/ssl/
```
Copy all the files generated on node 1 to node 2 and node 3
```sh
$ scp -r /opt/etcd/ root@172.16.100.101:/opt/
$ scp -r /opt/etcd/ root@172.16.100.102:/opt/
$ scp -r /usr/lib/systemd/system/etcd.service root@172.16.100.102:/usr/lib/systemd/system/
$ scp -r /usr/lib/systemd/system/etcd.service root@172.16.100.101:/usr/lib/systemd/system/
```
On node 2 and node 3, edit /opt/etcd/cfg/etcd.conf and change ETCD_NAME (etcd-2 / etcd-3) and the listen/advertise URLs to each node's own IP, then start etcd on all three nodes at the same time
```sh
$ systemctl daemon-reload
$ systemctl start etcd
$ systemctl enable etcd
```
```sh
$ ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.100.100:2379,https://172.16.100.101:2379,https://172.16.100.102:2379" endpoint health --write-out=table
```
```sh
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://172.16.100.100:2379 | true   | 12.812954ms |       |
| https://172.16.100.102:2379 | true   | 13.596982ms |       |
| https://172.16.100.101:2379 | true   | 14.607151ms |       |
+-----------------------------+--------+-------------+-------+
```
#### Install Docker
[See the Docker installation section](https://ynotes.cn/blog/article_detail/271)

#### Install kubeadm, kubelet and kubectl
```sh
$ cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Install kubeadm, kubelet and kubectl
```sh
$ yum install -y kubelet-1.18.20 kubeadm-1.18.20 kubectl-1.18.20
$ systemctl enable kubelet
```
Cluster configuration file
```sh
$ cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.100.100
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - k8s-master1
  - k8s-master2
  - k8s-master3
  - 172.16.100.100
  - 172.16.100.101
  - 172.16.100.102
  - 172.16.100.111
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.16.100.111:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - https://172.16.100.100:2379
    - https://172.16.100.101:2379
    - https://172.16.100.102:2379
    caFile: /opt/etcd/ssl/ca.pem
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
EOF
```
Initialize the cluster (`k8s-master1`)
```sh
$ kubeadm init --config kubeadm-config.yaml
```
```sh
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde
```
Configure kubectl access to the cluster
```sh
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Join the cluster (`k8s-master2`) -- first copy the certificates from k8s-master1
```sh
$ scp -r 172.16.100.100:/etc/kubernetes/pki/ /etc/kubernetes/
```
```sh
$ kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde \
    --control-plane
```
Join the cluster (`k8s-node1`)
```sh
$ kubeadm join 172.16.100.111:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:466694e2952961d35ca66960df917b65a5bd5da6219780274603deeb52b2cdde
```
Allow pods to be scheduled on the master nodes
```sh
$ kubectl taint node k8s-master1 node-role.kubernetes.io/master-
$ kubectl taint node k8s-master2 node-role.kubernetes.io/master-
```
#### Deploy the network
`Option 1: calico (BGP/IPIP)`: the most mainstream network plugin. BGP mode performs best, but pods cannot communicate with non-BGP nodes; IPIP has no network topology restrictions and performs slightly worse than BGP.
[Reference: configuring calico networking for k8s](https://ynotes.cn/blog/article_detail/274)

`Option 2: flannel (vxlan/host-gw)`: host-gw performs best, but pods cannot communicate with nodes outside the cluster network; vxlan can cross networks and performs slightly worse than host-gw.
[Reference: K8S - deploying a k8s cluster with kubeadm (part 1), flannel section](https://ynotes.cn/blog/article_detail/271)

`Option 3: cilium (eBPF) + kube-router (BGP routing)`: the best performance of the three, but pods cannot communicate with non-BGP nodes, and it is still in beta, so it is not recommended for production.
[Reference: configuring cilium networking for k8s](https://ynotes.cn/blog/article_detail/286)

#### Reset the K8S cluster
##### Reset a node with kubeadm
```sh
$ kubeadm reset
# kubeadm reset cleans up the node-local files created by kubeadm init or kubeadm join. On a
# control-plane node it also removes the node's local stacked etcd member from the etcd cluster
# and removes the node's information from the kubeadm ClusterStatus object. ClusterStatus is a
# kubeadm-managed Kubernetes API object that holds the list of kube-apiserver endpoints.
# kubeadm reset phase can be used to run the individual phases of this workflow. To skip a list
# of phases, use the --skip-phases flag, which works the same way as the kubeadm join and
# kubeadm init phase runners.
```
##### Clean up the network configuration (flannel)
```sh
$ systemctl stop kubelet
$ systemctl stop docker
$ rm /etc/cni/net.d/* -rf
$ ifconfig cni0 down
$ ifconfig flannel.1 down
$ ifconfig docker0 down
$ ip link delete cni0
$ ip link delete flannel.1
$ systemctl start docker
```
##### Clean up iptables
```sh
$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```
##### Clean up ipvs
```sh
$ ipvsadm --clear
```
##### Clean up external etcd
_`If an external etcd is used, kubeadm reset will not delete any data from it. This means that running kubeadm init again with the same etcd endpoints will bring back the state of the previous cluster.`_

To wipe the data in etcd, use a client such as etcdctl, for example:
```sh
$ etcdctl del "" --prefix
```
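Against the TLS-enabled external etcd built above, the bare `etcdctl del` also needs the same endpoint and certificate flags as the earlier health check; a sketch reusing the paths and endpoints from this article:
```sh
# Delete every key (the entire previous cluster state) from the external etcd cluster.
$ ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
    --cacert=/opt/etcd/ssl/ca.pem \
    --cert=/opt/etcd/ssl/server.pem \
    --key=/opt/etcd/ssl/server-key.pem \
    --endpoints="https://172.16.100.100:2379,https://172.16.100.101:2379,https://172.16.100.102:2379" \
    del "" --prefix
```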

兜兜    2021-09-15 15:38:16    2022-01-25 09:20:14   

nginx Keepalived
```sh
Highly available K8S ApiServer: nginx's stream module provides layer-4 load balancing, and keepalived provides high availability for nginx itself.
Resulting path: 172.16.100.111:16443 --> 172.16.100.100:6443 / 172.16.100.101:6443
```
Configuration plan
```sh
+-------------+----------------+-------+----------------------------+
| Host        | IP             | Port  | SoftWare                   |
+-------------+----------------+-------+----------------------------+
| k8s-master1 | 172.16.100.100 | 6443  | Nginx,Keepalived,ApiServer |
| k8s-master2 | 172.16.100.101 | 6443  | Nginx,Keepalived,ApiServer |
| VIP         | 172.16.100.111 | 16443 | /                          |
+-------------+----------------+-------+----------------------------+
```
Install nginx and keepalived on both the master and backup nodes
```sh
$ yum install nginx nginx-mod-stream keepalived -y    # nginx-mod-stream provides the layer-4 stream module
```
Configure nginx on both nodes
```sh
$ cat /etc/nginx/nginx.conf
...
events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two Master apiserver components
stream {

    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 172.16.100.100:6443;   # Master1 APISERVER IP:PORT
        server 172.16.100.101:6443;   # Master2 APISERVER IP:PORT
    }

    server {
        listen 16443;  # nginx runs on the master nodes themselves, so it must not listen on 6443 or it would conflict with the apiserver
        proxy_pass k8s-apiserver;
    }
}
...
```
Configure keepalived on the master node
```sh
$ cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
   router_id keepalived_100
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192            # change to the actual NIC name
    virtual_router_id 51        # VRRP routing ID; unique per instance
    priority 100                # priority; the backup server uses 90
    advert_int 1                # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        172.16.100.111/24
    }
    track_script {
        check_nginx
    }
}
EOF
```
nginx health-check script
```sh
$ cat >/etc/keepalived/check_nginx.sh <<'EOF'
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
$ chmod +x /etc/keepalived/check_nginx.sh
```
Configure keepalived on the backup nginx node
```sh
$ cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
   router_id keepalived_101
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192            # change to the actual NIC name
    virtual_router_id 51        # VRRP routing ID; unique per instance
    priority 90                 # priority; the backup server uses 90
    advert_int 1                # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        172.16.100.111/24
    }
    track_script {
        check_nginx
    }
}
EOF
```
Start nginx and keepalived on both nodes
```sh
$ systemctl restart nginx
$ systemctl enable nginx
$ systemctl restart keepalived
$ systemctl enable keepalived
```
Check whether the VIP is bound
```sh
$ ip a
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:93:6a:3a brd ff:ff:ff:ff:ff:ff
    inet 172.16.100.101/24 brd 172.16.100.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 172.16.100.111/32 scope global ens192
```
Test
```sh
$ curl http://172.16.100.111
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
```
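The 404 above only shows that the VIP and nginx's default HTTP server on port 80 are answering; to exercise the actual stream proxy path to the apiservers, hit the 16443 listener. A minimal sketch (`-k` skips certificate verification, and it assumes the unauthenticated `/version` endpoint is reachable, which is the Kubernetes default):
```sh
# Should return the Kubernetes version JSON via nginx stream -> kube-apiserver
$ curl -k https://172.16.100.111:16443/version

# Failover check: stop nginx on the node that currently holds the VIP and watch it move
$ systemctl stop nginx        # on the node that currently holds 172.16.100.111
$ ip a show ens192            # on the other node, the VIP should now appear
```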

