兜兜    2018-07-22 17:00:39    2019-11-14 14:33:22   

High Availability DRBD
### Introduction

DRBD (Distributed Replicated Block Device) is a software-based, shared-nothing storage replication solution that mirrors the content of block devices between servers.

#### How DRBD works

```
      +------------+
      | Filesystem |
      +------------+
            |
            V
     +--------------+
     |  Block layer |
     | (/dev/drbd1) |
     +--------------+
        |        |
        |        |
        V        V
+-------------+   +-------------+
|  Local disk |   | Remote disk |
| (/dev/hdb1) |   | (/dev/hdb1) |
+-------------+   +-------------+
     host1             host2
```

#### Single-primary and dual-primary modes

Single-primary mode: `within the cluster, a resource holds the primary role on only one node at any given time; the other node is secondary. The filesystem can be ext3, ext4, xfs, and so on.`

Dual-primary mode: `a resource holds the primary role on both cluster nodes at the same time, so concurrent access is possible. This requires a shared cluster filesystem such as GFS or OCFS2.`

#### Replication modes

There are three protocols:

`Protocol A: asynchronous replication. A write is reported complete as soon as the local write finishes; the data sits in the send buffer and may be lost.`

`Protocol B: memory-synchronous (semi-synchronous) replication. A write is reported complete once the local write finishes and the data has reached the peer; if both nodes lose power, the data may be lost.`

`Protocol C: synchronous replication. A write is reported complete only after both the local and the remote write are confirmed. Data is lost only if both nodes lose power or both disks fail at the same time.`

**Protocol C is the usual choice in practice. Because it waits for both sides to confirm each write, it adds some latency.**

### Environment

All nodes:

- OS: `CentOS 7.6`
- Replicated disk: `/dev/vdb1`

Primary node:

- IP/hostname: `172.16.0.3 (node1)`

Secondary node:

- IP/hostname: `172.16.0.4 (node2)`

### Installing DRBD

#### `Run on both node1 and node2`

Import the GPG key and install the ELRepo repository

```bash
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
```

Install the DRBD packages

```bash
yum install drbd90-utils kmod-drbd90 -y
```

Load the drbd kernel module

```bash
modprobe drbd
echo drbd > /etc/modules-load.d/drbd.conf   # load the drbd module at boot
```

### Configuring DRBD

#### `Run on both node1 and node2`

Edit global_common.conf

```bash
vim /etc/drbd.d/global_common.conf
```

```bash
global {
    usage-count no;       # whether to report to DRBD's usage statistics (the vendor's install count); default is yes, set to no
}
common {
    protocol C;           # DRBD replication protocol
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    }
    startup {
    }
    options {
    }
    disk {
        on-io-error detach;   # on I/O error, detach the backing device; add this line
    }
    net {
        cram-hmac-alg "sha1"; # peer authentication algorithm
        shared-secret "test"; # peer authentication secret
    }
    syncer {
        rate 1024M;           # network rate for primary/secondary resynchronization; add this option
    }
}
```

Create the resource file

```bash
vim /etc/drbd.d/test.res
```

```bash
resource test {
    protocol C;
    meta-disk internal;
    device /dev/drbd1;
    syncer {
        verify-alg sha1;
    }
    on node1 {
        disk /dev/vdb;
        address 172.16.0.3:7789;
    }
    on node2 {
        disk /dev/vdb;
        address 172.16.0.4:7789;
    }
}
```

Initialize the metadata

```bash
drbdadm create-md test
```

Start and enable DRBD

```bash
systemctl start drbd
systemctl enable drbd
```

#### `Run on node1`

```bash
drbdadm up test
drbdadm primary test   # if this reports an error, run: drbdadm primary test --force
```

#### `Run on node2`

```bash
drbdadm up test
```

Check the DRBD state

```bash
cat /proc/drbd
```

```
version: 8.4.11-1 (api:1/proto:86-101)
GIT-hash: 66145a308421e9c124ec391a7848ac20203bb03c build by mockbuild@, 2018-11-03 01:26:55
 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:10557016 nr:8 dw:299576 dr:10266018 al:78 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
```
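With the mirror `Connected` and `UpToDate/UpToDate`, an online verification can confirm that both replicas really match block for block; the resource file above already sets `verify-alg sha1`, which is what `drbdadm verify` uses. A minimal sketch (run on either node; `test` is the resource defined above):

```bash
drbdadm verify test        # compare the two replicas online; differing blocks are marked out-of-sync
drbdadm status test        # drbd9-utils status view, shows verification progress
cat /proc/drbd             # kernel view; a non-zero oos counter means out-of-sync blocks were found
drbdadm disconnect test && drbdadm connect test   # resynchronize any blocks the verify flagged
```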
### Testing DRBD

Format the device

```bash
mkfs.ext4 /dev/drbd1
```

Mount it

```bash
mount /dev/drbd1 /mnt
```

Create some test data

```bash
touch /mnt/f{1..5}
ls -l /mnt/
```

```
-rw-r--r-- 1 root root 0 Jul 22 08:59 f1
-rw-r--r-- 1 root root 0 Jul 22 08:59 f2
-rw-r--r-- 1 root root 0 Jul 22 08:59 f3
-rw-r--r-- 1 root root 0 Jul 22 08:59 f4
-rw-r--r-- 1 root root 0 Jul 22 08:59 f5
```

#### Swapping primary and secondary

`Run on node1`

```bash
umount /mnt
```

```bash
drbdadm secondary test
```

`Run on node2`

```bash
drbdadm primary test
```

Mount

```bash
mount /dev/drbd1 /mnt
```

Check the data

```bash
ls -l /mnt
```

```
-rw-r--r-- 1 root root 0 Jul 22 08:59 f1
-rw-r--r-- 1 root root 0 Jul 22 08:59 f2
-rw-r--r-- 1 root root 0 Jul 22 08:59 f3
-rw-r--r-- 1 root root 0 Jul 22 08:59 f4
-rw-r--r-- 1 root root 0 Jul 22 08:59 f5
```

### Management commands

Check a resource's connection state

```bash
drbdadm cstate resource_name   # resource_name is the name of the resource
```

```
A resource is in one of the following connection states:
StandAlone: no network configuration available; the resource has not yet been connected, was administratively disconnected (drbdadm disconnect), or dropped its connection after failed authentication or split brain
Disconnecting: temporary state while disconnecting; the next state is StandAlone
Unconnected: temporary state before a connection attempt; the next state is typically WFConnection or WFReportParams
Timeout: the connection to the peer timed out; temporary, the next state is Unconnected
BrokenPipe: the connection to the peer was lost; temporary, the next state is Unconnected
NetworkFailure: temporary state after the peer connection was torn down; the next state is Unconnected
ProtocolError: temporary state after the peer connection was torn down; the next state is Unconnected
TearDown: temporary state; the peer is closing the connection, the next state is Unconnected
WFConnection: waiting for a network connection to the peer
WFReportParams: TCP connection established; this node waits for the first network packet from the peer
Connected: the DRBD connection is established and data mirroring is active; the normal state
StartingSyncS: a full synchronization started by the administrator; possible next states are SyncSource or PausedSyncS
StartingSyncT: a full synchronization started by the administrator; the next state is WFSyncUUID
WFBitMapS: a partial synchronization is starting; possible next states are SyncSource or PausedSyncS
WFBitMapT: a partial synchronization is starting; the next state is WFSyncUUID
WFSyncUUID: synchronization is about to begin; possible next states are SyncTarget or PausedSyncT
SyncSource: synchronization is running, with this node as the source
SyncTarget: synchronization is running, with this node as the target
PausedSyncS: this node is the source of a running synchronization that is currently paused, either because another sync is in progress or because it was paused with drbdadm pause-sync
PausedSyncT: this node is the target of a running synchronization that is currently paused, for the same reasons as above
VerifyS: online device verification is running, with this node as the source
VerifyT: online device verification is running, with this node as the target
```

Check a resource's role

```bash
drbdadm role resource_name
```

```
Primary: the resource is currently primary and may be read from or written to; unless running dual-primary, this appears on only one of the two nodes
Secondary: the resource is currently secondary and receives updates from the peer as usual
Unknown: the resource's role is currently unknown; never reported for the local resource
```

Check the disk state

```bash
drbdadm dstate resource_name
```

```
The local and peer disks can each be in one of the following states:
Diskless: no local block device is attached to DRBD, either because none was ever attached, because it was manually detached with drbdadm, or because it was automatically detached after a lower-level I/O error
Attaching: transient state while metadata is being read
Failed: transient state after the local block device reported an I/O error; the next state is Diskless
Negotiating: transient state while an attach is performed on an already-connected DRBD device
Inconsistent: the data is inconsistent. This is the state on both nodes immediately after a new resource is created (before the initial full sync), and on the sync-target node while synchronization runs
Outdated: the data is consistent but stale
DUnknown: shown for the peer disk when no network connection is available
Consistent: consistent data on a disconnected node; once the connection is established, it is decided whether the data is UpToDate or Outdated
UpToDate: consistent, current data; the normal state
```

Bring a resource up or down

```bash
drbdadm up resource_name     # start the resource
drbdadm down resource_name   # stop the resource
```

Promote and demote a resource

```bash
drbdadm primary resource_name    # promote the resource to primary
drbdadm secondary resource_name  # demote the resource to secondary
drbdadm -- --overwrite-data-of-peer primary resource_name   # force promotion and resynchronize from this node
```

`Note: in single-primary mode, with both nodes connected, either node can become primary at a given time, but only one of the two may be primary at once; if one node is already primary, it must be demoted before the other can be promoted. Dual-primary mode has no such restriction.`

**References:**

`https://github.com/chenzhiwei/linux/tree/master/drbd`
`https://www.learnitguide.net/2016/07/how-to-install-and-configure-drbd-on-linux.html`
`http://yallalabs.com/linux/how-to-install-and-configure-drbd-cluster-on-rhel7-centos7/`
`https://wiki.centos.org/zh/HowTos/Ha-Drbd`

兜兜    2018-07-10 11:41:52    2019-11-14 14:34:32   

Keepalived High Availability LVS Load Balancing
### LVS + NAT

**`With network address translation, the director rewrites the destination address of request packets and dispatches them to a back-end real server (RS) according to the configured scheduling algorithm. When the RS's response passes back through the director, its source address is rewritten before it is returned to the client, completing the load-balancing cycle.`**

#### Environment

```bash
LVS host:     192.168.50.253
Real servers: 192.168.50.251 / 192.168.50.252
Mode:         NAT
```

#### **Director configuration**

#### Install ipvsadm

```bash
yum install ipvsadm -y
```

#### Enable IPv4 forwarding

```bash
sysctl -w net.ipv4.ip_forward=1
```

#### Disable SELinux and firewalld, flush iptables

```bash
setenforce 0
systemctl stop firewalld
iptables -F
```

#### Configure ipvsadm

```bash
ipvsadm -A -t 192.168.50.253:80 -s rr
ipvsadm -a -t 192.168.50.253:80 -r 192.168.50.251:80 -m
ipvsadm -a -t 192.168.50.253:80 -r 192.168.50.252:80 -m
ipvsadm -S
# -A  add a virtual service
# -a  add a real server to a virtual service
# -S  save the rules
# -s  scheduling algorithm
# rr  round-robin scheduling
# -m  masquerading (NAT)
```

#### **RS configuration**

Install a web server

```bash
yum install nginx -y
```

Set the default gateway

```bash
vim /etc/sysconfig/network-scripts/ifcfg-enp0s3
```

```ini
GATEWAY0=192.168.50.253
```

#### Test (from an external host)

`Note: test from outside the subnet. If a client on the same subnet accesses 192.168.50.253 directly, LVS only rewrites the destination address to the RS; when the RS replies, it sees the client on the same subnet and answers it directly, without passing back through LVS for SNAT. The client then drops the reply, because the destination address it sent to does not match the source address of the response.`

Add a port mapping on the router: 113.119.xx.xx:8999 -> 192.168.50.253:80

```bash
curl http://113.119.xx.xx:8999/
```
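To confirm that the virtual service and both real servers were registered, and to watch connection counters during the test, the IPVS table can be listed on the director. A sketch; the output shown is representative of what `ipvsadm -Ln` prints for the NAT service above:

```bash
ipvsadm -Ln    # -L list the IPVS table, -n numeric output
```

```
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.50.253:80 rr
  -> 192.168.50.251:80            Masq    1      0          0
  -> 192.168.50.252:80            Masq    1      0          0
```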
### LVS + DR (`without a VIP`)

**`VS/DR rewrites the MAC address of request packets to deliver them to the RS, and the RS replies to the client directly. Like VS/TUN, VS/DR greatly improves cluster scalability. It avoids the overhead of IP tunnels and does not require the real servers to support a tunneling protocol, but it does require the director and the real servers to have a NIC on the same physical network segment.`**

#### Environment

```bash
LVS host:     192.168.50.253  08:00:27:e6:f4:0a
Real servers: 192.168.50.251  08:00:27:8a:58:c1
              192.168.50.252  08:00:27:29:31:d8
Mode:         DR
```

#### **Director configuration**

#### Install ipvsadm

```bash
yum install ipvsadm -y
```

#### Enable IPv4 forwarding

```bash
sysctl -w net.ipv4.ip_forward=1
```

#### Disable SELinux and firewalld, flush iptables

```bash
setenforce 0
systemctl stop firewalld
iptables -F
```

#### Configure ipvsadm

```bash
ipvsadm -A -t 192.168.50.253:8080 -s rr   # in DR mode the virtual service port must match the real service port
ipvsadm -a -t 192.168.50.253:8080 -r 192.168.50.251:8080 -g -w 1
ipvsadm -a -t 192.168.50.253:8080 -r 192.168.50.252:8080 -g -w 1
ipvsadm -S
# -A  add a virtual service
# -a  add a real server to a virtual service
# -S  save the rules
# -s  scheduling algorithm
# -g  direct routing (DR mode)
# rr  round-robin scheduling
```

Add static ARP entries for the real servers

`Because the director uses its real IP 192.168.50.253 as the service address and the real servers configure 192.168.50.253 on their loopback interface, the real servers will not answer ARP queries for that address, so the director needs static entries. With a separate VIP instead, the loopback carries the VIP, and the real servers still answer ARP for 192.168.50.253 normally.`

```bash
arp -s 192.168.50.251 08:00:27:8a:58:c1
arp -s 192.168.50.252 08:00:27:29:31:d8
```

#### **RS configuration**

Start a web service

```bash
python -m SimpleHTTPServer 8080
```

Tune the kernel ARP parameters

```bash
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
```

Add 192.168.50.253 to a loopback alias

```bash
ifconfig lo:0 192.168.50.253 netmask 255.255.255.255 broadcast 192.168.50.255
```

Add a host route (so that packets addressed to 192.168.50.253 also leave with 192.168.50.253 as the source)

```bash
route add -host 192.168.50.253 dev lo:0
```

### LVS + DR (`with a VIP`)

#### Environment

```bash
VIP:          192.168.50.240
LVS host:     192.168.50.253  08:00:27:e6:f4:0a
Real servers: 192.168.50.251  08:00:27:8a:58:c1
              192.168.50.252  08:00:27:29:31:d8
Mode:         DR
```

#### **Director configuration**

#### Install ipvsadm

```bash
yum install ipvsadm -y
```

#### Enable IPv4 forwarding

```bash
sysctl -w net.ipv4.ip_forward=1
```

#### Disable SELinux and firewalld, flush iptables

```bash
setenforce 0
systemctl stop firewalld
iptables -F
```

#### Configure the VIP

```bash
ifconfig eth0:0 192.168.50.240 netmask 255.255.255.255 broadcast 192.168.50.255
```

#### Configure ipvsadm

```bash
ipvsadm -A -t 192.168.50.240:8080 -s rr   # in DR mode the virtual service port must match the real service port
ipvsadm -a -t 192.168.50.240:8080 -r 192.168.50.251:8080 -g -w 1
ipvsadm -a -t 192.168.50.240:8080 -r 192.168.50.252:8080 -g -w 1
ipvsadm -S
# same flags as above; -g selects DR mode
```

#### **RS configuration**

Start a web service

```bash
python -m SimpleHTTPServer 8080
```

Tune the kernel ARP parameters

```bash
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
```

Add the VIP

```bash
ifconfig lo:0 192.168.50.240 netmask 255.255.255.255 broadcast 192.168.50.255
```

Add a host route (so that packets addressed to 192.168.50.240 also leave with 192.168.50.240 as the source)

```bash
route add -host 192.168.50.240 dev lo:0
```
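The RS-side steps are identical on every real server, so they are worth scripting; a minimal sketch for the DR-with-VIP variant above (the script name and the `VIP` variable are illustrative):

```bash
#!/bin/bash
# lvs-dr-rs-setup.sh -- prepare a real server for LVS DR mode
VIP=192.168.50.240

# do not answer or announce ARP for addresses bound to lo
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

# bind the VIP to a loopback alias with a host mask
ifconfig lo:0 "$VIP" netmask 255.255.255.255 broadcast "$VIP"

# route the VIP through lo so replies leave with the VIP as their source address
route add -host "$VIP" dev lo:0
```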
### LVS + TUNNEL (internal, across subnets)

#### Environment

```bash
Client:          192.168.10.3
Router (Linux):  192.168.10.4 / 192.168.20.4
VIP:             192.168.10.100
LVS host:        192.168.10.5
Real server:     192.168.20.3
Mode:            TUNNEL
```

#### **Director configuration**

Configure the VIP

```bash
ifconfig eth1:1 192.168.10.100 netmask 255.255.255.255 broadcast 192.168.10.100 up
route add -host 192.168.10.100 dev eth1:1
```

Tune kernel parameters

```bash
echo "0" >/proc/sys/net/ipv4/ip_forward
echo "1" >/proc/sys/net/ipv4/conf/all/send_redirects
echo "1" >/proc/sys/net/ipv4/conf/default/send_redirects
echo "1" >/proc/sys/net/ipv4/conf/eth1/send_redirects
```

Configure LVS

```bash
ipvsadm -C
ipvsadm -A -t 192.168.10.100:8080 -s rr
ipvsadm -a -t 192.168.10.100:8080 -r 192.168.20.3 -i   # -i selects tunneling (TUN) mode
```

#### **RS configuration**

Set up the IPIP tunnel

```bash
modprobe ipip
ifconfig tunl0 192.168.10.100 netmask 255.255.255.255 broadcast 192.168.10.100 up
route add -host 192.168.10.100 dev tunl0
```

Tune kernel parameters

```bash
echo 0 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
```

Start the service

```bash
echo "RS SERVER" >index.html
python -m SimpleHTTPServer 8080
```

#### Client test

```bash
curl http://192.168.10.100:8080
```

```
RS SERVER
```

`The test succeeds.`
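Before reaching for tcpdump, a quick health check on the real server is to confirm that the ipip module is loaded and that encapsulated packets are actually arriving on the tunnel interface; a sketch:

```bash
lsmod | grep ipip          # the ipip module must be loaded for tunl0 to exist
ip tunnel show             # lists configured tunnels; tunl0 should appear as ipip
ip -s link show tunl0      # RX counters increase as encapsulated packets arrive
```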
#### Packet capture on the RS

Capture on the tunnel interface

```bash
tcpdump -i tunl0 -nnn -vvv
```

```bash
08:32:34.269392 IP (tos 0x0, ttl 64, id 51443, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.10.3.46734 > 192.168.10.100.8080: Flags [S], cksum 0xdbc2 (correct), seq 2270636359, win 28200, options [mss 1410,sackOK,TS val 1958324 ecr 0,nop,wscale 7], length 0
08:32:34.271186 IP (tos 0x0, ttl 64, id 51444, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.10.3.46734 > 192.168.10.100.8080: Flags [.], cksum 0xa968 (correct), seq 2270636360, ack 1942327447, win 221, options [nop,nop,TS val 1958326 ecr 1821109], length 0
08:32:34.271345 IP (tos 0x0, ttl 64, id 51445, offset 0, flags [DF], proto TCP (6), length 135)
    192.168.10.3.46734 > 192.168.10.100.8080: Flags [P.], cksum 0xa230 (correct), seq 0:83, ack 1, win 221, options [nop,nop,TS val 1958327 ecr 1821109], length 83: HTTP, length: 83
    GET / HTTP/1.1
    User-Agent: curl/7.29.0
    Host: 192.168.10.100:8080
    Accept: */*
```

`The capture above shows that the director has delivered the SYN packet to the RS.`

Capture port 8080 traffic on eth1

```bash
tcpdump -i eth1 port 8080 -nnn -vvv
```

```bash
08:31:57.116678 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.10.100.8080 > 192.168.10.3.46732: Flags [S.], cksum 0x95e6 (incorrect -> 0x6552), seq 2019760809, ack 3127325422, win 27960, options [mss 1410,sackOK,TS val 1783956 ecr 1921172,nop,wscale 7], length 0
08:31:57.118567 IP (tos 0x0, ttl 64, id 8764, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.10.100.8080 > 192.168.10.3.46732: Flags [.], cksum 0x95de (incorrect -> 0xfff2), seq 1, ack 84, win 219, options [nop,nop,TS val 1783958 ecr 1921174], length 0
08:31:57.118811 IP (tos 0x0, ttl 64, id 8765, offset 0, flags [DF], proto TCP (6), length 69)
    192.168.10.100.8080 > 192.168.10.3.46732: Flags [P.], cksum 0x95ef (incorrect -> 0x4015), seq 1:18, ack 84, win 219, options [nop,nop,TS val 1783958 ecr 1921174], length 17: HTTP, length: 17
    HTTP/1.0 200 OK
```

`The capture above shows that the RS sends its SYN+ACK reply (and the HTTP response) directly back to the client.`

**`Conclusion: LVS tunnel forwarding across subnets works.`**

### LVS + TUNNEL (public Internet)

`The greatest strength of IP tunnel mode is that it can forward across network segments; it has none of the topology restrictions of DR or NAT mode. That allows very flexible deployments, even cross-datacenter forwarding, although that is not recommended: first, it generates inter-datacenter traffic and raises cost; second, cross-datacenter forwarding requires binding the LVS site's VIP on real servers in the other site, which a carrier's firewall may treat as IP spoofing and block.`

#### Environment (Vultr VPS)

```bash
Client:       183.54.238.66
VIP:          104.238.150.254
LVS host:     167.179.115.37
Real servers: 202.182.125.31
              139.180.202.67
Mode:         TUNNEL
```

#### **Director configuration**

Request an additional public IP for the director's VPS and configure it on eth0:1

```bash
ifconfig eth0:1 104.238.150.254 netmask 255.255.255.255 broadcast 104.238.150.254 up
route add -host 104.238.150.254 dev eth0:1
```

Tune kernel parameters

```bash
echo "0" >/proc/sys/net/ipv4/ip_forward
echo "1" >/proc/sys/net/ipv4/conf/all/send_redirects
echo "1" >/proc/sys/net/ipv4/conf/default/send_redirects
echo "1" >/proc/sys/net/ipv4/conf/eth0/send_redirects
```

Configure LVS

```bash
ipvsadm -C
ipvsadm -A -t 104.238.150.254:8080 -s rr
ipvsadm -a -t 104.238.150.254:8080 -r 202.182.125.31 -i
ipvsadm -a -t 104.238.150.254:8080 -r 139.180.202.67 -i
```

#### **RS configuration**

Set up the IPIP tunnel

```bash
modprobe ipip
ifconfig tunl0 104.238.150.254 netmask 255.255.255.255 broadcast 104.238.150.254 up
route add -host 104.238.150.254 dev tunl0
```

Tune kernel parameters

```bash
echo 0 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
```

Start the service

```bash
python -m SimpleHTTPServer 8080
```

#### Client test

Open `http://104.238.150.254:8080` in a browser.

`The page does not load!`

#### Packet capture on the RS

Capture on the tunnel interface

```bash
tcpdump -i tunl0 -nnn -vvv
```

```bash
tcpdump: listening on tunl0, link-type RAW (Raw IP), capture size 262144 bytes
09:20:12.713031 IP (tos 0x0, ttl 116, id 29938, offset 0, flags [DF], proto TCP (6), length 48)
    183.54.238.66.63376 > 104.238.150.254.8080: Flags [S], cksum 0x5a81 (correct), seq 3523639844, win 8192, options [mss 1440,nop,nop,sackOK], length 0
09:20:54.742880 IP (tos 0x0, ttl 116, id 30339, offset 0, flags [DF], proto TCP (6), length 52)
    183.54.238.66.63400 > 104.238.150.254.8080: Flags [S], cksum 0xb006 (correct), seq 2103403813, win 8192, options [mss 1440,nop,wscale 2,nop,nop,sackOK], length 0
09:20:54.993380 IP (tos 0x0, ttl 116, id 30342, offset 0, flags [DF], proto TCP (6), length 52)
    183.54.238.66.63402 > 104.238.150.254.8080: Flags [S], cksum 0x8306 (correct), seq 484897436, win 8192, options [mss 1440,nop,wscale 2,nop,nop,sackOK], length 0
09:20:57.744463 IP (tos 0x0, ttl 116, id 30376, offset 0, flags [DF], proto TCP (6), length 52)
```

`The capture above shows that the director has delivered the SYN packets to the RS.`
Capture the client's traffic on port 8080

```bash
tcpdump -i eth0 host 183.54.238.66 and port 8080 -vvv -nnn
```

```bash
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
09:19:13.659753 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 52)
    104.238.150.254.8080 > 183.54.238.66.63354: Flags [S.], cksum 0xa58c (incorrect -> 0x6d88), seq 2281292955, ack 1846351957, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
09:19:14.861060 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 52)
    104.238.150.254.8080 > 183.54.238.66.63354: Flags [S.], cksum 0xa58c (incorrect -> 0x6d88), seq 2281292955, ack 1846351957, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
09:19:16.660745 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 52)
    104.238.150.254.8080 > 183.54.238.66.63354: Flags [S.], cksum 0xa58c (incorrect -> 0x6d88), seq 2281292955, ack 1846351957, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
09:19:19.061063 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 52)
```

`The capture above shows that the RS does send its SYN+ACK replies directly to the client.`

**`Conclusion: LVS tunnel forwarding itself works, and the RS replies correctly; but the replies are treated as IP-spoofed packets by Vultr's firewall and dropped, so this experiment fails.`**

### LVS + FULLNAT

`LVS is mostly deployed in DR or NAT mode today, but both require the real servers and the LVS director to be in the same VLAN, which raises deployment cost. TUNNEL mode can cross VLANs, but the real servers must load the ipip module and be reachable from the external network, which makes the topology complex and hard to operate.`

`To solve this, a new forwarding mode was added to LVS: FULLNAT. It differs from NAT mode in that inbound packets receive SNAT (client IP -> internal IP) in addition to DNAT, so the director and the real servers can communicate across VLANs and the real servers only need internal connectivity.`

LVS FULLNAT in practice: https://www.haxi.cc/archives/LVS-FULLNAT实战.html

Related: LVS-ospf clusters: http://noops.me/?p=974

兜兜    2018-07-09 17:35:10    2018-11-14 14:34:52   

HAProxy High Availability
### Installing HAProxy

#### Installing with yum

```bash
yum install haproxy -y
```

Start, stop, and restart

```bash
systemctl start haproxy
systemctl stop haproxy
systemctl restart haproxy
```

#### Installing from source

Download

```bash
wget http://www.haproxy.org/download/2.0/src/haproxy-2.0.1.tar.gz
tar -xzf haproxy-2.0.1.tar.gz
```

Build and install

```bash
cd haproxy-2.0.1
make PREFIX=/opt/haproxy TARGET=linux2628
make install PREFIX=/opt/haproxy
```

```
- linux22     for Linux 2.2
- linux24     for Linux 2.4 and above (default)
- linux24e    for Linux 2.4 with support for a working epoll (> 0.21)
- linux26     for Linux 2.6 and above
- linux2628   for Linux 2.6.28, 3.x, and above (enables splice and tproxy)
```

Create the configuration file

```bash
mkdir -p /opt/haproxy/conf
vi /opt/haproxy/conf/haproxy.cfg
```

```ini
global                      # global settings
    daemon                  # run in the background as a daemon
    maxconn 256             # at most 256 concurrent connections
    pidfile /opt/haproxy/conf/haproxy.pid   # file holding the HAProxy process ID

defaults                    # default settings
    mode http               # HTTP (layer 7) mode
    timeout connect 5000ms  # timeout for connecting to a server
    timeout client 50000ms  # client response timeout
    timeout server 50000ms  # server response timeout

frontend http-in            # front-end service http-in
    bind *:8080             # listen on port 8080
    default_backend servers

backend servers             # back-end service group
    server server1 127.0.0.1:8000 maxconn 32
```

Create an init script

```bash
vi /etc/init.d/haproxy
```

```bash
#!/bin/sh
set -e

PATH=/sbin:/bin:/usr/sbin:/usr/bin:/opt/haproxy/sbin
PROGDIR=/opt/haproxy
PROGNAME=haproxy
DAEMON=$PROGDIR/sbin/$PROGNAME
CONFIG=$PROGDIR/conf/$PROGNAME.cfg
PIDFILE=$PROGDIR/conf/$PROGNAME.pid
DESC="HAProxy daemon"
SCRIPTNAME=/etc/init.d/$PROGNAME

# Gracefully exit if the package has been removed.
test -x $DAEMON || exit 0

start() {
    echo -e "Starting $DESC: $PROGNAME\n"
    $DAEMON -f $CONFIG
    echo "."
}

stop() {
    echo -e "Stopping $DESC: $PROGNAME\n"
    haproxy_pid="$(cat $PIDFILE)"
    kill $haproxy_pid
    echo "."
}

restart() {
    echo -e "Restarting $DESC: $PROGNAME\n"
    $DAEMON -f $CONFIG -p $PIDFILE -sf $(cat $PIDFILE)
    echo "."
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    *)
        echo "Usage: $SCRIPTNAME {start|stop|restart}" >&2
        exit 1
        ;;
esac

exit 0
```

Make it executable

```bash
chmod +x /etc/init.d/haproxy
```

Start, stop, and restart

```bash
service haproxy start
service haproxy stop
service haproxy restart
```
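Before starting or reloading, the configuration can be validated with haproxy's check mode, which parses the file and reports errors without starting the proxy; a quick sketch against the source-install paths above:

```bash
/opt/haproxy/sbin/haproxy -c -f /opt/haproxy/conf/haproxy.cfg
# prints "Configuration file is valid" on success; otherwise the errors name the offending lines
```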
} case "$1" in start) start ;; stop) stop ;; restart) restart ;; *) echo "Usage: $SCRIPTNAME {start|stop|restart}" >&2 exit 1 ;; esac exit 0 ``` 添加执行权限 ```bash chmod +x /etc/init.d/haproxy ``` 启动、停止和重启 ```bash service haproxy start service haproxy stop service haproxy restart ```   ### HAProxy的配置介绍 ```ini 总览 HAProxy的配置文件共有5个域 global:用于配置全局参数 default:用于配置所有frontend和backend的默认属性 frontend:用于配置前端服务(即HAProxy自身提供的服务)实例 backend:用于配置后端服务(即HAProxy后面接的服务)实例组 listen:frontend+backend的组合配置,可以理解成更简洁的配置方法 global域的关键配置 daemon:指定HAProxy以后台模式运行,通常情况下都应该使用这一配置 user [username] :指定HAProxy进程所属的用户 group [groupname] :指定HAProxy进程所属的用户组 log [address] [device] [maxlevel] [minlevel]:日志输出配置,如log 127.0.0.1 local0 info warning,即向本机rsyslog或syslog的local0输出info到warning级别的日志。其中[minlevel]可以省略。HAProxy的日志共有8个级别,从高到低为emerg/alert/crit/err/warning/notice/info/debug pidfile :指定记录HAProxy进程号的文件绝对路径。主要用于HAProxy进程的停止和重启动作。 maxconn :HAProxy进程同时处理的连接数,当连接数达到这一数值时,HAProxy将停止接收连接请求 frontend域的关键配置 acl [name] [criterion] [flags] [operator] [value]:定义一条ACL,ACL是根据数据包的指定属性以指定表达式计算出的true/false值。如"acl url_ms1 path_beg -i /ms1/"定义了名为url_ms1的ACL,该ACL在请求uri以/ms1/开头(忽略大小写)时为true bind [ip]:[port]:frontend服务监听的端口 default_backend [name]:frontend对应的默认backend disabled:禁用此frontend http-request [operation] [condition]:对所有到达此frontend的HTTP请求应用的策略,例如可以拒绝、要求认证、添加header、替换header、定义ACL等等。 http-response [operation] [condition]:对所有从此frontend返回的HTTP响应应用的策略,大体同上 log:同global域的log配置,仅应用于此frontend。如果要沿用global域的log配置,则此处配置为log global maxconn:同global域的maxconn,仅应用于此frontend mode:此frontend的工作模式,主要有http和tcp两种,对应L7和L4两种负载均衡模式 option forwardfor:在请求中添加X-Forwarded-For Header,记录客户端ip option http-keep-alive:以KeepAlive模式提供服务 option httpclose:与http-keep-alive对应,关闭KeepAlive模式,如果HAProxy主要提供的是接口类型的服务,可以考虑采用httpclose模式,以节省连接数资源。但如果这样做了,接口的调用端将不能使用HTTP连接池 option httplog:开启httplog,HAProxy将会以类似Apache HTTP或Nginx的格式来记录请求日志 option tcplog:开启tcplog,HAProxy将会在日志中记录数据包在传输层的更多属性 stats uri [uri]:在此frontend上开启监控页面,通过[uri]访问 stats refresh [time]:监控数据刷新周期 stats auth [user]:[password]:监控页面的认证用户名密码 timeout client [time]:指连接创建后,客户端持续不发送数据的超时时间 timeout http-request [time]:指连接创建后,客户端没能发送完整HTTP请求的超时时间,主要用于防止DoS类攻击,即创建连接后,以非常缓慢的速度发送请求包,导致HAProxy连接被长时间占用 use_backend [backend] if|unless [acl]:与ACL搭配使用,在满足/不满足ACL时转发至指定的backend backend域的关键配置 acl:同frontend域 balance [algorithm]:在此backend下所有server间的负载均衡算法,常用的有roundrobin和source,完整的算法说明见官方文档configuration.html#4.2-balance cookie:在backend server间启用基于cookie的会话保持策略,最常用的是insert方式,如cookie HA_STICKY_ms1 insert indirect nocache,指HAProxy将在响应中插入名为HA_STICKY_ms1的cookie,其值为对应的server定义中指定的值,并根据请求中此cookie的值决定转发至哪个server。indirect代表如果请求中已经带有合法的HA_STICK_ms1 cookie,则HAProxy不会在响应中再次插入此cookie,nocache则代表禁止链路上的所有网关和缓存服务器缓存带有Set-Cookie头的响应。 default-server:用于指定此backend下所有server的默认设置。具体见下面的server配置。 disabled:禁用此backend http-request/http-response:同frontend域 log:同frontend域 mode:同frontend域 option forwardfor:同frontend域 option http-keep-alive:同frontend域 option httpclose:同frontend域 option httpchk [METHOD] [URL] [VERSION]:定义以http方式进行的健康检查策略。如option httpchk GET /healthCheck.html HTTP/1.1 option httplog:同frontend域 option tcplog:同frontend域 server [name] [ip]:[port] [params]:定义backend中的一个后端server,[params]用于指定这个server的参数,常用的包括有: check:指定此参数时,HAProxy将会对此server执行健康检查,检查方法在option httpchk中配置。同时还可以在check后指定inter, rise, fall三个参数,分别代表健康检查的周期、连续几次成功认为server UP,连续几次失败认为server DOWN,默认值是inter 2000ms rise 2 fall 3 cookie [value]:用于配合基于cookie的会话保持,如cookie ms1.srv1代表交由此server处理的请求会在响应中写入值为ms1.srv1的cookie(具体的cookie名则在backend域中的cookie设置中指定) 
### Layer-7 web proxying with HAProxy

Two back-end web servers, 192.168.50.251 and 192.168.50.252: URLs matching /app1/ go to 192.168.50.251, and URLs matching /app2/ go to 192.168.50.252.

```bash
vim /etc/haproxy/haproxy.cfg
```

```ini
global
    log 127.0.0.1 local0 info
    log 127.0.0.1 local1 warning
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 40000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode tcp
    log global
    option tcplog
    option dontlognull
    option http-server-close
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    maxconn 3000

frontend public
    mode http                  # path-based ACLs need HTTP (layer 7) mode
    bind :80
    log-format %ft\ %b/%s
    acl url_app1 path_beg -i /app1/
    acl url_app2 path_beg -i /app2/
    use_backend app1 if url_app1
    use_backend app2 if url_app2
    default_backend default_servers

backend app1
    mode http
    server web1 192.168.50.251:5555 maxconn 300 check

backend app2
    mode http
    server web2 192.168.50.252:5555 maxconn 300 check

backend default_servers
    mode http
    balance roundrobin
    cookie SERVER insert indirect   # enable session stickiness
    server web1 192.168.50.251:5555 maxconn 300 cookie server1 check
    server web2 192.168.50.252:5555 maxconn 300 cookie server2 check

listen status
    bind *:7777
    mode http
    stats enable
    stats refresh 10s
    stats uri /haproxy
    stats realm Haproxy\ Statistics
    stats auth admin:admin
    stats hide-version
```

Configure rsyslog logging

```bash
vim /etc/rsyslog.conf
```

```ini
local0.*    /var/log/haproxy_info.log
local1.*    /var/log/haproxy_warning.log
```

### Multiplexing port 8080 with HAProxy

HAProxy serves both SSH and web traffic on port 8080 by inspecting the binary value of the first three payload bytes of each request and routing to the matching backend.

```bash
vim /etc/haproxy/haproxy.cfg
```

```ini
global
    log 127.0.0.1 local0 info
    log 127.0.0.1 local1 warning
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 40000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode tcp
    log global
    option tcplog
    option dontlognull
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    maxconn 3000

frontend public
    mode tcp
    bind :8080
    log global
    option tcplog
    log-format %ft\ %b/%s
    tcp-request inspect-delay 5s
    acl is_ssh req.payload(0,3) -m bin 535348
    # first three payload bytes: "SSH"
    acl is_http req.payload(0,3) -m bin 474554 504f53 505554 44454c 4f5054 484541 434f4e 545241
    # first three payload bytes of GET, POST, PUT, DELETE, OPTIONS, HEAD, CONNECT, TRACE
    tcp-request content accept if is_http
    tcp-request content accept if is_ssh
    use_backend web if is_http
    use_backend ssh if is_ssh

backend web
    mode http
    balance roundrobin
    server web1 192.168.50.251:5555 check
    server web2 192.168.50.252:5555 check

backend ssh
    mode tcp
    timeout server 3h
    server ssh 192.168.50.252:22
```
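With this configuration loaded, both protocols can be exercised through the same port; a sketch, assuming the proxy runs on 192.168.50.253:

```bash
curl http://192.168.50.253:8080/     # first bytes "GET" -> routed to the web backend
ssh -p 8080 root@192.168.50.253      # first bytes "SSH" -> routed to 192.168.50.252:22
```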

兜兜    2018-07-02 17:58:12    2019-11-14 14:34:47   

High Availability Keepalived Load Balancing IPVS
### Installing Keepalived

```bash
yum install keepalived -y
```

### A simple failover setup

Reference: https://docs.oracle.com/cd/E37670_01/E41138/html/section_uxg_lzh_nr.html

A typical Keepalived high-availability configuration consists of one master server and one or more backup servers. One or more virtual IP addresses, defined as VRRP instances, are assigned to the master server's network interface so that it can serve network clients. The backup servers listen for the multicast VRRP advertisements the master sends periodically; the default advertisement interval is one second. If a backup node fails to receive three consecutive VRRP advertisements, the backup server with the highest assigned priority takes over as master and assigns the virtual IP addresses to its own network interface. If several backup servers share the same priority, the one with the highest IP address value becomes master.

The following example uses Keepalived to build a simple failover configuration on two servers. One server acts as master and the other as backup, with the master given the higher priority. The diagram shows how the virtual IP address 10.0.0.100 is initially assigned to the master (10.0.0.71); when the master fails, the backup (10.0.0.72) becomes the new master and is assigned the virtual IP 10.0.0.100.

![](https://docs.oracle.com/cd/E37670_01/E41138/html/images/keepalived1.png)

#### Master configuration

```bash
vim /etc/keepalived/keepalived.conf
```

```ini
global_defs {
   notification_email {
     root@mydomain.com
   }
   notification_email_from svr1@mydomain.com
   smtp_server localhost
   smtp_connect_timeout 30
}

vrrp_instance VRRP1 {
    state MASTER
    # Specify the network interface to which the virtual address is assigned
    interface eth0
    # The virtual router ID must be unique to each VRRP instance that you define
    virtual_router_id 41
    # Set the value of priority higher on the master server than on a backup server
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1066
    }
    virtual_ipaddress {
        10.0.0.100/24
    }
}
```

#### Backup configuration

```bash
vim /etc/keepalived/keepalived.conf
```

```ini
global_defs {
   notification_email {
     root@mydomain.com
   }
   notification_email_from svr2@mydomain.com
   smtp_server localhost
   smtp_connect_timeout 30
}

vrrp_instance VRRP1 {
    state BACKUP
    # Specify the network interface to which the virtual address is assigned
    interface eth0
    virtual_router_id 41
    # Set the value of priority lower on the backup server than on the master server
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1066
    }
    virtual_ipaddress {
        10.0.0.100/24
    }
}
```

If the master server (svr1) fails, keepalived assigns the virtual IP address 10.0.0.100/24 to the eth0 interface of the backup server (svr2), which becomes the master.

#### Check the VIP assigned to the master's interface

```bash
ip addr list eth0
```

```
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:cb:a6:8d brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.72/24 brd 10.0.0.255 scope global eth0
    inet 10.0.0.100/24 scope global eth0
    inet6 fe80::a00:27ff:fecb:a68d/64 scope link
       valid_lft forever preferred_lft forever
```

#### Watch the backup's log

```bash
tail -f /var/log/messages
```

```
...51:55 ... VRRP_Instance(VRRP1) Entering BACKUP STATE
...53:08 ... VRRP_Instance(VRRP1) Transition to MASTER STATE
...53:09 ... VRRP_Instance(VRRP1) Entering MASTER STATE
...53:09 ... VRRP_Instance(VRRP1) setting protocol VIPs.
...53:09 ... VRRP_Instance(VRRP1) Sending gratuitous ARPs on eth0 for 10.0.0.100
```
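A minimal failover drill under the addresses above: stop keepalived on the master and watch the VIP move to the backup.

```bash
# on the master: simulate a failure
systemctl stop keepalived

# on the backup: after about three missed advertisements (~3 s) the VIP appears
ip addr show eth0 | grep 10.0.0.100

# on the master: restart keepalived; its higher priority lets it take the VIP back
systemctl start keepalived
```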
### Keepalived NAT-mode load balancing (two NICs)

The following example uses Keepalived in NAT mode to build a simple failover and load-balancing configuration on two servers. One server acts as master and the other as backup, with the master given the higher priority. Each server has two network interfaces: one connected to the side facing the external network (192.168.1.0/24), the other to the internal network (10.0.0.0/24), where two web servers are reachable.

![](https://docs.oracle.com/cd/E37670_01/E41138/html/images/keepalived2.png)

#### Master configuration

This configuration resembles "A simple failover setup", with a vrrp_sync_group section added so the network interfaces fail over together, and a virtual_server section defining the real back-end servers Keepalived load-balances across. lb_kind is set to NAT (network address translation), meaning the Keepalived server handles inbound and outbound network traffic on behalf of the back-end servers.

```bash
vim /etc/keepalived/keepalived.conf
```

```ini
global_defs {
   notification_email {
     root@mydomain.com
   }
   notification_email_from svr1@mydomain.com
   smtp_server localhost
   smtp_connect_timeout 30
}

vrrp_sync_group VRRP1 {
    # Group the external and internal VRRP instances so they fail over together
    group {
        external
        internal
    }
}

vrrp_instance external {
    state MASTER
    interface eth0
    virtual_router_id 91
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1215
    }
    # Define the virtual IP address for the external network interface
    virtual_ipaddress {
        192.168.1.1/24
    }
}

vrrp_instance internal {
    state MASTER
    interface eth1
    virtual_router_id 92
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1215
    }
    # Define the virtual IP address for the internal network interface
    virtual_ipaddress {
        10.0.0.100/24
    }
}

# Define a virtual HTTP server on the virtual IP address 192.168.1.1
virtual_server 192.168.1.1 80 {
    delay_loop 10
    protocol TCP
    # Use round-robin scheduling in this example
    lb_algo rr
    # Use NAT to hide the back-end servers
    lb_kind NAT
    # Persistence of client sessions times out after 2 hours
    persistence_timeout 7200
    real_server 10.0.0.71 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            connect_port 80
        }
    }
    real_server 10.0.0.72 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            connect_port 80
        }
    }
}
```

#### Backup configuration

```bash
vim /etc/keepalived/keepalived.conf
```

```ini
global_defs {
   notification_email {
     root@mydomain.com
   }
   notification_email_from svr2@mydomain.com
   smtp_server localhost
   smtp_connect_timeout 30
}

vrrp_sync_group VRRP1 {
    # Group the external and internal VRRP instances so they fail over together
    group {
        external
        internal
    }
}

vrrp_instance external {
    state BACKUP
    interface eth0
    virtual_router_id 91
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1215
    }
    # Define the virtual IP address for the external network interface
    virtual_ipaddress {
        192.168.1.1/24
    }
}

vrrp_instance internal {
    state BACKUP
    interface eth1
    virtual_router_id 92
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1215
    }
    # Define the virtual IP address for the internal network interface
    virtual_ipaddress {
        10.0.0.100/24
    }
}

# Define a virtual HTTP server on the virtual IP address 192.168.1.1
virtual_server 192.168.1.1 80 {
    delay_loop 10
    protocol TCP
    # Use round-robin scheduling in this example
    lb_algo rr
    # Use NAT to hide the back-end servers
    lb_kind NAT
    # Persistence of client sessions times out after 2 hours
    persistence_timeout 7200
    real_server 10.0.0.71 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            connect_port 80
        }
    }
    real_server 10.0.0.72 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            connect_port 80
        }
    }
}
```

#### Firewall configuration

When Keepalived load-balances servers on an internal network in NAT mode, the Keepalived server handles all inbound and outbound traffic and hides the back-end servers: in outgoing packets, it rewrites the real servers' source IP addresses to the virtual IP address of the external network interface.

Enable IP masquerading (master/backup)

```bash
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
service iptables save
```

Allow forwarding between the interfaces (master/backup)

```bash
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -j REJECT --reject-with icmp-host-prohibited
service iptables save
```

Open the service port (HTTP in this example)

```bash
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
service iptables save
```

#### Route via the VIP (on the back-end servers)

```bash
ip route add default via 10.0.0.100 dev eth0
ip route show
```

```
default via 10.0.0.100 dev eth0
10.0.0.0/24 dev eth0  proto kernel  scope link  src 10.0.0.71
```

Make it permanent

```bash
echo "default via 10.0.0.100 dev eth0" > /etc/sysconfig/network-scripts/route-eth0
```
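The virtual_server block makes keepalived program the kernel's IPVS table, so the result can be inspected with ipvsadm on whichever node currently holds the VIPs; the output below is representative for the configuration above:

```bash
ipvsadm -Ln
```

```
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.1:80 rr persistent 7200
  -> 10.0.0.71:80                 Masq    1      0          0
  -> 10.0.0.72:80                 Masq    1      0          0
```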
### Keepalived NAT-mode load balancing (single NIC)

`Note: the VIP cannot be reached directly from the same subnet. When a real server answers a client on its own segment, the reply does not pass back through keepalived (no routing is needed), so the packets cannot be rewritten and the connection fails. When keepalived and the real servers share one internal network, DR mode is recommended instead, as it proxies with higher performance; this setup is only an experiment, for deeper understanding.`

This configuration resembles "Keepalived NAT-mode load balancing (two NICs)".

![](https://files.ynotes.cn/keepalived_single_network.png)

#### Master configuration

```bash
vim /etc/keepalived/keepalived.conf
```

```ini
! Configuration File for keepalived

global_defs {
   notification_email {
     test01@ynotes.cn
   }
   notification_email_from haproxy1@ynotes.cn
   smtp_server localhost
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict        # enforce strict VRRP compliance; this mode does not support unicast peers, so it is commented out
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface enp0s3
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.240
    }
}

virtual_server 192.168.50.240 5555 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    real_server 192.168.50.251 5555 {
        weight 1
        TCP_CHECK {
            connect_port 5555
            connect_timeout 3
        }
    }
    real_server 192.168.50.252 5555 {
        weight 1
        TCP_CHECK {
            connect_port 5555
            connect_timeout 3
        }
    }
}
```

#### Backup configuration

```bash
vim /etc/keepalived/keepalived.conf
```

```ini
! Configuration File for keepalived

global_defs {
   notification_email {
     test01@ynotes.cn
   }
   notification_email_from haproxy2@ynotes.cn
   smtp_server localhost
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict        # enforce strict VRRP compliance; this mode does not support unicast peers, so it is commented out
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface enp0s3
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.240
    }
}

virtual_server 192.168.50.240 5555 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    real_server 192.168.50.251 5555 {
        weight 1
        TCP_CHECK {
            connect_port 5555
            connect_timeout 3
        }
    }
    real_server 192.168.50.252 5555 {
        weight 1
        TCP_CHECK {
            connect_port 5555
            connect_timeout 3
        }
    }
}
```

#### Enable packet forwarding (master/backup)

```bash
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
```

```
net.ipv4.ip_forward = 1
```

#### Firewall configuration

SNAT the back-end servers' return traffic (master/backup)

```bash
iptables -t nat -A POSTROUTING -o enp0s3 -s 192.168.50.251/32 -j SNAT --to-source 192.168.50.240   # rewrite source address 192.168.50.251 to 192.168.50.240
iptables -t nat -A POSTROUTING -o enp0s3 -s 192.168.50.252/32 -j SNAT --to-source 192.168.50.240
service iptables save   # persist the rules
```

#### Change the default gateway (back-end servers)

```bash
vim /etc/sysconfig/network-scripts/ifcfg-enp0s3   # edit the default gateway
```

```bash
GATEWAY0=192.168.50.240   # set this to the VIP
```

```bash
systemctl restart network   # restart networking
ip route                    # check the routing table
```

```
default via 192.168.50.240 dev enp0s3 proto static metric 100
172.10.0.0/24 dev br-975d989973b4 proto kernel scope link src 172.10.0.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev br-b64efd3fb71e proto kernel scope link src 172.18.0.1
172.20.0.0/16 dev docker_gwbridge proto kernel scope link src 172.20.0.1
192.168.50.0/24 dev enp0s3 proto kernel scope link src 192.168.50.252 metric 100
192.168.100.0/24 dev br-ea58d9f6ef1f proto kernel scope link src 192.168.100.1
```
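To confirm the SNAT rules are in place and actually matching return traffic, the nat table can be listed with per-rule counters; a quick check (the counter values shown are illustrative):

```bash
iptables -t nat -L POSTROUTING -n -v
# pkts bytes target prot opt in out     source           destination
#   42  2520 SNAT   all  --  *  enp0s3  192.168.50.251   0.0.0.0/0    to:192.168.50.240
#   17  1040 SNAT   all  --  *  enp0s3  192.168.50.252   0.0.0.0/0    to:192.168.50.240
```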
### Keepalived DR-mode load balancing

The following example uses Keepalived in direct routing (DR) mode to build failover and load balancing on two servers. One server acts as master and the other as backup, with the master given the higher priority. Each Keepalived server has a single network interface, and the servers are connected to the same network segment (10.0.0.0/24), on which two web servers are reachable.

The diagram below shows the Keepalived master server with network addresses 10.0.0.11 and 10.0.0.1 (virtual), and the backup server with 10.0.0.12. The web servers websvr1 and websvr2 have the addresses 10.0.0.71 and 10.0.0.72 respectively; in addition, both web servers configure the virtual IP 10.0.0.1 so that they accept packets with that destination address. The master receives incoming requests and redirects them to the web servers, which respond directly.

![](https://docs.oracle.com/cd/E37670_01/E41138/html/images/keepalived3.png)

#### Master configuration

Similar to the NAT-mode configuration, but lb_kind is set to DR (direct routing), meaning the Keepalived server handles all inbound packets while outbound replies go from the back-end servers straight to the clients, bypassing the Keepalived server. This reduces load on the Keepalived server but is less secure, since every back-end server needs external access. Some deployments use an extra network interface with a dedicated gateway on each web server to handle response traffic.

```bash
vim /etc/keepalived/keepalived.conf
```

```ini
global_defs {
   notification_email {
     root@mydomain.com
   }
   notification_email_from svr1@mydomain.com
   smtp_server localhost
   smtp_connect_timeout 30
}

vrrp_instance external {
    state MASTER
    interface eth0
    virtual_router_id 91
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1215
    }
    virtual_ipaddress {
        10.0.0.1/24
    }
}

virtual_server 10.0.0.1 80 {
    delay_loop 10
    protocol TCP
    lb_algo rr
    # Use direct routing
    lb_kind DR
    persistence_timeout 7200
    real_server 10.0.0.71 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            connect_port 80
        }
    }
    real_server 10.0.0.72 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            connect_port 80
        }
    }
}
```

#### Backup configuration

```bash
vim /etc/keepalived/keepalived.conf
```

```ini
global_defs {
   notification_email {
     root@mydomain.com
   }
   notification_email_from svr2@mydomain.com
   smtp_server localhost
   smtp_connect_timeout 30
}
```
```ini
vrrp_instance external {
    state BACKUP
    interface eth0
    virtual_router_id 91
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1215
    }
    virtual_ipaddress {
        10.0.0.1/24
    }
}

virtual_server 10.0.0.1 80 {
    delay_loop 10
    protocol TCP
    lb_algo rr
    # Use direct routing
    lb_kind DR
    persistence_timeout 7200
    real_server 10.0.0.71 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            connect_port 80
        }
    }
    real_server 10.0.0.72 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            connect_port 80
        }
    }
}
```

#### Firewall configuration

Open the service port, HTTP in this example (master/backup)

```bash
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
service iptables save
```

#### Back-end server (RS) configuration

```bash
echo "net.ipv4.conf.lo.arp_ignore = 1" >> /etc/sysctl.conf    # only answer ARP requests whose target IP is a local address on the receiving interface
echo "net.ipv4.conf.lo.arp_announce = 2" >> /etc/sysctl.conf
echo "net.ipv4.conf.all.arp_ignore = 1" >> /etc/sysctl.conf
echo "net.ipv4.conf.all.arp_announce = 2" >> /etc/sysctl.conf
sysctl -p
```

```
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```

```bash
echo "ifconfig lo:0 10.0.0.1 broadcast 10.0.0.255 netmask 255.255.255.255 up" >> /etc/rc.local
ifconfig lo:0 10.0.0.1 broadcast 10.0.0.255 netmask 255.255.255.255 up
ip addr show lo
```

```
2: lo: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:cb:a6:8d brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.72/24 brd 10.0.0.255 scope global eth0
    inet 10.0.0.1/24 brd 10.0.0.255 scope global secondary eth0
    inet6 fe80::a00:27ff:fecb:a68d/64 scope link
       valid_lft forever preferred_lft forever
```
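A minimal end-to-end check from a client on the 10.0.0.0/24 segment, under the addresses used above (with persistence_timeout 7200, repeated requests from one client stick to one web server):

```bash
curl http://10.0.0.1/           # answered by websvr1 or websvr2

systemctl stop keepalived       # on the master: simulate a failure
ip addr show eth0               # on the backup: 10.0.0.1 should now be listed
curl http://10.0.0.1/           # service continues through the backup
```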

兜兜    2018-07-02 11:31:47    2018-11-14 14:35:02   

HAProxy Keepalived High Availability Load Balancing
### Preparation

```bash
HAProxy/Keepalived:  192.168.50.250 (Master)
                     192.168.50.253 (Backup)
Web servers:         192.168.50.251
                     192.168.50.252
VIP:                 192.168.50.240
```

### HAProxy (Master)

#### Install HAProxy

```bash
yum install haproxy -y
```

#### Enable IP forwarding

```bash
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
sysctl -p
```

```
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
```

#### Configure HAProxy

```bash
cat /etc/haproxy/haproxy.cfg
```

```ini
global
    log 127.0.0.1 local2            # log locally via the local2 facility
    chroot /var/lib/haproxy         # change the working directory
    pidfile /var/run/haproxy.pid
    maxconn 80000                   # per-process connection limit
    user haproxy                    # run as this user
    group haproxy                   # run as this group
    daemon                          # run in the background
    nbproc 1                        # number of processes when running as a daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode http                       # mode {http|tcp|health}: http is layer 7, tcp is layer 4, health is a health-check responder
    log global
    option httplog                  # HTTP log format
    option dontlognull              # do not log empty connections
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch               # redispatch the session to another server when a connection fails or drops
    retries 3                       # connection retries against a server before giving up
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s             # connect timeout
    timeout client 1m               # client timeout
    timeout server 1m               # server timeout
    timeout http-keep-alive 10s
    timeout check 10s               # health-check timeout
    maxconn 80000                   # per-process connection limit

# front-end web proxy
frontend web
    bind *:5555
    #acl www hdr(host) -i www.ynotes.cn      # ACL rule: -i matches the requested host; requests for www.ynotes.cn would go to backend www
    #acl image hdr(host) -i files.ynotes.cn
    #use_backend www if www
    #use_backend image if image
    default_backend web

#backend www
#    mode http
#    balance roundrobin
#    server web2 192.168.50.252:5555 check

#backend image
#    mode http
#    balance roundrobin
#    server web1 192.168.50.251:5555 check

backend web
    balance roundrobin
    server web1 192.168.50.251:5555 check inter 2000 fall 3
    server web2 192.168.50.252:5555 check inter 2000 fall 3

listen status                        # enable the statistics page
    bind *:7777
    mode http
    stats enable
    stats refresh 10s
    stats uri /haproxy
    stats realm Haproxy\ Statistics
    stats auth admin:admin
    stats hide-version
```

#### Enable HAProxy logging

Edit the rsyslog configuration

```bash
vim /etc/rsyslog.conf
```

```ini
# accept log messages on UDP port 514
$ModLoad imudp
$UDPServerRun 514

# add this in the rules section
local2.*    /var/log/haproxy.log
# messages sent to facility local2 are written to haproxy.log; in "local2.*", local2 is the (predefined) facility and * means all severity levels
```

Restart rsyslog

```bash
systemctl restart rsyslog
```

#### Configure the two nginx servers (192.168.50.251 / 192.168.50.252)

```bash
cat /etc/nginx/nginx.conf
```

```ini
...
server {
    listen 5555;
    location / {
        root /var/www/haproxy/node;
    }
}
...
```

On 192.168.50.251

```bash
echo 192.168.50.251 >/var/www/haproxy/node/index.html
```

On 192.168.50.252

```bash
echo 192.168.50.252 >/var/www/haproxy/node/index.html
```
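The global section above exposes a runtime stats socket at /var/lib/haproxy/stats; once haproxy is running, live backend state can be queried through it. A sketch, assuming the socat utility is installed:

```bash
yum install socat -y
echo "show stat" | socat unix-connect:/var/lib/haproxy/stats stdio | cut -d, -f1,2,18
# prints proxy name, server name, and status (UP/DOWN); field 18 of the CSV is the status
```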
#### Starting, stopping, and enabling HAProxy

Start/stop

```bash
systemctl start haproxy
systemctl stop haproxy
```

Enable/disable at boot

```bash
systemctl enable haproxy
systemctl disable haproxy
```

#### Open the proxied service ports in the firewall

iptables

```bash
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 5555 -j ACCEPT
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 7777 -j ACCEPT
```

firewalld

```bash
firewall-cmd --zone=<zone> --add-port=5555/tcp --permanent
# <zone> is the zone applied to the interface; check it with: firewall-cmd --get-zone-of-interface=<interface>
# move an interface to a zone with: firewall-cmd --permanent --zone=<zone> --change-interface=<interface>
firewall-cmd --zone=<zone> --add-port=7777/tcp --permanent
firewall-cmd --reload
```

#### Test the HAProxy proxy

```bash
while true; do curl http://192.168.50.253:5555; sleep 1; done
```

```
192.168.50.252
192.168.50.251
192.168.50.252
192.168.50.251
192.168.50.252
^C
```

#### Statistics page

http://192.168.50.253:7777/haproxy

![](https://files.ynotes.cn/haproxy_statistics.png)

#### Session stickiness with HAProxy

Enable stickiness matched on the value of a SERVER cookie

```bash
cat /etc/haproxy/haproxy.cfg
```

```ini
    #balance roundrobin              # comment out this line
    cookie SERVER insert
    server web1 192.168.50.251:5555 cookie 1 check
    server web2 192.168.50.252:5555 cookie 2 check
```

Test

```bash
while true; do curl http://192.168.50.253:5555 --cookie "SERVER=1"; sleep 1; done
```

```
192.168.50.251
192.168.50.251
192.168.50.251
^C
```

```bash
while true; do curl http://192.168.50.253:5555 --cookie "SERVER=2"; sleep 1; done
```

```
192.168.50.252
192.168.50.252
192.168.50.252
^C
```

Enable stickiness matched on a cookie-name prefix, using "~" as the separator; with SESSIONID, for example, the format is: Set-Cookie: SESSIONID=N~Session_ID;

```bash
cat /etc/haproxy/haproxy.cfg
```

```ini
    #balance roundrobin              # comment out this line
    cookie SESSIONID prefix
    server web1 192.168.50.251:5555 cookie 1 check
    server web2 192.168.50.252:5555 cookie 2 check
```

Test

```bash
while true; do curl http://192.168.50.253:5555 --cookie "SESSIONID=1~AAA"; sleep 1; done
```

```
192.168.50.251
192.168.50.251
192.168.50.251
^C
```

```bash
while true; do curl http://192.168.50.253:5555 --cookie "SESSIONID=2~AAA"; sleep 1; done
```

```
192.168.50.252
192.168.50.252
192.168.50.252
^C
```

### HAProxy (Backup)

`Same as the master.`

### Keepalived (Master)

#### Install keepalived

```bash
yum install keepalived -y
```

#### Configure Keepalived

```bash
vim /etc/keepalived/keepalived.conf
```

```bash
global_defs {
   notification_email {
     test01@ynotes.cn
   }
   notification_email_from haproxy1@ynotes.cn
   smtp_server localhost
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight -4
}

vrrp_instance VI_1 {
    state MASTER
    interface enp0s3
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.240
    }
    track_script {
        chk_haproxy
    }
}
```

```bash
cat /etc/keepalived/check_haproxy.sh
```

```bash
#!/bin/bash
# if haproxy is not running, try to restart it; if it still is not running,
# stop keepalived so the backup node takes over the VIP
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
    systemctl start haproxy
    sleep 2   # must stay below the interval 5 set in vrrp_script
    if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
        systemctl stop keepalived
    fi
fi
```

#### Enable IP forwarding (already enabled above; required when configuring keepalived standalone)

```bash
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
```

```
net.ipv4.ip_forward = 1
```

#### Starting, stopping, and enabling Keepalived

Start/stop

```bash
systemctl start keepalived
systemctl stop keepalived
```

Enable/disable at boot

```bash
systemctl enable keepalived
systemctl disable keepalived
```
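With the chk_haproxy script in place, a haproxy crash is first self-healed locally and only escalates to a VRRP failover if the restart fails; this can be observed on the master (the sleep matches the interval 5 above):

```bash
systemctl stop haproxy                       # simulate a haproxy failure
sleep 6                                      # wait for at least one vrrp_script interval
systemctl status haproxy                     # the check script should have restarted it
ip a | grep 192.168.50.240                   # the VIP stays put because the restart succeeded
```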
### Keepalived (Backup)

#### Install keepalived

```bash
yum install keepalived -y
```

#### Configure Keepalived

```bash
vim /etc/keepalived/keepalived.conf
```

```bash
global_defs {
   notification_email {
     test01@ynotes.cn
   }
   notification_email_from haproxy1@ynotes.cn
   smtp_server localhost
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight -4
}

vrrp_instance VI_1 {
    state BACKUP
    interface enp0s3
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.240
    }
    track_script {
        chk_haproxy
    }
}
```

`Everything else is the same as the master.`

#### Test

Stop keepalived on 192.168.50.253

```bash
systemctl stop keepalived
```

Check the VIP on 192.168.50.253

```bash
ip a|grep 192.168.50.240   # no output
```

Check the VIP on 192.168.50.250

```bash
ip a|grep 192.168.50.240   # prints the VIP
```

```bash
inet 192.168.50.240/32 scope global enp0s3
```

Access 192.168.50.240:5555

```bash
curl http://192.168.50.240:5555   # 192.168.50.250 has taken over the VIP and the page is reachable
```

```
192.168.50.252
```