兜兜    2018-08-10 16:11:21    2018-08-10 16:11:21   

docker container fastdfs Dockerfile fdfs
#### **Environment:**

System: **CentOS 7**
Docker version: **18.03.1-ce, build 9ee9f40**
Container network: **bridged docker0**
Container subnet: **10.10.0.0/24**

#### **Dockerfile**

```bash
FROM alpine:3.6
MAINTAINER ynotes.cn <admin@ynotes.cn>

# Environment variables
ENV NGINX_PORT 80
ENV FASTDFS_PORT 22122

# Build arguments
ARG HOME=/root
ARG FASTDFS_VERSION=5.11
ARG LIBFASTCOMMON_VERSION=1.0.38
ARG FASTDFS_NGINX_MODULE_VERSION=1.20
ARG NGINX_VERSION=1.12.1

# Download the sources
RUN cd ${HOME} \
    && sed -i 's#http://[^/]*/\(.*\)$#http://mirrors.aliyun.com/\1#g' /etc/apk/repositories \
    && apk update \
    && apk add --no-cache --virtual .build-deps bash gcc libc-dev make openssl-dev pcre-dev zlib-dev linux-headers curl gnupg libxslt-dev gd-dev geoip-dev \
    && curl -fLS https://github.com/happyfish100/fastdfs/archive/V${FASTDFS_VERSION}.tar.gz -o V${FASTDFS_VERSION}.tar.gz \
    && curl -fLS https://github.com/happyfish100/libfastcommon/archive/V${LIBFASTCOMMON_VERSION}.tar.gz -o V${LIBFASTCOMMON_VERSION}.tar.gz \
    && curl -fLS https://github.com/happyfish100/fastdfs-nginx-module/archive/V${FASTDFS_NGINX_MODULE_VERSION}.tar.gz -o V${FASTDFS_NGINX_MODULE_VERSION}.tar.gz \
    && curl -fSL http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz -o nginx-${NGINX_VERSION}.tar.gz \
    && tar xf V${FASTDFS_VERSION}.tar.gz \
    && tar xf V${LIBFASTCOMMON_VERSION}.tar.gz \
    && tar xf V${FASTDFS_NGINX_MODULE_VERSION}.tar.gz \
    && tar zxf nginx-${NGINX_VERSION}.tar.gz

# Build and install
RUN cd ${HOME}/libfastcommon-${LIBFASTCOMMON_VERSION}/ \
    && ./make.sh \
    && ./make.sh install \
    && cd ${HOME}/fastdfs-${FASTDFS_VERSION}/ \
    && ./make.sh \
    && ./make.sh install \
    && sed "s@/home/yuqing/fastdfs@/data/fastdfs/tracker@g" /etc/fdfs/tracker.conf.sample > /etc/fdfs/tracker.conf \
    && sed "s@/home/yuqing/fastdfs@/data/fastdfs/storage@g" /etc/fdfs/storage.conf.sample > /etc/fdfs/storage.conf \
    && sed "s@/home/yuqing/fastdfs@/data/fastdfs/storage@g" /etc/fdfs/client.conf.sample > /etc/fdfs/client.conf \
    && sed -i 's#CORE_INCS=.*#CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"#g' ${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/config \
    && sed -i 's#ngx_module_incs=.*#ngx_module_incs="/usr/include/fastdfs /usr/include/fastcommon/"#g' ${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/config \
    && chmod u+x ${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/config \
    && cd ${HOME}/nginx-${NGINX_VERSION} \
    && ./configure --add-module=${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src \
    && make && make install

# Configure
RUN cp ${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/mod_fastdfs.conf /etc/fdfs/ \
    && sed -i "s#^store_path0.*#store_path0 = /data/fastdfs/storage#g" /etc/fdfs/mod_fastdfs.conf \
    && sed -i "s#^url_have_group_name.*#url_have_group_name = true#g" /etc/fdfs/mod_fastdfs.conf \
    && cd ${HOME}/fastdfs-${FASTDFS_VERSION}/conf/ \
    && cp http.conf mime.types /etc/fdfs/ \
    && echo -e "worker_processes 2;\nevents { \nworker_connections 10240; \n}\nhttp { \ninclude mime.types;\ndefault_type application/octet-stream;\nsendfile on;\nkeepalive_timeout 65;\nserver {\nlisten NGINX_PORT;\nserver_name localhost;\nlocation ~/group([0-9])/M00 {\nngx_fastdfs_module;\n}\n}\n}">/usr/local/nginx/conf/nginx.conf

# Clean up
RUN rm -rf ${HOME}/* \
    && apk del .build-deps gcc libc-dev make openssl-dev linux-headers curl gnupg libxslt-dev gd-dev geoip-dev \
    && apk add bash pcre-dev zlib-dev

# Startup script
RUN sed -i "s/NGINX_PORT/$NGINX_PORT/g" /usr/local/nginx/conf/nginx.conf \
    && echo -e "mkdir -p /data/fastdfs/storage/data\nmkdir -p /data/fastdfs/tracker\nln -s /data/fastdfs/storage/data /data/fastdfs/storage/data/M00\nHOST_IP=\$(ip addr |grep 'scope global eth0'|awk '{ print \$2}'|awk -F/ '{ print \$1 }')\nsed -i "s/^tracker_server=.*$/tracker_server=\$HOST_IP:$FASTDFS_PORT/g" /etc/fdfs/storage.conf\nsed -i "s/^tracker_server=.*$/tracker_server=\$HOST_IP:$FASTDFS_PORT/g" /etc/fdfs/mod_fastdfs.conf\n/etc/init.d/fdfs_trackerd start \n/etc/init.d/fdfs_storaged start\n/usr/local/nginx/sbin/nginx\ntail -f /usr/local/nginx/logs/access.log" >/start.sh \
    && chmod +x /start.sh

EXPOSE 80 22122 23000
ENTRYPOINT ["/bin/bash","/start.sh"]
```
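The 10.10.0.0/24 container subnet assumed above is not Docker's default for docker0. If your bridge is still on the stock 172.17.0.0/16 range, one way to pin it is the `bip` key in `/etc/docker/daemon.json`; a minimal sketch, assuming you can restart the Docker daemon:

```bash
# Point docker0 at the 10.10.0.0/24 range used in this article.
# Requires a daemon restart; existing containers keep their old addresses until recreated.
cat >/etc/docker/daemon.json <<'EOF'
{
  "bip": "10.10.0.1/24"
}
EOF
systemctl restart docker
```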
#### **Build the image**

```bash
$ docker build -t fastdfs-nginx:v5.11 .
```

#### **Start the container**

```bash
$ docker run -p 80:80 -p 22122:22122 -p 23000:23000 -v /root/docker_demo/fastdfs/data:/data/fastdfs fastdfs-nginx:v5.11
```

#### **Test machine 1 (CentOS 7)**

Add a route:
```bash
$ route add -net 10.10.0.0 netmask 255.255.255.0 gw 192.168.50.252
```
Upload an image:
```bash
$ fdfs_test /etc/fdfs/client.conf upload zzzz.jpg
```
```
group_name=group1, remote_filename=M00/00/00/CgoAAVtxAhqAWpxyAAE7WHOlIPs425.jpg
source ip address: 10.10.0.1
file timestamp=2018-08-13 11:59:22
file size=80728
file crc32=1940201723
example file url: http://10.10.0.1/group1/M00/00/00/CgoAAVtxAhqAWpxyAAE7WHOlIPs425.jpg
storage_upload_slave_by_filename
group_name=group1, remote_filename=M00/00/00/CgoAAVtxAhqAWpxyAAE7WHOlIPs425_big.jpg
source ip address: 10.10.0.1
file timestamp=2018-08-13 11:59:22
file size=80728
file crc32=1940201723
example file url: http://10.10.0.1/group1/M00/00/00/CgoAAVtxAhqAWpxyAAE7WHOlIPs425_big.jpg
```

#### **Test machine 2 (Windows 7)**

Add a route:
```cmd
> route add 10.10.0.0 mask 255.255.255.0 192.168.50.252
```
Access the image:
![](https://files.ynotes.cn/18-8-13/79351609.jpg)
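Back on test machine 1 you can also confirm over plain HTTP that the nginx inside the container serves the uploaded file, using the URL from the fdfs_test output above:

```bash
# Expect "HTTP/1.1 200 OK" from the ngx_fastdfs_module vhost inside the container
curl -I http://10.10.0.1/group1/M00/00/00/CgoAAVtxAhqAWpxyAAE7WHOlIPs425.jpg
```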


兜兜    2018-08-09 15:07:32    2020-03-08 18:41:51   

nginx https X-Forwarded-Proto scheme
#### **nginx + tomcat**

nginx configuration:
```bash
proxy_set_header Host $host;
proxy_set_header Cookie $http_cookie;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://127.0.0.1:8181;
```
tomcat configuration:
```xml
<!-- Pitfall: on Tomcat 7, if the internal IPs are in the 172 range you must set internalProxies.
     Official explanation: http://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/catalina/valves/RemoteIpValve.html -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       protocolHeader="X-Forwarded-Proto"
       internalProxies="172\.16.\d{1,3}\.\d{1,3}"
       protocolHeaderHttpsValue="https"/>
```
For internalProxies, see: http://blog.inford.net/doc/171

#### **Aliyun SLB + nginx + tomcat**

Aliyun SLB configuration:

![](https://files.ynotes.cn/18-8-9/93864542.jpg)

nginx configuration:
```bash
proxy_set_header Host $host;
proxy_set_header Cookie $http_cookie;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;   # $http_x_forwarded_proto carries the protocol passed in by the SLB
proxy_pass http://127.0.0.1:8181;
```
tomcat configuration:
```xml
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       protocolHeader="X-Forwarded-Proto"
       protocolHeaderHttpsValue="https"/>
```

#### With the configuration above normal requests work, but when a page issues a 302 redirect the browser is sent back to an http URL, producing "requested an insecure XMLHttpRequest"

nginx configuration (nginx + tomcat):
```bash
proxy_redirect http:// $scheme://;   # rewrite http:// in 302 Location headers to $scheme
```
nginx configuration (Aliyun SLB + nginx + tomcat):
```bash
proxy_redirect http:// $http_x_forwarded_proto://;   # rewrite http:// in 302 Location headers to $http_x_forwarded_proto
```
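A quick way to verify the proxy_redirect rule is to request a URL that 302-redirects while faking the forwarded protocol header, then inspect the Location header. A minimal sketch; the path /login and the loopback test host are assumptions:

```bash
# Without the proxy_redirect rule the Location header comes back as http://...;
# with it, the scheme should follow the forwarded protocol.
curl -sI -H 'X-Forwarded-Proto: https' http://127.0.0.1/login | grep -i '^Location'
```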


兜兜    2018-08-09 15:01:14    2019-11-14 14:32:44   

mysql PXC
### 1. Environment

System: `CentOS7`
Database: `Percona-XtraDB-Cluster-57`
Servers:
`master1`: `172.16.0.100/db1`
`master2`: `172.16.0.101/db2`
`master3`: `172.16.0.102/db3`

### 2. Preparation

`a. Remove mysql-community`
```bash
yum remove mysql-community-client mysql-community-server -y   # only needed if mysql-community is installed
```
`b. Stop the firewall`
`c. Disable SELinux`

### 3. Install the PXC cluster

`All nodes`

Install the Percona repository:
```bash
yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm -y
```
Install Percona-XtraDB-Cluster-57:
```bash
yum install Percona-XtraDB-Cluster-57 -y
```
Start MySQL:
```bash
systemctl start mysql.service
```
Get the initial password:
```bash
grep 'temporary password' /var/log/mysqld.log
```
```
2019-08-09T03:22:26.358453Z 1 [Note] A temporary password is generated for root@localhost: so*WrNqjm3(e
```
Change the root password:
```sql
mysql -u root -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'rootPass';
mysql> exit
```
Stop MySQL:
```bash
systemctl stop mysql.service
```

### 4. Bootstrap the first node

Edit the configuration `Master1`
```bash
vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
```
```ini
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://            # an empty gcomm:// bootstraps the cluster; once the other nodes have joined, change this back to the full address list and restart the node
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_cluster_name=pxc-cluster-test       # cluster name
wsrep_node_name=db1                       # node name
wsrep_node_address=172.16.0.100           # this node's IP
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2            # SST transfer method
wsrep_sst_auth=sstuser:passw0rd           # SST credentials
```
Bootstrap the node:
```bash
systemctl start mysql@bootstrap.service
```
Check the cluster status:
```sql
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | ee7be278-ba54-11e9-9621-8ee7979a72d4 |
| ...                        | ...                                  |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| ...                        | ...                                  |
| wsrep_incoming_addresses   | 172.16.0.100:3306                    |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| ...                        | ...                                  |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
```
`The output shows a cluster of 1 node, in the Synced state.`

Create the SST user:
```sql
mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'passw0rd';
mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql> FLUSH PRIVILEGES;
```
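Before adding the remaining nodes it is worth confirming that the bootstrapped node really reports a primary component of size 1. A minimal check, assuming the root password set above:

```bash
# Should print wsrep_cluster_size=1, wsrep_cluster_status=Primary, wsrep_local_state_comment=Synced
mysql -uroot -prootPass -e "SHOW STATUS WHERE Variable_name IN
  ('wsrep_cluster_size','wsrep_cluster_status','wsrep_local_state_comment');"
```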
### 5. Add nodes to the cluster

#### Add Master2 to the cluster

Edit the configuration `Master2`
```bash
vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
```
```ini
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://172.16.0.100,172.16.0.101,172.16.0.102
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_cluster_name=pxc-cluster-test
wsrep_node_name=db2
wsrep_node_address=172.16.0.101
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd
```
Start the node:
```bash
systemctl start mysql
```
Check the cluster status:
```sql
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | ee7be278-ba54-11e9-9621-8ee7979a72d4 |
| ...                        | ...                                  |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| ...                        | ...                                  |
| wsrep_incoming_addresses   | 172.16.0.101:3306,172.16.0.100:3306  |
| wsrep_cluster_size         | 2                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| ...                        | ...                                  |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
```
`The cluster now has 2 nodes: 172.16.0.101:3306,172.16.0.100:3306`

#### Add Master3 to the cluster

Edit the configuration `Master3`
```bash
vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
```
```ini
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://172.16.0.100,172.16.0.101,172.16.0.102
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_cluster_name=pxc-cluster-test
wsrep_node_name=db3
wsrep_node_address=172.16.0.102
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd
```
Start the node:
```bash
systemctl start mysql
```
Check the cluster status:
```sql
mysql> show status like 'wsrep%';
+----------------------------+-------------------------------------------------------+
| Variable_name              | Value                                                 |
+----------------------------+-------------------------------------------------------+
| wsrep_local_state_uuid     | ee7be278-ba54-11e9-9621-8ee7979a72d4                  |
| ...                        | ...                                                   |
| wsrep_local_state          | 4                                                     |
| wsrep_local_state_comment  | Synced                                                |
| ...                        | ...                                                   |
| wsrep_incoming_addresses   | 172.16.0.102:3306,172.16.0.101:3306,172.16.0.100:3306 |
| wsrep_cluster_size         | 3                                                     |
| wsrep_cluster_status       | Primary                                               |
| wsrep_connected            | ON                                                    |
| ...                        | ...                                                   |
| wsrep_ready                | ON                                                    |
+----------------------------+-------------------------------------------------------+
```
`The cluster now has 3 nodes: 172.16.0.102:3306,172.16.0.101:3306,172.16.0.100:3306`

#### Re-join Master1 with the normal configuration

Stop MySQL `Master1`
```bash
systemctl stop mysql@bootstrap.service
```
Edit the configuration `Master1`
```bash
vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
```
```ini
...
#wsrep_cluster_address=gcomm://                                        # before
wsrep_cluster_address=gcomm://172.16.0.100,172.16.0.101,172.16.0.102   # after
...
```
Start the database `Master1`
```bash
systemctl start mysql
```
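With all three members back on the normal gcomm:// list, a small loop confirms that every node sees the same cluster. A sketch, assuming the root password from section 3 and network access to port 3306:

```bash
# Each node should report wsrep_cluster_size = 3
for h in 172.16.0.100 172.16.0.101 172.16.0.102; do
  echo "== $h =="
  mysql -h"$h" -uroot -prootPass -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
done
```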
### 6. Verify the cluster

Insert test data on Master2 `Master2`
```sql
mysql> CREATE DATABASE percona;
mysql> USE percona;
mysql> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
mysql> INSERT INTO example VALUES (1, 'percona1');
```
Query the data on every node `All nodes`
```sql
mysql> SELECT * FROM percona.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       1 | percona1  |
+---------+-----------+
```
`Every node returns the same data, so replication works and the cluster is up.`
### 7. Converting an async replica into a PXC node online

#### **`Note: in this test db1 is the master and db3 the replica`**

#### Reset db3

`a. Clear the old data and re-initialize the database`
```bash
systemctl stop mysql
mv /var/lib/mysql/ /var/lib/mysql_bak/
mkdir /var/lib/mysql
mv /etc/percona-xtradb-cluster.conf.d/wsrep.cnf /tmp    # remove the PXC-specific configuration
chown -R mysql.mysql /var/lib/mysql
systemctl start mysql                                   # start and initialize the database
```
`b. Get the initial password`
```bash
grep 'temporary password' /var/log/mysqld.log
```
```
2019-08-09T03:22:26.358453Z 1 [Note] A temporary password is generated for root@localhost: so*WrNqjm3(e
```
`c. Change the root password`
```sql
mysql -u root -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'rootPass';
mysql> exit
```

#### Create a replication user `db1`
```sql
mysql> create user 'bak'@'172.16.0.%' identified by '123456';
mysql> grant replication slave on *.* to 'bak'@'172.16.0.%';
mysql> flush privileges;
```
#### Dump the database and copy it to db3 `db1`
```bash
mysqldump -hlocalhost -uroot -pxxxxxx --single-transaction -A --master-data=2 > all.sql
scp all.sql db3:/root
```
#### Restore the dump on the replica `db3`
```bash
mysql -hlocalhost -uroot -prootPass </root/all.sql
```
#### Point the replica at the master `db3`
```sql
mysql> CHANGE MASTER TO MASTER_HOST='172.16.0.100',MASTER_USER='bak',MASTER_PASSWORD='123456',MASTER_LOG_FILE='db1-bin.000004', MASTER_LOG_POS=88754;
mysql> start slave;
```
#### Stop the replica and note its position `db3`
```sql
mysql> stop slave;            -- stop the slave
mysql> show slave status\G    -- check the replication position
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 172.16.0.100
                  Master_User: bak
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: db1-bin.000004
          Read_Master_Log_Pos: 190214
               Relay_Log_File: db3-relay-bin.000003
                Relay_Log_Pos: 101778
        Relay_Master_Log_File: db1-bin.000004
             Slave_IO_Running: No
            Slave_SQL_Running: No
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 190214
```
`The output shows the sync file is db1-bin.000004, at position 190214.`

#### Reset the slave configuration on db3 (`so the next start syncs via PXC directly`) `db3`
```sql
mysql> reset slave all;
```
#### Stop MySQL on the replica `db3`
```bash
systemctl stop mysql
```
#### On the master, find the Xid for the synced position `db1`
```bash
mysqlbinlog db1-bin.000004 |grep Xid|grep 190214
```
```
#190809  8:57:53 server id 1  end_log_pos 190214 CRC32 0x4cfb3e3b  Xid = 675
```
`Position 190214 corresponds to Xid 675.`

#### Copy grastate.dat from the master to the replica `db1`
```bash
scp grastate.dat db3:/var/lib/mysql
```
#### Edit grastate.dat on the replica `db3`
```bash
cat grastate.dat
```
```ini
# GALERA saved state
version: 2.1
uuid:    ee7be278-ba54-11e9-9621-8ee7979a72d4
seqno:   675    # changed from -1 to the Xid found above
safe_to_bootstrap: 0
```
```bash
chown mysql.mysql /var/lib/mysql/grastate.dat
```
#### Add the PXC configuration back `db3`
```bash
vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
```
```ini
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://172.16.0.100,172.16.0.101,172.16.0.102
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_cluster_name=pxc-cluster-test
wsrep_node_name=db3
wsrep_node_address=172.16.0.102
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd
```
#### Start the database `db3`
```bash
systemctl start mysql
```
#### Check the cluster status `db3`
```sql
mysql> show status like 'wsrep%';
+----------------------------------+-------------------------------------------------------+
| Variable_name                    | Value                                                 |
+----------------------------------+-------------------------------------------------------+
| wsrep_local_state_uuid           | ee7be278-ba54-11e9-9621-8ee7979a72d4                  |
| wsrep_protocol_version           | 9                                                     |
| wsrep_last_applied               | 1431                                                  |
| wsrep_last_committed             | 1431                                                  |
| wsrep_replicated                 | 0                                                     |
| wsrep_replicated_bytes           | 0                                                     |
| wsrep_repl_keys                  | 0                                                     |
| wsrep_repl_keys_bytes            | 0                                                     |
| wsrep_repl_data_bytes            | 0                                                     |
| wsrep_repl_other_bytes           | 0                                                     |
| wsrep_received                   | 247                                                   |
| wsrep_received_bytes             | 77425                                                 |
| wsrep_local_commits              | 0                                                     |
| wsrep_local_cert_failures        | 0                                                     |
| wsrep_local_replays              | 0                                                     |
| wsrep_local_send_queue           | 0                                                     |
| wsrep_local_send_queue_max       | 1                                                     |
| wsrep_local_send_queue_min       | 0                                                     |
| wsrep_local_send_queue_avg       | 0.000000                                              |
| wsrep_local_recv_queue           | 0                                                     |
| wsrep_local_recv_queue_max       | 2                                                     |
| wsrep_local_recv_queue_min       | 0                                                     |
| wsrep_local_recv_queue_avg       | 0.004049                                              |
| wsrep_local_cached_downto        | 1191                                                  |
| wsrep_flow_control_paused_ns     | 0                                                     |
| wsrep_flow_control_paused        | 0.000000                                              |
| wsrep_flow_control_sent          | 0                                                     |
| wsrep_flow_control_recv          | 0                                                     |
| wsrep_flow_control_interval      | [ 173, 173 ]                                          |
| wsrep_flow_control_interval_low  | 173                                                   |
| wsrep_flow_control_interval_high | 173                                                   |
| wsrep_flow_control_status        | OFF                                                   |
| wsrep_cert_deps_distance         | 75.099585                                             |
| wsrep_apply_oooe                 | 0.654762                                              |
| wsrep_apply_oool                 | 0.009259                                              |
| wsrep_apply_window               | 5.033069                                              |
| wsrep_commit_oooe                | 0.000000                                              |
| wsrep_commit_oool                | 0.000000                                              |
| wsrep_commit_window              | 2.513228                                              |
| wsrep_local_state                | 4                                                     |
| wsrep_local_state_comment        | Synced                                                |
| wsrep_cert_index_size            | 116                                                   |
| wsrep_cert_bucket_count          | 210                                                   |
| wsrep_gcache_pool_size           | 88288                                                 |
| wsrep_causal_reads               | 0                                                     |
| wsrep_cert_interval              | 0.000000                                              |
| wsrep_open_transactions          | 0                                                     |
| wsrep_open_connections           | 0                                                     |
| wsrep_ist_receive_status         |                                                       |
| wsrep_ist_receive_seqno_start    | 0                                                     |
| wsrep_ist_receive_seqno_current  | 0                                                     |
| wsrep_ist_receive_seqno_end      | 0                                                     |
| wsrep_incoming_addresses         | 172.16.0.101:3306,172.16.0.100:3306,172.16.0.102:3306 |
+----------------------------------+-------------------------------------------------------+
```

Reference: https://www.percona.com/doc/percona-xtradb-cluster/5.7/index.html#introduction
Reference book: MySQL王者晋级之路 / 张甦
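One operational note worth adding: if the whole cluster is ever shut down, Galera refuses a plain start and the cluster must be bootstrapped again from the most advanced node. A sketch of the usual procedure, based on the grastate.dat fields shown above:

```bash
# On every node, check which one has the highest seqno (or safe_to_bootstrap: 1)
cat /var/lib/mysql/grastate.dat

# On that node only, mark it bootstrappable if needed, then bootstrap:
sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
systemctl start mysql@bootstrap.service

# Start the remaining nodes normally; they will IST/SST from the bootstrapped one:
systemctl start mysql
```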


兜兜    2018-08-08 14:46:59    2019-11-14 14:33:01   

mysql GTID
### Environment

System: `CentOS7`
Database: `MySQL5.7`
Master node: `172.16.0.100(node1)`
Slave node: `172.16.0.101(node2)`

#### Switching from GTID replication to classic replication

Check the replication status `Slave`
```sql
mysql> show slave status\G
```
```
          Slave_IO_State: Waiting for master to send event
             Master_Host: 172.16.0.100
             Master_User: replica
             Master_Port: 3306
           Connect_Retry: 60
         Master_Log_File: on.000001
     Read_Master_Log_Pos: 7227
          Relay_Log_File: db3-relay-bin.000002
           Relay_Log_Pos: 2374
   Relay_Master_Log_File: on.000001
        Slave_IO_Running: Yes
       Slave_SQL_Running: Yes
         Replicate_Do_DB:
     Replicate_Ignore_DB:
      Replicate_Do_Table:
  Replicate_Ignore_Table:
 Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
              Last_Errno: 0
              Last_Error:
            Skip_Counter: 0
     Exec_Master_Log_Pos: 7227
...
```
Stop the slave and set MASTER_AUTO_POSITION=0 `Slave`
```sql
mysql> stop slave;
mysql> CHANGE MASTER TO MASTER_HOST='172.16.0.100', MASTER_PORT=3306,MASTER_USER='replica', MASTER_PASSWORD='xxxxx',MASTER_AUTO_POSITION=0,MASTER_LOG_FILE='on.000001',MASTER_LOG_POS=7227;
mysql> start slave;
```
Set gtid_mode to on_permissive on both master and slave `Master/Slave`
```sql
mysql> set global gtid_mode=on_permissive;
```
Set gtid_mode to off_permissive on both `Master/Slave`
```sql
mysql> set global gtid_mode=off_permissive;
```
Turn GTID off on both `Master/Slave`
```sql
mysql> set global enforce_gtid_consistency=off;
mysql> set global gtid_mode=off;
```
Persist gtid_mode=off and enforce_gtid_consistency=off in my.cnf `Master/Slave`
```bash
cat /etc/my.cnf
```
```ini
[mysqld]
gtid_mode=off
enforce_gtid_consistency=off
```
Verify that classic replication works:
_1. Insert data on the Master._
_2. On the Slave, check whether Executed_Gtid_Set grows; if it stays unchanged while the new data still arrives, the switch to classic replication succeeded._

#### Switching from classic replication to GTID replication

Set enforce_gtid_consistency=warn on both `Master/Slave`
```sql
mysql> set global enforce_gtid_consistency=warn;
```
Set enforce_gtid_consistency=on on both `Master/Slave`
```sql
mysql> set global enforce_gtid_consistency=on;
```
Set gtid_mode to off_permissive on both `Master/Slave`
```sql
mysql> set global gtid_mode=off_permissive;
```
Set gtid_mode to on_permissive on both `Master/Slave`
```sql
mysql> set global gtid_mode=on_permissive;
```
Confirm Ongoing_anonymous_transaction_count is 0 on the slave (0 means no anonymous transactions are still pending and it is safe to continue) `Slave`
```sql
mysql> show global status like 'Ongoing_anonymous_transaction_count';
```
```
+-------------------------------------+-------+
| Variable_name                       | Value |
+-------------------------------------+-------+
| Ongoing_anonymous_transaction_count | 0     |
+-------------------------------------+-------+
```
Set gtid_mode=on on both `Master/Slave`
```sql
mysql> set global gtid_mode=on;
```
Switch the slave to GTID auto-positioning `Slave`
```sql
mysql> stop slave;
mysql> change master to master_auto_position=1;
mysql> start slave;
```
Verify that GTID replication works:
_1. Insert data on the Master._
_2. On the Slave, check whether Executed_Gtid_Set grows; if it does, the switch to GTID replication succeeded._
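Since every step above must be applied on both sides in lockstep, a small loop helps confirm the master and slave agree on the mode before moving to the next step. A sketch; the host list matches the two nodes above and you are prompted for the root password:

```bash
# Check that master and slave report the same GTID settings after each stage
for h in 172.16.0.100 172.16.0.101; do
  echo "== $h =="
  mysql -h"$h" -uroot -p -e \
    "SELECT @@global.gtid_mode AS gtid_mode, @@global.enforce_gtid_consistency AS enforce_gtid_consistency;"
done
```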


兜兜    2018-08-05 11:05:07    2019-11-14 14:32:56   

mysql MHA
### Environment

System: `CentOS7`
Software: `MySQL5.7` `MHA 0.56`
VIP: `172.16.0.222`
Servers:
`node1 (Master)`: `172.16.0.100`
`node2 (Slave / standby master)`: `172.16.0.101`
`node3 (Slave / MHA manager)`: `172.16.0.102`

### Initial setup

**`a. Passwordless SSH between the three machines`**
**`b. Hostnames set and resolvable via the hosts file on all three machines`**
**`c. Both slave nodes configured as MySQL replicas of the master`**
**`d. MySQL settings required by MHA`**
```bash
vim /etc/my.cnf
```
```ini
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
bind-address = 0.0.0.0
server-id = 1                        # must be unique per node
log_bin = mysql-bin                  # required on the master and the standby master
binlog_ignore_db = mysql
binlog_ignore_db = infomation_schema
binlog_ignore_db = performance_schema
#relay_log_purge = 0                 # set on slaves: keep relay logs after they are applied
```
`Note: the binlog must be enabled on the master and the standby master, and MHA requires identical replication filter rules (binlog-do-db, replicate-ignore-db) on every node.`

**`e. Enable semi-synchronous replication`**
`Run on all nodes:`
```sql
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';   -- install the master-side semi-sync plugin
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';     -- install the slave-side semi-sync plugin
mysql> set global rpl_semi_sync_slave_enabled=on;    -- enable slave semi-sync
mysql> set global rpl_semi_sync_master_enabled=on;   -- enable master semi-sync
mysql> stop slave io_thread;
mysql> start slave io_thread;
mysql> show variables like '%semi%';   -- check the semi-sync variables
mysql> show plugins;                   -- list the loaded plugins
```
**`f. Configure GTID replication (optional)`**
See https://ynotes.cn/blog/article_detail/203 for converting between classic and GTID replication.

### Install MHA

Download MHA Manager & MHA Node:
```bash
curl http://www.mysql.gr.jp/frame/modules/bwiki/index.php\?plugin\=attach\&pcmd\=open\&file\=mha4mysql-manager-0.56-0.el6.noarch.rpm\&refer\=matsunobu -o mha4mysql-manager-0.56-0.el6.noarch.rpm
curl http://www.mysql.gr.jp/frame/modules/bwiki/index.php\?plugin\=attach\&pcmd\=open\&file\=mha4mysql-node-0.56-0.el6.noarch.rpm\&refer\=matsunobu -o mha4mysql-node-0.56-0.el6.noarch.rpm
```
Install MHA Node `all nodes`
```bash
yum localinstall -y mha4mysql-node-0.56-0.el6.noarch.rpm
```
Install MHA Manager `node3 (MHA manager node)`
```bash
yum localinstall -y mha4mysql-manager-0.56-0.el6.noarch.rpm
```

### Configure the MHA manager node `node3 (MHA manager node)`

Create the MHA directories:
```bash
mkdir -p /usr/local/mha
mkdir -p /etc/mha
```
Create mha.conf:
```bash
vim /etc/mha/mha.conf
```
```ini
[server default]
manager_log=/var/log/mha/manager.log
master_ip_failover_script=/etc/mha/scripts/master_ip_failover
master_ip_online_change_script=/etc/mha/scripts/master_ip_failover
# mysql user and password
user=mha_admin
password=123456
# ssh user
ssh_user=root
# working directory on the manager
manager_workdir=/usr/local/mha
# working directory on MySQL servers
remote_workdir=/usr/local/mha
repl_user=replica
repl_password=123456

[server1]
hostname=172.16.0.100

[server2]
hostname=172.16.0.101

[server3]
hostname=172.16.0.102
```
Create the VIP failover script:
```bash
vim /etc/mha/scripts/master_ip_failover
```
```perl
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);
my $vip = '172.16.0.222/24';                            # virtual IP address
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig eth1:$key $vip";    # adjust for your NIC; use eth0:$key if the interface is eth0
my $ssh_stop_vip  = "/sbin/ifconfig eth1:$key down";    # adjust for your NIC; use eth0:$key if the interface is eth0
$ssh_user = "root";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage master ip address at global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        # all arguments are passed.
        # If you manage master ip address at global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc) here.
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        # note: 'cluster1' is a hardcoded hostname left over from the original script;
        # it causes the harmless "Could not resolve hostname cluster1" message in the check below
        `ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enable the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disable the VIP on the old_master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
```

Create the replication user (replica) `node1-2 (master / standby master)`
```sql
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replica'@'172.16.0.%' IDENTIFIED BY '123456';
mysql> FLUSH PRIVILEGES;
```
Create the management user (mha_admin, matching the `user=` entry in mha.conf) `all nodes`
```sql
mysql> GRANT ALL ON *.* TO 'mha_admin'@'172.16.0.%' IDENTIFIED BY '123456';
mysql> FLUSH PRIVILEGES;
```

### Configure the VIP on the master (added manually the first time) `node1`
```bash
/sbin/ifconfig eth1:1 172.16.0.222
```

### Test SSH logins between the nodes `node3 (MHA manager node)`
```bash
masterha_check_ssh --conf=/etc/mha/mha.conf
```
```
Mon Aug 5 03:17:00 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Mon Aug 5 03:17:00 2019 - [info] Reading application default configuration from /etc/mha/mha.conf..
Mon Aug 5 03:17:00 2019 - [info] Reading server configuration from /etc/mha/mha.conf..
Mon Aug 5 03:17:00 2019 - [info] Starting SSH connection tests..
Mon Aug 5 03:17:01 2019 - [debug]
Mon Aug 5 03:17:00 2019 - [debug]  Connecting via SSH from root@172.16.0.100(172.16.0.100:22) to root@172.16.0.101(172.16.0.101:22)..
Mon Aug 5 03:17:00 2019 - [debug]   ok.
Mon Aug 5 03:17:00 2019 - [debug]  Connecting via SSH from root@172.16.0.100(172.16.0.100:22) to root@172.16.0.102(172.16.0.102:22)..
Mon Aug 5 03:17:01 2019 - [debug]   ok.
Mon Aug 5 03:17:01 2019 - [debug]
Mon Aug 5 03:17:00 2019 - [debug]  Connecting via SSH from root@172.16.0.101(172.16.0.101:22) to root@172.16.0.100(172.16.0.100:22)..
Mon Aug 5 03:17:01 2019 - [debug]   ok.
Mon Aug 5 03:17:01 2019 - [debug]  Connecting via SSH from root@172.16.0.101(172.16.0.101:22) to root@172.16.0.102(172.16.0.102:22)..
Mon Aug 5 03:17:01 2019 - [debug]   ok.
Mon Aug 5 03:17:02 2019 - [debug]
Mon Aug 5 03:17:01 2019 - [debug]  Connecting via SSH from root@172.16.0.102(172.16.0.102:22) to root@172.16.0.100(172.16.0.100:22)..
Mon Aug 5 03:17:01 2019 - [debug]   ok.
Mon Aug 5 03:17:01 2019 - [debug]  Connecting via SSH from root@172.16.0.102(172.16.0.102:22) to root@172.16.0.101(172.16.0.101:22)..
Mon Aug 5 03:17:02 2019 - [debug]   ok.
Mon Aug 5 03:17:02 2019 - [info] All SSH connection tests passed successfully.
```
`The output shows passwordless SSH is configured correctly.`
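Before running the replication check below it is also worth confirming that the VIP added manually on node1 is actually up; a quick check, with the interface name and address taken from the failover script above:

```bash
# On node1: the eth1:1 alias should carry the VIP
ip addr show eth1 | grep 172.16.0.222

# From any other node: the VIP should answer
ping -c 2 172.16.0.222
```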
### Check the MHA replication configuration `node3 (MHA manager node)`
```bash
masterha_check_repl --conf=/etc/mha/mha.conf
```
```
Mon Aug 5 03:39:17 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Mon Aug 5 03:39:17 2019 - [info] Reading application default configuration from /etc/mha/mha.conf..
Mon Aug 5 03:39:17 2019 - [info] Reading server configuration from /etc/mha/mha.conf..
Mon Aug 5 03:39:17 2019 - [info] MHA::MasterMonitor version 0.56.
Mon Aug 5 03:39:18 2019 - [info] GTID failover mode = 0
Mon Aug 5 03:39:18 2019 - [info] Dead Servers:
Mon Aug 5 03:39:18 2019 - [info] Alive Servers:
Mon Aug 5 03:39:18 2019 - [info]   172.16.0.100(172.16.0.100:3306)
Mon Aug 5 03:39:18 2019 - [info]   172.16.0.101(172.16.0.101:3306)
Mon Aug 5 03:39:18 2019 - [info]   172.16.0.102(172.16.0.102:3306)
Mon Aug 5 03:39:18 2019 - [info] Alive Slaves:
Mon Aug 5 03:39:18 2019 - [info]   172.16.0.101(172.16.0.101:3306)  Version=5.7.27-log (oldest major version between slaves) log-bin:enabled
Mon Aug 5 03:39:18 2019 - [info]     Replicating from 172.16.0.100(172.16.0.100:3306)
Mon Aug 5 03:39:18 2019 - [info]   172.16.0.102(172.16.0.102:3306)  Version=5.7.27-log (oldest major version between slaves) log-bin:enabled
Mon Aug 5 03:39:18 2019 - [info]     Replicating from 172.16.0.100(172.16.0.100:3306)
Mon Aug 5 03:39:18 2019 - [info] Current Alive Master: 172.16.0.100(172.16.0.100:3306)
Mon Aug 5 03:39:18 2019 - [info] Checking slave configurations..
Mon Aug 5 03:39:18 2019 - [info]  read_only=1 is not set on slave 172.16.0.101(172.16.0.101:3306).
Mon Aug 5 03:39:18 2019 - [warning]  relay_log_purge=0 is not set on slave 172.16.0.101(172.16.0.101:3306).
Mon Aug 5 03:39:18 2019 - [info]  read_only=1 is not set on slave 172.16.0.102(172.16.0.102:3306).
Mon Aug 5 03:39:18 2019 - [warning]  relay_log_purge=0 is not set on slave 172.16.0.102(172.16.0.102:3306).
Mon Aug 5 03:39:18 2019 - [info] Checking replication filtering settings..
Mon Aug 5 03:39:18 2019 - [info]  binlog_do_db= , binlog_ignore_db= infomation_schema,mysql,performance_schema
Mon Aug 5 03:39:18 2019 - [info]  Replication filtering check ok.
Mon Aug 5 03:39:18 2019 - [info] GTID (with auto-pos) is not supported
Mon Aug 5 03:39:18 2019 - [info] Starting SSH connection tests..
Mon Aug 5 03:39:20 2019 - [info] All SSH connection tests passed successfully.
Mon Aug 5 03:39:20 2019 - [info] Checking MHA Node version..
Mon Aug 5 03:39:21 2019 - [info]  Version check ok.
Mon Aug 5 03:39:21 2019 - [info] Checking SSH publickey authentication settings on the current master..
Mon Aug 5 03:39:21 2019 - [info] HealthCheck: SSH to 172.16.0.100 is reachable.
Mon Aug 5 03:39:21 2019 - [info] Master MHA Node version is 0.56.
Mon Aug 5 03:39:21 2019 - [info] Checking recovery script configurations on 172.16.0.100(172.16.0.100:3306)..
Mon Aug 5 03:39:21 2019 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/var/lib/mysql,/var/log/mysql --output_file=/usr/local/mha/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000008
Mon Aug 5 03:39:21 2019 - [info]   Connecting to root@172.16.0.100(172.16.0.100:22)..
  Creating /usr/local/mha if not exists..    ok.
  Checking output directory is accessible or not..
   ok.
  Binlog found at /var/lib/mysql, up to mysql-bin.000008
Mon Aug 5 03:39:22 2019 - [info] Binlog setting check done.
Mon Aug 5 03:39:22 2019 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Mon Aug 5 03:39:22 2019 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='mha_admin' --slave_host=172.16.0.101 --slave_ip=172.16.0.101 --slave_port=3306 --workdir=/usr/local/mha --target_version=5.7.27-log --manager_version=0.56 --relay_log_info=/var/lib/mysql/relay-log.info  --relay_dir=/var/lib/mysql/  --slave_pass=xxx
Mon Aug 5 03:39:22 2019 - [info]   Connecting to root@172.16.0.101(172.16.0.101:22)..
  Checking slave recovery environment settings..
    Opening /var/lib/mysql/relay-log.info ... ok.
    Relay log found at /var/lib/mysql, up to db2-relay-bin.000002
    Temporary relay log file is /var/lib/mysql/db2-relay-bin.000002
    Testing mysql connection and privileges..mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Mon Aug 5 03:39:22 2019 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='mha_admin' --slave_host=172.16.0.102 --slave_ip=172.16.0.102 --slave_port=3306 --workdir=/usr/local/mha --target_version=5.7.27-log --manager_version=0.56 --relay_log_info=/var/lib/mysql/relay-log.info  --relay_dir=/var/lib/mysql/  --slave_pass=xxx
Mon Aug 5 03:39:22 2019 - [info]   Connecting to root@172.16.0.102(172.16.0.102:22)..
  Checking slave recovery environment settings..
    Opening /var/lib/mysql/relay-log.info ... ok.
    Relay log found at /var/lib/mysql, up to db3-relay-bin.000002
    Temporary relay log file is /var/lib/mysql/db3-relay-bin.000002
    Testing mysql connection and privileges..mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Mon Aug 5 03:39:22 2019 - [info] Slaves settings check done.
Mon Aug 5 03:39:22 2019 - [info]
172.16.0.100(172.16.0.100:3306) (current master)
 +--172.16.0.101(172.16.0.101:3306)
 +--172.16.0.102(172.16.0.102:3306)
Mon Aug 5 03:39:22 2019 - [info] Checking replication health on 172.16.0.101..
Mon Aug 5 03:39:22 2019 - [info]  ok.
Mon Aug 5 03:39:22 2019 - [info] Checking replication health on 172.16.0.102..
Mon Aug 5 03:39:22 2019 - [info]  ok.
Mon Aug 5 03:39:22 2019 - [info] Checking master_ip_failover_script status:
Mon Aug 5 03:39:22 2019 - [info]   /etc/mha/scripts/master_ip_failover --command=status --ssh_user=root --orig_master_host=172.16.0.100 --orig_master_ip=172.16.0.100 --orig_master_port=3306

IN SCRIPT TEST====/sbin/ifconfig eth1:1 down==/sbin/ifconfig eth1:1 172.16.0.222/24===

Checking the Status of the script.. OK
ssh: Could not resolve hostname cluster1: Name or service not known
Mon Aug 5 03:39:22 2019 - [info]  OK.
Mon Aug 5 03:39:22 2019 - [warning] shutdown_script is not defined.
Mon Aug 5 03:39:22 2019 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
```
`"MySQL Replication Health is OK." shows the configuration is correct.`
### Start the MHA manager
```bash
nohup masterha_manager --conf=/etc/mha/mha.conf &
```
### Verify that it started
```bash
masterha_check_status --conf=/etc/mha/mha.conf
```
```
mha (pid:27057) is running(0:PING_OK), master:172.16.0.100
```
### Stop the MHA manager (`do not run this now`)
```bash
masterha_stop --conf=/etc/mha/mha.conf
```

### Tests

#### Failover test

**Stop MySQL on node1**
```bash
systemctl stop mysqld
```
**Check the VIP on node2**
```bash
ip a
```
```
...
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 5a:00:02:32:0f:46 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.101/16 brd 172.16.255.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 172.16.0.222/24 brd 172.16.0.255 scope global eth1:1
       valid_lft forever preferred_lft forever
    inet6 fe80::5800:2ff:fe32:f46/64 scope link
       valid_lft forever preferred_lft forever
```
`The output shows the VIP has moved to node2.`
**Check whether node2 is still a slave**
```sql
mysql> show slave status\G   -- no rows: node2 is no longer a slave
Empty set (0.00 sec)
```
**Check the slave status on node3**
```sql
mysql> show slave status\G
```
```
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.16.0.101
                  Master_User: replica
                  Master_Port: 3306
                Connect_Retry: 60
...
```
`Failover succeeded: the master is now node2, and node3 replicates from the new master.`

#### Recovering the old master

`Once node1's database is restored, first make it a slave of node2, then start the MHA manager again.`

Start MySQL `node1`
```bash
systemctl start mysqld
```
Configure node1 as a slave of node2.

_Find the replication coordinates on the MHA node_ `node3`
```bash
grep "CHANGE MASTER TO" /var/log/mha/manager.log |tail -1
```
```bash
Mon Aug 5 03:32:28 2019 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='172.16.0.101', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000008', MASTER_LOG_POS=154, MASTER_USER='replica', MASTER_PASSWORD='xxx';
```
_Point node1 at node2_ `node1`
```sql
mysql> CHANGE MASTER TO MASTER_HOST='172.16.0.101', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000008', MASTER_LOG_POS=154, MASTER_USER='replica', MASTER_PASSWORD='123456';
mysql> start slave;   -- start the slave
mysql> show slave status\G
```
```
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.16.0.101
                  Master_User: replica
                  Master_Port: 3306
                Connect_Retry: 60
...
```
Switch the master back to node1 `node3`
```bash
masterha_master_switch --conf=/etc/mha/mha.conf --master_state=alive --new_master_host=172.16.0.100 --orig_master_is_new_slave
```
Check the MHA configuration `node3`
```bash
masterha_check_repl --conf=/etc/mha/mha.conf
```
```
...
MySQL Replication Health is OK.
```
Start the MHA manager `node3`
```bash
nohup masterha_manager --conf=/etc/mha/mha.conf &
```
References:
http://www.fblinux.com/?p=1018
https://joelhy.github.io/2015/02/06/mysql-mha/
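One caveat when restarting the manager after a failover: MHA writes a `*.failover.complete` flag file into manager_workdir and refuses a second failover within 8 hours while that flag exists. The exact file name below is an assumption derived from the config name; check what actually appears under /usr/local/mha:

```bash
# Remove the completion flag left by the last failover (file name assumed from mha.conf),
# then restart the manager
rm -f /usr/local/mha/mha.failover.complete
nohup masterha_manager --conf=/etc/mha/mha.conf &
```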


兜兜    2018-08-04 18:14:23    2019-07-23 09:50:11   

distributed file server fastdfs
#### **Introduction**

**This lab builds FastDFS on two CentOS 7 machines. Each machine serves a different group; an Aliyun SLB does the load balancing and nginx acts as the reverse proxy. The architecture:**

![](https://files.ynotes.cn/18-8-4/89007362.jpg)

**Configure the hosts file on both ECS machines so they can resolve each other's hostnames:**
```bash
$ cat /etc/hosts
```
```
172.18.176.147 n2 n2.mytest.loc
172.18.176.146 n1 n1.mytest.loc
```

### **[ 172.18.176.146 ]**

#### 1. Install the dependencies
```bash
$ yum install gcc gcc-c++ libevent libstdc++-devel pcre-devel zlib-devel make unzip
```
#### 2. Install and configure libfastcommon
```bash
$ wget https://github.com/happyfish100/libfastcommon/archive/V1.0.7.zip
$ unzip V1.0.7.zip
$ cd libfastcommon-1.0.7
$ ./make.sh && ./make.sh install
```
libfastcommon.so is installed to /usr/lib64/libfastcommon.so, but the FastDFS binaries expect the lib directory /usr/local/lib, so create symlinks:
```bash
$ ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
$ ln -s /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so
$ ln -s /usr/lib64/libfdfsclient.so /usr/local/lib/libfdfsclient.so
$ ln -s /usr/lib64/libfdfsclient.so /usr/lib/libfdfsclient.so
```
#### 3. Install and configure FastDFS

Download FastDFS:
```bash
$ wget https://github.com/happyfish100/fastdfs/archive/V5.05.zip
$ unzip V5.05.zip
$ cd fastdfs-5.05
$ ./make.sh && ./make.sh install
```
#### 4. Configure the tracker
```bash
$ cd /etc/fdfs
$ cp tracker.conf.sample tracker.conf
$ cat tracker.conf
```
```bash
disabled=false
bind_addr=
port=22122                            # tracker port
connect_timeout=30
network_timeout=60
base_path=/data/fastdfs/tracker       # tracker log and data directory
max_connections=256
accept_threads=1
work_threads=4
store_lookup=2
store_server=0
store_path=0
download_server=0
reserved_storage_space = 10%
log_level=info
run_by_group=
run_by_user=
allow_hosts=*
sync_log_buff_interval = 10
check_active_interval = 120
thread_stack_size = 64KB
storage_ip_changed_auto_adjust = true
storage_sync_file_max_delay = 86400
storage_sync_file_max_time = 300
use_trunk_file = false
slot_min_size = 256
slot_max_size = 16MB
trunk_file_size = 64MB
trunk_create_file_advance = false
trunk_create_file_time_base = 02:00
trunk_create_file_interval = 86400
trunk_create_file_space_threshold = 20G
trunk_init_check_occupying = false
trunk_init_reload_from_binlog = false
trunk_compress_binlog_min_interval = 0
use_storage_id = false
storage_ids_filename = storage_ids.conf
id_type_in_filename = ip
store_slave_file_use_link = false
rotate_error_log = false
error_log_rotate_time=00:00
rotate_error_log_size = 0
log_file_keep_days = 0
use_connection_pool = false
connection_pool_max_idle_time = 3600
http.server_port=8080
http.check_alive_interval=30
http.check_alive_type=tcp
http.check_alive_uri=/status.html
```
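One step the text assumes: tracker.conf above, and the storage and client configs below, point base_path and store_path0 at directories under /data/fastdfs that do not exist on a fresh machine. Create them before starting the daemons; a minimal prep step:

```bash
# Directories referenced by tracker.conf, storage.conf and client.conf in this article
$ mkdir -p /data/fastdfs/tracker /data/fastdfs/storage /data/fastdfs/client
```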
#### 5. Configure the storage
```bash
$ cd /etc/fdfs
$ cp storage.conf.sample storage.conf
$ cat storage.conf
```
```bash
disabled=false
group_name=group1                     # this node serves group1
bind_addr=
client_bind=true
port=23000                            # storage port
connect_timeout=30
network_timeout=60
heart_beat_interval=30
stat_report_interval=60
base_path=/data/fastdfs/storage       # storage log path
max_connections=256
buff_size = 256KB
accept_threads=1
work_threads=4
disk_rw_separated = true
disk_reader_threads = 1
disk_writer_threads = 1
sync_wait_msec=50
sync_interval=0
sync_start_time=00:00
sync_end_time=23:59
write_mark_file_freq=500
store_path_count=1
store_path0=/data/fastdfs/storage     # storage file path
#store_path_count=2                   # set to the number of store paths you have
#store_path1=/data/fastdfs/storage    # additional storage file path
subdir_count_per_path=256
tracker_server=n1.mytest.loc:22122    # tracker
tracker_server=n2.mytest.loc:22122    # tracker
log_level=info
run_by_group=
run_by_user=
allow_hosts=*
file_distribute_path_mode=0
file_distribute_rotate_count=100
fsync_after_written_bytes=0
sync_log_buff_interval=10
sync_binlog_buff_interval=10
sync_stat_file_interval=300
thread_stack_size=512KB
upload_priority=10
if_alias_prefix=
check_file_duplicate=0
file_signature_method=hash
key_namespace=FastDFS
keep_alive=0
use_access_log = false
rotate_access_log = false
access_log_rotate_time=00:00
rotate_error_log = false
error_log_rotate_time=00:00
rotate_access_log_size = 0
rotate_error_log_size = 0
log_file_keep_days = 0
file_sync_skip_invalid_record=false
use_connection_pool = false
connection_pool_max_idle_time = 3600
http.domain_name=
http.server_port=80
```
#### 6. Start the tracker
```bash
$ /usr/local/bin/fdfs_trackerd /etc/fdfs/tracker.conf
```
#### 7. Start the storage
```bash
$ /usr/local/bin/fdfs_storaged /etc/fdfs/storage.conf
```
#### 8. Install nginx and the fastdfs-nginx-module (build the module against the nginx sources, then replace the yum-installed nginx binary; you can also use the source-built nginx directly)

8.1 Install nginx
```bash
$ yum install -y nginx
$ nginx -v
```
8.2 Check the build parameters of the installed nginx
```bash
$ nginx -V
```
```
nginx version: nginx/1.12.2 (CentOS)
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC)
built with OpenSSL 1.0.2k-fips  26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --user=nginx --group=nginx --build=CentOS --with-select_module --with-poll_module --with-threads --with-file-aio --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_auth_request_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-stream=dynamic --with-stream_ssl_module --with-stream_realip_module --with-stream_geoip_module=dynamic
```
8.3 Download the fastdfs-nginx-module
```bash
$ wget https://github.com/happyfish100/fastdfs-nginx-module/archive/master.zip
$ unzip master.zip
```
8.4 Download the nginx-1.12.2 sources
```bash
$ wget http://nginx.org/download/nginx-1.12.2.tar.gz
$ tar xvf nginx-1.12.2.tar.gz
$ cd nginx-1.12.2
```
8.5 Build nginx from source
```bash
$ ./configure --prefix=/etc/nginx \
  --sbin-path=/usr/sbin/nginx \
  --modules-path=/usr/lib64/nginx/modules \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --pid-path=/var/run/nginx.pid \
  --lock-path=/var/run/nginx.lock \
  --user=nginx \
  --group=nginx \
  --build=CentOS \
  --with-select_module \
  --with-poll_module \
  --with-threads \
  --with-file-aio \
  --with-http_ssl_module \
  --with-http_v2_module \
  --with-http_realip_module \
  --with-http_addition_module \
  --with-http_xslt_module=dynamic \
  --with-http_image_filter_module=dynamic \
  --with-http_geoip_module=dynamic \
  --with-http_sub_module \
  --with-http_dav_module \
  --with-http_flv_module \
  --with-http_mp4_module \
  --with-http_gunzip_module \
  --with-http_gzip_static_module \
  --with-http_auth_request_module \
  --with-http_random_index_module \
  --with-http_secure_link_module \
  --with-http_degradation_module \
  --with-http_slice_module \
  --with-http_stub_status_module \
  --http-log-path=/var/log/nginx/access.log \
  --http-client-body-temp-path=/var/cache/nginx/client_temp \
  --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
  --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
  --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
  --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
  --with-stream=dynamic \
  --with-stream_ssl_module \
  --with-stream_realip_module \
  --with-stream_geoip_module=dynamic \
  --add-module=../fastdfs-nginx-module-master/src   # add the fastdfs-nginx-module
$ make   # build nginx
```
8.6 Replace the yum-installed nginx
```bash
$ cp /usr/sbin/nginx /usr/sbin/nginx_old   # back up the original nginx
$ cp objs/nginx /usr/sbin/nginx            # replace the yum-installed binary
```
#### 9. Configure the fastdfs-nginx-module and nginx
```bash
$ cd /etc/fdfs/
$ cp /root/fastdfs-nginx-module/src/mod_fastdfs.conf .
$ cat mod_fastdfs.conf
```
```bash
connect_timeout=2
network_timeout=30
base_path=/tmp
load_fdfs_parameters_from_tracker=true
storage_sync_file_max_delay = 86400
use_storage_id = false
storage_ids_filename = storage_ids.conf
tracker_server=n1.mytest.loc:22122
tracker_server=n2.mytest.loc:22122
storage_server_port=23000
group_name=group1
url_have_group_name = true
store_path_count=1
store_path0=/data/fastdfs/storage
log_level=info
log_filename=
response_mode=proxy
if_alias_prefix=
flv_support = true
flv_extension = flv
group_count = 1

[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/data/fastdfs/storage
```
9.1 Copy http.conf and mime.types (needed by nginx's fastdfs-nginx-module)
```bash
$ cp /root/fastdfs/conf/http.conf /root/fastdfs/conf/mime.types /etc/fdfs/
```
9.2 Configure nginx
```bash
$ cat fastdfs.mytest.cn.conf
```
```
upstream fdfs_group1 {
    server n1.mytest.loc:18080 weight=1 max_fails=2 fail_timeout=30s;
}
upstream fdfs_group2 {
    server n2.mytest.loc:18080 weight=1 max_fails=2 fail_timeout=30s;
}
server {
    listen       80;
    server_name  fastdfs.mytest.cn;
    access_log   /var/log/nginx/fastdfs.mytest.cn.access.log main;

    location ~ /group1/M00 {
        add_header Strict-Transport-Security max-age=86400;
        proxy_next_upstream http_502 http_504 error timeout invalid_header;
        proxy_pass http://fdfs_group1;
    }
    location ~ /group2/M00 {
        add_header Strict-Transport-Security max-age=86400;
        proxy_next_upstream http_502 http_504 error timeout invalid_header;
        proxy_pass http://fdfs_group2;
    }
    error_page 404 /404.html;
    location = /404.html {
        root /usr/share/nginx/html;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html/;
    }
}
# the ngx_fastdfs_module on this machine only handles read/write requests for group1
server {
    listen       18080;
    server_name  172.18.176.146;
    location ~ /group1/M00 {
        #add_header Strict-Transport-Security max-age=86400;
        alias /data/fastdfs/storage/data;
        ngx_fastdfs_module;
    }
}
```
#### 10. Start nginx
```bash
$ systemctl start nginx
```
#### 11. Test the FastDFS file server

11.1 Configure the fdfs client
```bash
$ cat /etc/fdfs/client.conf
```
```bash
connect_timeout=30
network_timeout=60
base_path=/data/fastdfs/client
tracker_server=n1.mytest.loc:22122
tracker_server=n2.mytest.loc:22122
log_level=info
use_connection_pool = false
connection_pool_max_idle_time = 3600
load_fdfs_parameters_from_tracker=false
use_storage_id = false
storage_ids_filename = storage_ids.conf
http.tracker_server_port=80
```
11.2 Create a test file test.html
```bash
$ cat test.html
hello,fastdfs!
```
11.3 Upload the file
```bash
$ fdfs_upload_file /etc/fdfs/client.conf test.html
group1/M00/00/00/rBKwk1tmpJaAbf3CAAAADxawCsc58.html
```
11.4 Download the file
```bash
$ fdfs_download_file /etc/fdfs/client.conf group1/M00/00/00/rBKwk1tmpJaAbf3CAAAADxawCsc58.html test2.html
```
11.5 Monitor
```bash
$ fdfs_monitor /etc/fdfs/client.conf
```
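As an extra check you can pull the file just uploaded through the front nginx on port 80, which proxies /group1/M00 to the local ngx_fastdfs_module vhost. This assumes fastdfs.mytest.cn resolves to this machine; otherwise pass a Host header, as here:

```bash
# Expect 200 OK and the "hello,fastdfs!" body
$ curl -H 'Host: fastdfs.mytest.cn' http://127.0.0.1/group1/M00/00/00/rBKwk1tmpJaAbf3CAAAADxawCsc58.html
```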
### **[ 172.18.176.147 ]**

#### **Installing libfastcommon, FastDFS, nginx and the fastdfs-nginx-module works exactly as on 172.18.176.146; only storage.conf and the nginx configuration differ**

#### 12. storage configuration
```bash
$ cat /etc/fdfs/storage.conf
```
```bash
disabled=false
group_name=group2                     # this node serves group2
bind_addr=
client_bind=true
port=23000                            # storage port
connect_timeout=30
network_timeout=60
heart_beat_interval=30
stat_report_interval=60
base_path=/data/fastdfs/storage       # storage log path
max_connections=256
buff_size = 256KB
accept_threads=1
work_threads=4
disk_rw_separated = true
disk_reader_threads = 1
disk_writer_threads = 1
sync_wait_msec=50
sync_interval=0
sync_start_time=00:00
sync_end_time=23:59
write_mark_file_freq=500
store_path_count=1
store_path0=/data/fastdfs/storage     # storage file path
#store_path_count=2                   # set to the number of store paths you have
#store_path1=/data/fastdfs/storage    # additional storage file path
subdir_count_per_path=256
tracker_server=n1.mytest.loc:22122    # tracker
tracker_server=n2.mytest.loc:22122    # tracker
log_level=info
run_by_group=
run_by_user=
allow_hosts=*
file_distribute_path_mode=0
file_distribute_rotate_count=100
fsync_after_written_bytes=0
sync_log_buff_interval=10
sync_binlog_buff_interval=10
sync_stat_file_interval=300
thread_stack_size=512KB
upload_priority=10
if_alias_prefix=
check_file_duplicate=0
file_signature_method=hash
key_namespace=FastDFS
keep_alive=0
use_access_log = false
rotate_access_log = false
access_log_rotate_time=00:00
rotate_error_log = false
error_log_rotate_time=00:00
rotate_access_log_size = 0
rotate_error_log_size = 0
log_file_keep_days = 0
file_sync_skip_invalid_record=false
use_connection_pool = false
connection_pool_max_idle_time = 3600
http.domain_name=
http.server_port=80
```
#### 13. nginx configuration
```bash
$ cat /etc/nginx/conf.d/fastdfs.mytest.cn.conf
```
```
upstream fdfs_group1 {
    server n1.mytest.loc:18080 weight=1 max_fails=2 fail_timeout=30s;
}
upstream fdfs_group2 {
    server n2.mytest.loc:18080 weight=1 max_fails=2 fail_timeout=30s;
}
server {
    listen       80;
    server_name  fastdfs.mytest.cn;
    #charset koi8-r;
    access_log   /var/log/nginx/fastdfs.mytest.cn.access.log main;

    location ~ /group1/M00 {
        add_header Strict-Transport-Security max-age=86400;
        proxy_next_upstream http_502 http_504 error timeout invalid_header;
        proxy_pass http://fdfs_group1;
    }
    location ~ /group2/M00 {
        add_header Strict-Transport-Security max-age=86400;
        proxy_next_upstream http_502 http_504 error timeout invalid_header;
        proxy_pass http://fdfs_group2;
    }
    error_page 404 /404.html;
    location = /404.html {
        root /usr/share/nginx/html;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html/;
    }
}
# the ngx_fastdfs_module on this machine only handles read/write requests for group2
server {
    listen       18080;
    server_name  172.18.176.147;
    location ~ /group2/M00 {
        alias /data/fastdfs/storage/data;
        ngx_fastdfs_module;
    }
}
```
#### 14. Start the tracker
```bash
$ /usr/local/bin/fdfs_trackerd /etc/fdfs/tracker.conf
```
#### 15. Start the storage
```bash
$ /usr/local/bin/fdfs_storaged /etc/fdfs/storage.conf
```
#### 16. Start nginx
```bash
$ systemctl start nginx
```
#### 17. Test the FastDFS file server

17.1 Upload a file
```bash
$ fdfs_upload_file /etc/fdfs/client.conf test.jpg
group2/M00/00/00/rBKwk1tmrBCAXY8gAAFMEccTGrw633.jpg
```
17.2 Download the file
```bash
$ fdfs_download_file /etc/fdfs/client.conf group2/M00/00/00/rBKwk1tmrBCAXY8gAAFMEccTGrw633.jpg test2.jpg
```
17.3 Monitor
```bash
$ fdfs_monitor /etc/fdfs/client.conf
```
#### 18. Configure the Aliyun SLB
![](https://files.ynotes.cn/18-8-5/9281487.jpg)
#### 19. Access from a browser
![](https://files.ynotes.cn/18-8-5/32712486.jpg)

