兜兜    2018-08-01 17:07:51    2019-11-14 14:32:50   

database mysql
### 环境准备 系统: `CentOS7` 数据库: `MySQL5.7` Master1节点: `172.16.0.100(node1)` Master2节点: `172.16.0.101(node2)` VIP: `172.16.0.188` 同步数据库名:`replicatest` &emsp; ### 安装MySQL `Master1/Master2节点` ```bash yum localinstall https://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm yum install mysql-community-server ``` 启动数据库 ```bash systemctl enable mysqld systemctl start mysqld ``` 获取数据库临时密码 ```bash grep 'temporary password' /var/log/mysqld.log ``` 对数据库进行加固 ```bash mysql_secure_installation ``` &emsp; ### 配置Master1 修改数据库配置 ```bash vim /etc/my.cnf ``` ```ini bind-address = 172.16.0.100 server-id = 1 log_bin = mysql-bin binlog-do-db = replicatest replicate-do-db = replicatest auto-increment-increment = 2 auto-increment-offset = 1 ``` 重启数据库 ```bash systemctl restart mysqld ``` 创建复制账号 ```sql mysql> CREATE USER 'replica'@'172.16.0.101' IDENTIFIED BY 'strong_password'; mysql> GRANT REPLICATION SLAVE ON *.* TO 'replica'@'172.16.0.101'; mysql> FLUSH PRIVILEGES; ``` 查看二进制日志文件和位置 ```sql mysql> FLUSH TABLES WITH READ LOCK #锁表,备份数据,恢复到master2,再让master2从当前位置去同步数据,保证了数据的一致性 mysql> SHOW MASTER STATUS\G ``` ``` *************************** 1. row *************************** File: mysql-bin.000001 Position: 623 Binlog_Do_DB: Binlog_Ignore_DB: Executed_Gtid_Set: 1 row in set (0.00 sec) ``` `注意:文件:mysql-bin.000001,位置:623` 备份replicatest数据 ```bash mysqldump -h127.0.0.1 -uroot -pxxxx --databases replicatest --events --triggers --routines >replicatest.sql ``` ### 拷贝数据到master2 ```bash scp replicatest.sql root@node2:/root ``` &emsp; ### 配置Master2 修改数据库配置 ```bash vim /etc/my.cnf ``` ```ini bind-address = 172.16.0.101 server-id = 2 log_bin = mysql-bin binlog-do-db = replicatest replicate-do-db = replicatest #binlog_ignore_db = mysql #binlog_ignore_db = infomation_schema #binlog_ignore_db = performance_schema auto-increment-increment = 2 auto-increment-offset = 2 ``` 重启数据库 ```bash systemctl restart mysqld ``` 恢复数据库 ```sql mysql>source replicatest.sql ``` 停止slave线程 ```sql mysql> STOP SLAVE; ``` 配置master2复制master1 ```sql mysql> CHANGE MASTER TO mysql> MASTER_HOST='172.16.0.100', mysql> MASTER_USER='replica', mysql> MASTER_PASSWORD='strong_password', mysql> MASTER_LOG_FILE='mysql-bin.000001', mysql> MASTER_LOG_POS=623; ``` 启动slave线程 ```sql mysql> START SLAVE; ``` 查看slave状态 ```sql mysql > show slave status\G ``` ``` *************************** 1. 
row *************************** Slave_IO_State: Waiting for master to send event Master_Host: 172.16.0.100 Master_User: replica Master_Port: 3306 Connect_Retry: 60 Master_Log_File: mysql-bin.000001 Read_Master_Log_Pos: 623 Relay_Log_File: node2-relay-bin.000001 Relay_Log_Pos: 323 Relay_Master_Log_File: mysql-bin.000001 Slave_IO_Running: Yes Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 623 Relay_Log_Space: 544 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: 0 Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 0 Last_IO_Error: Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 1 Master_UUID: f8cc4c10-b429-11e9-9ff9-5600023106ec Master_Info_File: /var/lib/mysql/master.info SQL_Delay: 0 SQL_Remaining_Delay: NULL Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates Master_Retry_Count: 86400 Master_Bind: Last_IO_Error_Timestamp: Last_SQL_Error_Timestamp: Master_SSL_Crl: Master_SSL_Crlpath: Retrieved_Gtid_Set: Executed_Gtid_Set: Auto_Position: 0 Replicate_Rewrite_DB: Channel_Name: Master_TLS_Version: ``` 创建复制账号 ```bash mysql> CREATE USER 'replica'@'172.16.0.100' IDENTIFIED BY 'strong_password'; mysql> GRANT REPLICATION SLAVE ON *.* TO 'replica'@'172.16.0.100'; mysql> FLUSH PRIVILEGES; ``` 查看当前的二进制日志 ```sql mysql> SHOW MASTER STATUS\G ``` ``` *************************** 1. row *************************** File: mysql-bin.000001 Position: 504 Binlog_Do_DB: Binlog_Ignore_DB: Executed_Gtid_Set: 1 row in set (0.00 sec) ``` `注意:文件:mysql-bin.000001,位置:504` &emsp; ### 配置Master1 解锁只读锁 ```sql mysql> UNLOCK TABLES ``` 停止slave线程 ```sql mysql> STOP SLAVE; ``` 配置同步 ```sql mysql> CHANGE MASTER TO mysql> MASTER_HOST='172.16.0.101', mysql> MASTER_USER='replica', mysql> MASTER_PASSWORD='strong_password', mysql> MASTER_LOG_FILE='mysql-bin.000001', mysql> MASTER_LOG_POS=504; ``` 启动slave线程 ```sql mysql> START SLAVE; ``` 查看同步状态 ```sql mysql> show slave status\G ``` ``` *************************** 1. 
row *************************** Slave_IO_State: Waiting for master to send event Master_Host: 172.16.0.101 Master_User: replica Master_Port: 3306 Connect_Retry: 60 Master_Log_File: mysql-bin.000001 Read_Master_Log_Pos: 504 Relay_Log_File: node1-relay-bin.000001 Relay_Log_Pos: 245 Relay_Master_Log_File: mysql-bin.000001 Slave_IO_Running: Yes Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 504 Relay_Log_Space: 213 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: 0 Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 0 Last_IO_Error: Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 2 Master_UUID: 1c505063-b42a-11e9-96c3-560002320f46 Master_Info_File: /var/lib/mysql/master.info SQL_Delay: 0 SQL_Remaining_Delay: NULL Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates Master_Retry_Count: 86400 Master_Bind: Last_IO_Error_Timestamp: Last_SQL_Error_Timestamp: Master_SSL_Crl: Master_SSL_Crlpath: Retrieved_Gtid_Set: Executed_Gtid_Set: Auto_Position: 0 Replicate_Rewrite_DB: Channel_Name: Master_TLS_Version: ``` &emsp; ### 安装Keepalived `Master1节点/Master2节点` ```bash yum install keepalived -y ``` ### 配置Keepalived `Master1` ```bash vim /etc/keepalived/keepalived.conf #除了参数priority两台Master设置优先级不一样,其他参数保存一致 ``` ```ini ! Configuration File for keepalived global_defs { notification_email { acassen@firewall.loc failover@firewall.loc sysadmin@firewall.loc } notification_email_from Alexandre.Cassen@firewall.loc smtp_server 192.168.200.1 smtp_connect_timeout 30 router_id LVS_DEVEL vrrp_skip_check_adv_addr vrrp_strict vrrp_garp_interval 0 vrrp_gna_interval 0 } vrrp_script vs_mysql_100 { script "/etc/keepalived/checkmysql.sh" interval 10 } vrrp_instance VI_100 { #集群的名字 state BACKUP #这里两台都配置BACKUP nopreempt #设置为不抢占模式 interface eth1 #VIP绑定的网卡eth1 virtual_router_id 51 #vrid的两台一样 priority 100 #优先级,两台设置不一样,master1设置为100,master2设置为90 advert_int 5 #主备之前同步建成的时间间隔是5秒 authentication { auth_type PASS #验证方式通过密码 auth_pass 9898 #验证密码 } track_script { vs_mysql_100 #执行监控的脚本 } virtual_ipaddress { 172.16.0.188 #虚拟VIP地址 } } ``` #### 创建数据库监控脚本 `Master1/Master2节点` ```bash cat /etc/keepalived/checkmysql.sh ``` ```sh #!/bin/bash mysqlstr=/usr/bin/mysql host=127.0.0.1 user=root password=xxxxxxx port=3306 mysql_status=1 $mysqlstr -h $host -u $user -p$password -P$port -e "show status;" >/dev/null 2>&1 if [ $? 
-eq 0 ];then echo "mysql_status=1" exit 0 else echo "systemctl stop keepalived" systemctl stop keepalived fi ``` Make the script executable ```bash chmod +x /etc/keepalived/checkmysql.sh ``` ### Start Keepalived `Master1/Master2 nodes` ```bash systemctl start keepalived ``` &emsp; ### Test master-master replication `Master1` Insert test data ```sql mysql> use replicatest; mysql> insert into test(name) values('master1 write'); ``` `Master2` Check that the row replicated ```sql mysql> use replicatest; mysql> select * from test; ``` ``` +----+---------------+ | id | name | +----+---------------+ | 19 | master1 write | +----+---------------+ ``` Insert test data ```sql mysql> insert into test(name) values('master2 write'); ``` `Master1` Check that the row replicated ```sql select * from test; ``` ``` +----+---------------+ | id | name | +----+---------------+ | 19 | master1 write | | 20 | master2 write | +----+---------------+ ``` `The output shows that master1 and master2 both replicate to each other successfully.` &emsp; ### Test Keepalived **Stop MySQL on Master1 and check whether the VIP moves to Master2** `Master1` ```bash systemctl stop mysqld ``` Check whether the VIP has failed over `Master2` ```bash ip a ``` ``` ... 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP group default qlen 1000 link/ether 5a:00:02:32:0f:46 brd ff:ff:ff:ff:ff:ff inet 172.16.0.101/16 brd 172.16.255.255 scope global eth1 valid_lft forever preferred_lft forever inet 172.16.0.188/32 scope global eth1 valid_lft forever preferred_lft forever ``` `The test confirms that the VIP fails over successfully.`
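As a quick health check on either master, the following is a minimal sketch (reusing the root credentials shown in the mysqldump step above; `checkrepl.sh` is a hypothetical helper, not part of the original setup) that confirms both replication threads are running before trusting the VIP:

```bash
#!/bin/bash
# checkrepl.sh - verify that both replication threads are alive on this master.
# Credentials are placeholders; adjust them to your environment.
MYSQL="mysql -h127.0.0.1 -uroot -pxxxx"

# SHOW SLAVE STATUS reports Slave_IO_Running and Slave_SQL_Running; both must be "Yes".
running=$($MYSQL -e "SHOW SLAVE STATUS\G" 2>/dev/null | grep -cE 'Slave_(IO|SQL)_Running: Yes')

if [ "$running" -eq 2 ]; then
    echo "replication OK on $(hostname)"
else
    echo "replication broken on $(hostname); inspect SHOW SLAVE STATUS output"
    exit 1
fi
```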


兜兜    2018-08-01 15:42:24    2019-11-14 14:33:10   

database mysql
### 环境准备 系统: `CentOS7` 数据库: `MySQL5.7` Master节点: `172.16.0.100(node1)` Slave节点: `172.16.0.101(node2)` &emsp; ### 安装MySQL `Master/Slave节点` ```bash yum localinstall https://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm yum install mysql-community-server ``` 启动数据库 ```bash systemctl enable mysqld systemctl start mysqld ``` 获取数据库临时密码 ```bash grep 'temporary password' /var/log/mysqld.log ``` 对数据库进行加固 ```bash mysql_secure_installation ``` &emsp; ### 配置Master 修改数据库配置 ```bash vim /etc/my.cnf ``` ```ini bind-address = 172.16.0.100 server-id = 1 log_bin = mysql-bin ``` 重启数据库 ```bash systemctl restart mysqld ``` 创建复制账号 ```sql mysql> CREATE USER 'replica'@'172.16.0.101' IDENTIFIED BY 'strong_password'; mysql> GRANT REPLICATION SLAVE ON *.* TO 'replica'@'172.16.0.101'; mysql> FLUSH PRIVILEGES; ``` 查看二进制日志文件和位置 ```sql mysql> SHOW MASTER STATUS\G ``` ``` *************************** 1. row *************************** File: mysql-bin.000001 Position: 623 Binlog_Do_DB: Binlog_Ignore_DB: Executed_Gtid_Set: 1 row in set (0.00 sec) ``` `注意:文件:mysql-bin.000001,位置:623` &emsp; ### 配置Slave 修改数据库配置 ```bash vim /etc/my.cnf ``` ```ini bind-address = 172.16.0.101 server-id = 2 log_bin = mysql-bin ``` 重启数据库 ```bash systemctl restart mysqld ``` 停止slave线程 ```sql mysql> STOP SLAVE; ``` 配置slave复制master ```sql mysql> CHANGE MASTER TO mysql> MASTER_HOST='172.16.0.100', mysql> MASTER_USER='replica', mysql> MASTER_PASSWORD='strong_password', mysql> MASTER_LOG_FILE='mysql-bin.000001', mysql> MASTER_LOG_POS=623; ``` 启动slave线程 ```sql mysql> START SLAVE; ``` 查看slave状态 ```sql mysql >show slave status\G ``` ``` *************************** 1. row *************************** Slave_IO_State: Waiting for master to send event Master_Host: 172.16.0.100 Master_User: replica Master_Port: 3306 Connect_Retry: 60 Master_Log_File: mysql-bin.000001 Read_Master_Log_Pos: 1576 Relay_Log_File: node2-relay-bin.000002 Relay_Log_Pos: 1273 Relay_Master_Log_File: mysql-bin.000001 Slave_IO_Running: Yes Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 1576 Relay_Log_Space: 1480 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: 0 Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 0 Last_IO_Error: Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 1 Master_UUID: f8cc4c10-b429-11e9-9ff9-5600023106ec Master_Info_File: /var/lib/mysql/master.info SQL_Delay: 0 SQL_Remaining_Delay: NULL Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates Master_Retry_Count: 86400 Master_Bind: Last_IO_Error_Timestamp: Last_SQL_Error_Timestamp: Master_SSL_Crl: Master_SSL_Crlpath: Retrieved_Gtid_Set: Executed_Gtid_Set: Auto_Position: 0 Replicate_Rewrite_DB: Channel_Name: Master_TLS_Version: ``` ### 开启半同步复制 安装半同步复制插件 `Master` ```sql mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so'; #安装Master半同步复制插件 mysql> set global rpl_semi_sync_master_enabled=on; #开启Master半同步复制 mysql> show variables like '%semi%'; #查看半同步复制参数 mysql> show plugins; #查看加载的插件 ``` ``` ``` `Slave` ```sql mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so'; #安装Slave半同步复制插件 mysql> set global rpl_semi_sync_slave_enabled=on; #开启Slave半同步复制 mysql> show variables like '%semi%'; #查看半同步复制参数 mysql> show plugins; 
#查看加载的插件 ``` 重启IO线程 `Slave` ```sql mysql> stop slave io_thread; mysql> start slave io_thread; ``` 查看半同步复制信息 `Master` ```sql mysql> show global status like '%semi%'; ``` ``` +--------------------------------------------+-------+ | Variable_name | Value | +--------------------------------------------+-------+ | Rpl_semi_sync_master_clients | 1 | | Rpl_semi_sync_master_net_avg_wait_time | 0 | | Rpl_semi_sync_master_net_wait_time | 0 | | Rpl_semi_sync_master_net_waits | 0 | | Rpl_semi_sync_master_no_times | 0 | | Rpl_semi_sync_master_no_tx | 0 | | Rpl_semi_sync_master_status | ON | | Rpl_semi_sync_master_timefunc_failures | 0 | | Rpl_semi_sync_master_tx_avg_wait_time | 0 | | Rpl_semi_sync_master_tx_wait_time | 0 | | Rpl_semi_sync_master_tx_waits | 0 | | Rpl_semi_sync_master_wait_pos_backtraverse | 0 | | Rpl_semi_sync_master_wait_sessions | 0 | | Rpl_semi_sync_master_yes_tx | 0 | +--------------------------------------------+-------+ ``` `Rpl_semi_sync_master_no_tx`表示没有成功接收slave提交的事务 `Rpl_semi_sync_master_yes_tx`表示成功接收slave事务回复的次数 配置开机自启动半同步复制 `Master` ```bash vim /etc/my.cnf ``` ```ini rpl_semi_sync_master_enabled = on ``` `Slave` ```bash vim /etc/my.cnf ``` ```ini rpl_semi_sync_slave_enabled = on ``` &emsp; ### 测试主从 `Master` ```sql mysql> CREATE DATABASE replicatest; ``` `Slave` ```sql mysql> show databases; ``` ``` +--------------------+ | Database | +--------------------+ | information_schema | | mysql | | performance_schema | | replicatest | | sys | +--------------------+ ``` `通过输出可以看到slave已经同步成功了!`
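One setting worth making explicit with this setup: if the slave does not acknowledge a transaction within `rpl_semi_sync_master_timeout` milliseconds (10000 by default), the master silently falls back to asynchronous replication. A hedged example of adjusting and checking it on the Master (the 5000 ms value is only an illustration):

```bash
# Shorten the fallback window at runtime (illustrative value).
mysql -uroot -p -e "SET GLOBAL rpl_semi_sync_master_timeout = 5000;"

# To persist it, add the same setting under [mysqld] in /etc/my.cnf:
#   rpl_semi_sync_master_timeout = 5000

# Confirm the value and that semi-sync is still reported as ON.
mysql -uroot -p -e "SHOW VARIABLES LIKE 'rpl_semi_sync_master_timeout'; SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_status';"
```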


兜兜    2018-07-29 11:42:38    2019-11-14 14:32:03   

mysql DRBD pcs Pacemaker corosync
### 准备工作 所有节点: - 系统: `CentOS7.6` - 数据库: `MariaDB 5.5.60` - VIP: `172.16.0.200` node1节点: - IP/主机:`172.16.0.100` node2节点: - IP/主机:`172.16.0.101` 网络配置如下图 ![](https://files.ynotes.cn/drbd_pcs2.png) &emsp; ### 安装Pacemaker和Corosync #### 安装Pacemaker,Corosync,pcs `node1和node2执行` ```bash yum -y install corosync pacemaker pcs ``` #### 设置集群用户密码 `node1和node2执行` ```bash echo "passwd" | passwd hacluster --stdin ``` 启动和开启服务 `node1和node2执行` ```bash systemctl start pcsd systemctl enable pcsd pcs cluster enable --all #配置集群服务开机启动 ``` #### 配置Corosync `node1执行` 认证用户hacluster,将授权tokens存储在文件/var/lib/pcsd/tokens中. ```bash pcs cluster auth node1 node2 -u hacluster -p passwd ``` ``` node1: Authorized node2: Authorized ``` #### 生成和同步Corosync配置 `node1执行` ```bash pcs cluster setup --name mysql_cluster node1 node2 ``` #### 在所有节点启动集群 `node1执行` ```bash pcs cluster start --all ``` ### 安装DRBD `node1和node2执行` ```bash rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm yum install -y kmod-drbd84 drbd84-utils ``` #### 配置DRBD `node1和node2执行` ```bash vim /etc/drbd.d/mysql01.res ``` ```ini resource mysql01 { protocol C; meta-disk internal; device /dev/drbd0; disk /dev/vdb; #/dev/vdb为空闲的块设备,可以LVM创建一个逻辑卷 handlers { split-brain "/usr/lib/drbd/notify-split-brain.sh root"; } net { allow-two-primaries no; after-sb-0pri discard-zero-changes; after-sb-1pri discard-secondary; after-sb-2pri disconnect; rr-conflict disconnect; } disk { on-io-error detach; } syncer { verify-alg sha1; } on node1 { address 172.16.0.100:7789; } on node2 { address 172.16.0.101:7789; } } ``` #### 初始化DRBD`(创建DRBD metadata)` `node1和node2执行` ```bash drbdadm create-md mysql01 ``` #### 启动mysql01 `node1和node2执行` ```bash drbdadm up mysql01 ``` #### 指定主节点 `node1执行` ```bash drbdadm primary --force mysql01 ``` #### 查看drbd状态 `node1执行` ```bash cat /proc/drbd ``` ``` version: 8.4.11-1 (api:1/proto:86-101) GIT-hash: 66145a308421e9c124ec391a7848ac20203bb03c build by mockbuild@, 2018-11-03 01:26:55 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----- ns:136 nr:288 dw:428 dr:13125 al:5 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0 ``` #### 快速同步 ```bash drbdadm new-current-uuid --clear-bitmap mysql01/0 ``` #### 格式化drbd设备 等待上面主从的块设备同步(UpToDate/UpToDate)之后执行 格式化drbd成ext4格式 `node1执行` ```bash mkfs.ext4 -m 0 -L drbd /dev/drbd0 tune2fs -c 30 -i 180d /dev/drbd0 ``` #### 挂载drbd设备 `node1执行` ```bash mount /dev/drbd0 /mnt ``` &emsp; ### 安装MariaDB `node1和node2执行` ```bash yum install -y mariadb-server mariadb systemctl disable mariadb.service #设置开启不启动,通过pacemaker去管理 ``` `node1执行` ```bash systemctl start mariadb ``` #### 安装数据库 `node1执行` ```bash mysql_install_db --datadir=/mnt --user=mysql ``` #### 执行安全安装 `node1执行` ```bash mysql_secure_installation ``` #### 卸载drbd和停止数据库 `node1执行` ```bash umount /mnt #卸载目录 systemctl stop mariadb #停止数据库 ``` #### 配置mysql `node1和node2执行` ```bash vim /etc/my.cnf ``` ```ini [mysqld] symbolic-links=0 bind_address = 0.0.0.0 datadir = /var/lib/mysql pid_file = /var/run/mariadb/mysqld.pid socket = /var/run/mariadb/mysqld.sock [mysqld_safe] bind_address = 0.0.0.0 datadir = /var/lib/mysql pid_file = /var/run/mariadb/mysqld.pid socket = /var/run/mariadb/mysqld.sock !includedir /etc/my.cnf.d ``` &emsp; ### 配置Pacemaker集群 #### 配置逻辑和顺序如下 ```bash Start: mysql_fs01 -> mysql_service01 -> mysql_VIP01, Stop: mysql_VIP01 -> mysql_service01 -> mysql_fs01. 
``` mysql_fs01是文件系统资源,mysql_service01是服务资源,mysql_VIP01是浮动虚拟IP `172.16.0.200` pcs具有的一个方便功能是能够将多个更改排入文件并以原子方式提交这些更改。为此,我们首先使用CIB中的当前原始XML配置填充文件 `node1执行` ```bash pcs cluster cib clust_cfg ``` 关闭STONITH(`注意:依赖具体的环境视情况操作`) `node1执行` ```bash pcs -f clust_cfg property set stonith-enabled=false ``` 设置quorum策略为ignore `node1执行` ```bash pcs -f clust_cfg property set no-quorum-policy=ignore ``` 防止资源在恢复后移动,因为它通常会增加停机时间 `node1执行` ```bash pcs -f clust_cfg resource defaults resource-stickiness=200 ``` 为了达到这个效果,Pacemaker 有一个叫做“资源粘性值”的概念,它能够控制一个服务(资源)有多想呆在它正在运行的节点上。 Pacemaker为了达到最优分布各个资源的目的,默认设置这个值为0。我们可以为每个资源定义不同的粘性值,但一般来说,更改默认粘性值就够了。资源粘性表示资源是否倾向于留在当前节点,如果为正整数,表示倾向,负数则会离开,-inf表示负无穷,inf表示正无穷。 &emsp; 为drbd设备创建名为mysql_data01的集群资源和一个额外的克隆资源MySQLClone01,允许资源同时在两个集群节点上运行 `node1执行` ```bash pcs -f clust_cfg resource create mysql_data01 ocf:linbit:drbd \ drbd_resource=mysql01 \ op monitor interval=30s ``` ```bash pcs -f clust_cfg resource master MySQLClone01 mysql_data01 \ master-max=1 master-node-max=1 \ clone-max=2 clone-node-max=1 \ notify=true ``` master-max: 可以将多少资源副本提升为主状态 master-node-max: 可以在单个节点上将多少个资源副本提升为主状态 clone-max: 要启动多少个资源副本。默认为群集中的节点数 clone-node-max: 可以在单个节点上启动多少个资源副本 notify: 停止或启动克隆副本时,请事先告知所有其他副本以及操作何时成功 &emsp; 为文件系统创建名为mysql_fs01的集群资源,告诉群集克隆资源MySQLClone01必须在与文件系统资源相同的节点上运行,并且必须在文件系统资源之前启动克隆资源。 `node1执行` ```bash pcs -f clust_cfg resource create mysql_fs01 Filesystem \ device="/dev/drbd0" \ directory="/var/lib/mysql" \ fstype="ext4" ``` ```bash pcs -f clust_cfg constraint colocation add mysql_fs01 with MySQLClone01 \ INFINITY with-rsc-role=Master ``` ```bash pcs -f clust_cfg constraint order promote MySQLClone01 then start mysql_fs01 ``` 为MariaDB服务创建名为mysql_service01的集群资源。告诉群集MariaDB服务必须在与mysql_fs01文件系统资源相同的节点上运行,并且必须首先启动文件系统资源。 `node1执行` ```bash pcs -f clust_cfg resource create mysql_service01 ocf:heartbeat:mysql \ binary="/usr/bin/mysqld_safe" \ config="/etc/my.cnf" \ datadir="/var/lib/mysql" \ pid="/var/lib/mysql/mysql.pid" \ socket="/var/lib/mysql/mysql.sock" \ additional_parameters="--bind-address=0.0.0.0" \ op start timeout=60s \ op stop timeout=60s \ op monitor interval=20s timeout=30s ``` ```bash pcs -f clust_cfg constraint colocation add mysql_service01 with mysql_fs01 INFINITY ``` ```bash pcs -f clust_cfg constraint order mysql_fs01 then mysql_service01 ``` 为虚拟IP 172.16.0.200创建名为mysql_VIP01的集群资源 `node1执行` ```bash pcs -f clust_cfg resource create mysql_VIP01 ocf:heartbeat:IPaddr2 \ ip=172.16.0.200 cidr_netmask=32 \ op monitor interval=30s ``` 当然,虚拟IP mysql_VIP01资源必须与MariaDB资源在同一节点上运行,并且必须在最后一个时启动。这是为了确保在连接到虚拟IP之前已经启动了所有其他资源。 `node1执行` ```bash pcs -f clust_cfg constraint colocation add mysql_VIP01 with mysql_service01 INFINITY ``` ```bash pcs -f clust_cfg constraint order mysql_service01 then mysql_VIP01 ``` 检查配置 `node1执行` ```bash pcs -f clust_cfg constraint ``` ``` Location Constraints: Ordering Constraints: promote MySQLClone01 then start mysql_fs01 (kind:Mandatory) start mysql_fs01 then start mysql_service01 (kind:Mandatory) start mysql_service01 then start mysql_VIP01 (kind:Mandatory) Colocation Constraints: mysql_fs01 with MySQLClone01 (score:INFINITY) (with-rsc-role:Master) mysql_service01 with mysql_fs01 (score:INFINITY) mysql_VIP01 with mysql_service01 (score:INFINITY) ``` ```bash pcs -f clust_cfg resource show ``` ``` Master/Slave Set: MySQLClone01 [mysql_data01] Stopped: [ node1 node2 ] mysql_fs01 (ocf::heartbeat:Filesystem): Stopped mysql_service01 (ocf::heartbeat:mysql): Stopped mysql_VIP01 (ocf::heartbeat:IPaddr2): Stopped ``` 提交修改并查看集群状态 `node1执行` ```bash pcs cluster cib-push 
clust_cfg ``` ```bash pcs status ``` ``` Cluster name: mysql_cluster Stack: corosync Current DC: node1 (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum Last updated: Mon Jul 29 06:51:22 2019 Last change: Mon Jul 29 02:49:38 2019 by root via cibadmin on node1 2 nodes configured 5 resources configured Online: [ node1 node2 ] Full list of resources: Master/Slave Set: MySQLClone01 [mysql_data01] Masters: [ node1 ] Slaves: [ node2 ] mysql_fs01 (ocf::heartbeat:Filesystem): Started node1 mysql_service01 (ocf::heartbeat:mysql): Started node1 mysql_VIP01 (ocf::heartbeat:IPaddr2): Started node1 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled ``` Once the configuration is committed, Pacemaker will: - start DRBD on the cluster nodes - promote one node to DRBD primary - mount the filesystem, configure the cluster IP address and start MariaDB on that same node - begin monitoring the resources Test the MariaDB service by connecting to the virtual IP on port 3306 with telnet `run on a client` ```bash telnet 172.16.0.200 3306 ``` ``` Trying 172.16.0.200... Connected to 172.16.0.200. Escape character is '^]'. GHost '172.16.0.200' is not allowed to connect to this MariaDB serverConnection closed by foreign host. ``` ### Command reference Show cluster status: ```bash pcs status ``` Show the current cluster configuration: ```bash pcs config ``` Enable the cluster to start on boot: ```bash pcs cluster enable --all ``` Start the cluster: ```bash pcs cluster start --all ``` Show cluster resource status: ```bash pcs resource show ``` Validate the cluster configuration: ```bash crm_verify -L -V ``` Debug-start a resource: ```bash pcs resource debug-start resource ``` Put a node into standby: ```bash pcs cluster standby node1 ``` List cluster properties ```bash pcs property list ``` Check corosync membership ```bash corosync-cmapctl | grep members ``` Show corosync status ```bash pcs status corosync ``` Save the cluster configuration to a file ```bash pcs cluster cib filename ``` Create a resource in a file without applying it to the cluster ```bash pcs -f testfile1 resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s ``` Push the file-based configuration to the cluster ```bash pcs cluster cib-push filename ``` Back up the cluster configuration ```bash pcs config backup filename ``` Restore the cluster configuration ```bash pcs config restore [--local] [filename] #--local restores only the current node; if filename is omitted, the configuration is read from standard input ``` Add a cluster node ```bash pcs cluster node add node ``` Remove a cluster node ```bash pcs cluster node remove node ``` Show the parameters of a resource agent ```bash pcs resource describe standard:provider:type|type #e.g. pcs resource describe ocf:heartbeat:IPaddr2 ``` Put a node into standby ```bash pcs cluster standby node | --all ``` Take a node out of standby ```bash pcs cluster unstandby node | --all ``` Destroy the cluster configuration (`warning: this permanently removes the cluster configuration you created`) ```bash pcs cluster stop pcs cluster destroy ``` References: https://www.lisenet.com/2016/activepassive-mysql-high-availability-pacemaker-cluster-with-drbd-on-centos-7/ https://linux.cn/article-3963-1.html https://www.howtoforge.com/tutorial/how-to-set-up-nginx-high-availability-with-pacemaker-corosync-on-centos-7/ http://www.alexlinux.com/pacemaker-corosync-nginx-cluster/ https://access.redhat.com/documentation/zh-cn/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-clusteradmin-haar
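A simple failover drill for this cluster, sketched below with the node and resource names configured above, is to drain the active node, watch the whole chain (DRBD promotion, filesystem, MariaDB, VIP) move to the other node, and then bring the node back:

```bash
# Put node1 into standby; Pacemaker promotes DRBD on node2 and restarts the stack there.
pcs cluster standby node1

# Watch until mysql_fs01 / mysql_service01 / mysql_VIP01 all show "Started node2".
watch -n2 pcs status

# Confirm MariaDB still answers on the floating IP from a client.
telnet 172.16.0.200 3306

# Bring node1 back; resources stay on node2 thanks to resource-stickiness=200.
pcs cluster unstandby node1
```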


兜兜    2018-07-25 22:23:58    2018-07-25 22:23:58   

docker containers container-orchestration swarm Alibaba Cloud
#### **开通阿里云的容器服务** #### **创建专有网络** ![](https://files.ynotes.cn/18-7-25/43311644.jpg) #### **创建交换机** ![](https://files.ynotes.cn/18-7-25/43311644.jpg) #### **创建swarm集群** ![](https://files.ynotes.cn/18-7-25/85719142.jpg) ![](https://files.ynotes.cn/18-7-25/91643015.jpg) ![](https://files.ynotes.cn/18-7-25/99693706.jpg) ![](https://files.ynotes.cn/18-7-25/5548867.jpg) ![](https://files.ynotes.cn/18-7-25/76142867.jpg) ![](https://files.ynotes.cn/18-7-25/36572329.jpg) #### **创建编排模板** ```yaml version: '2' services: db: image: mysql:5.7 restart: always container_name: blog-db environment: MYSQL_ROOT_PASSWORD: 123456 MYSQL_DATABASE: blog MYSQL_USER: blog MYSQL_PASSWORD: 123456 volumes: - /root/blog/mysql/conf/mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf - /root/blog/mysql/db_init_sql:/docker-entrypoint-initdb.d - /root/blog/mysql/data:/var/lib/mysql - /root/blog/mysql/log:/var/log networks: default: aliases: - db uwsgi-django: image: 'registry.cn-shenzhen.aliyuncs.com/sys/uwsgi-django:1.9.5' restart: always depends_on: - db container_name: blog-uwsgi-django environment: DB_NAME: blog DB_USER: blog DB_PASS: 123456 DB_PORT: 3306 WEB_URL: www.ynotes.cn volumes: - /root/blog/uwsgi-django/my_project:/usr/src/app/my_project - /root/blog/uwsgi-django/conf:/usr/src/app/uwsgi/conf command: uwsgi /usr/src/app/uwsgi/conf/config.ini networks: default: aliases: - uwsgi-django nginx: image: nginx:stable restart: always depends_on: - uwsgi-django container_name: blog-nginx environment: NGINX_HOST: www.ynotes.cn NGINX_PORT: 80 NGINX_SSL_PORT: 443 UWSGI_PORT: 8888 ports: - 8080:80 volumes: - /root/blog/nginx/conf/nginx.conf:/etc/nginx/nginx.conf - /root/blog/nginx/conf/mysite.template:/etc/nginx/conf.d/mysite.template - /root/blog/nginx/ssl/fullchain.pem:/etc/nginx/ssl/blog.itisme.co/fullchain.pem - /root/blog/nginx/ssl/privkey.pem:/etc/nginx/ssl/blog.itisme.co/privkey.pem - /root/blog/uwsgi-django/my_project/my_project/upload:/data/app/my_project/my_project/upload - /root/blog/uwsgi-django/my_project/my_project/static_all:/data/app/my_project/my_project/static_all - /root/blog/uwsgi-django/my_project/my_project/uwsgi_params:/data/app/my_project/my_project/uwsgi_params - /root/blog/nginx/log/:/var/log/nginx/ command: /bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/blog.itisme.co.conf && nginx -g 'daemon off;'" networks: default: driver: overlay ``` #### **配置安全组规则,增加22端口(方便远程拷贝项目)** ![](https://files.ynotes.cn/18-7-25/52826166.jpg) #### **上传blog项目到容器主机/root目录** ```bash $ tar xvf blog.tar.gz ``` #### **创建应用** ![](https://files.ynotes.cn/18-7-25/35869228.jpg) ![](https://files.ynotes.cn/18-7-25/31022136.jpg) #### **查看启动的服务** ![](https://files.ynotes.cn/18-7-26/43336011.jpg) #### **配置SLB负载均衡证书(把申请的证书和私钥粘贴到下面的服务器证书相对应的文本框中)** ![](https://files.ynotes.cn/18-7-26/34424767.jpg) #### **配置SLB负载端口映射(443->8080)** ![](https://files.ynotes.cn/18-7-26/11656277.jpg) ![](https://files.ynotes.cn/18-7-26/92529607.jpg) ![](https://files.ynotes.cn/18-7-26/27217610.jpg) #### **配置dns解析 `www.ynotes.cn` 到slb** #### **访问`https://www.ynotes.cn`** ![](https://files.ynotes.cn/18-7-26/68765124.jpg)
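After the application is created, a quick sanity check can be run on the container host (a sketch based on the container names and the 8080:80 port mapping declared in the template above; the SLB then forwards 443 to host port 8080):

```bash
# The template names the containers explicitly, so filter on the "blog-" prefix.
docker ps --filter "name=blog-" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# nginx publishes container port 80 on host port 8080; request the site by its Host header.
curl -I -H "Host: www.ynotes.cn" http://127.0.0.1:8080/
```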


兜兜    2018-07-23 18:26:17    2018-07-23 18:26:17   

docker docker-compose personal cloud storage nextcloud
![](https://files.ynotes.cn/18-7-23/70377481.jpg) #### **Project directory layout** ```bash nextcloud/ ├── db.env ├── docker-compose.yml ├── mysql │   ├── conf │   │   └── mysqld.cnf │   ├── data │   └── log ├── nextcloud └── nginx ├── conf │   ├── conf.d │   │   ├── certs │   │   │   └── pan.itisme.co │   │   │   ├── fullchain1.pem │   │   │   └── privkey1.pem │   │   └── pan.itisme.co.conf │   └── nginx.conf └── log ``` #### **Create the directory that holds the project's data and configuration** ```bash $ mkdir /data/docker_project/nextcloud -p $ cd /data/docker_project/nextcloud ``` #### **Create the directories used by the MySQL container** ```bash $ mkdir mysql/{conf,data,log} -p $ chmod 777 mysql/log ``` conf: MySQL configuration files data: MySQL data directory log: MySQL logs, permissions changed to 777   #### **Edit the MySQL configuration file mysql/conf/mysqld.cnf** ```ini [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock symbolic-links=0 log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid default-time-zone = '+08:00' character-set-server = utf8mb4 collation-server = utf8mb4_unicode_ci character-set-client-handshake = FALSE innodb_buffer_pool_size = 128M sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES [client] default-character-set = utf8mb4 [mysql] default-character-set = utf8mb4 ``` #### **Download nextcloud-13.0.4** ```bash $ cd /data/docker_project/nextcloud $ wget https://download.nextcloud.com/server/releases/nextcloud-13.0.4.zip $ unzip nextcloud-13.0.4.zip #unzip into the project's nextcloud directory $ mkdir nextcloud/data #nextcloud data directory $ chown -R 33:root nextcloud/{apps,config,data} #change the owner to uid 33, the uid the container writes files as; adjust to your environment $ chmod 0700 nextcloud/data #set the permissions to 0700; the Nextcloud code checks for exactly this mode ``` #### **Create the directories used by the nginx container** ```bash $ mkdir nginx/conf/conf.d/certs/pan.itisme.co -p #certificate directory $ mkdir nginx/log $ chmod 777 nginx/log ``` conf: nginx configuration files log: log directory #### **Edit nginx/conf/nginx.conf** ```nginx user nginx; worker_processes 1; pid /var/run/nginx.pid; error_log /var/log/nginx.error.log warn; events { use epoll; worker_connections 10240; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #access_log /dev/null; access_log /var/log/nginx/nginx.access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } ``` #### **Edit nginx/conf/conf.d/pan.itisme.co.conf** ```nginx upstream php-handler { server app:9000; } server { listen 80; server_name pan.itisme.co; return 301 https://$server_name$request_uri; } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name pan.itisme.co; ssl_certificate /etc/nginx/conf.d/certs/pan.itisme.co/fullchain1.pem; ssl_certificate_key /etc/nginx/conf.d/certs/pan.itisme.co/privkey1.pem; # Add headers to serve security related headers # Before enabling Strict-Transport-Security headers please read into this # topic first. # add_header Strict-Transport-Security "max-age=15768000; # includeSubDomains; preload;"; # # WARNING: Only add the preload option once you read about # the consequences in https://hstspreload.org/. This option # will add the domain to a hardcoded list that is shipped # in all major browsers and getting removed from this list # could take several months.
add_header X-Content-Type-Options nosniff; add_header X-XSS-Protection "1; mode=block"; add_header X-Robots-Tag none; add_header X-Download-Options noopen; add_header X-Permitted-Cross-Domain-Policies none; root /var/www/html; location = /robots.txt { allow all; log_not_found off; access_log off; } # The following 2 rules are only needed for the user_webfinger app. # Uncomment it if you're planning to use this app. #rewrite ^/.well-known/host-meta /public.php?service=host-meta last; #rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json # last; location = /.well-known/carddav { return 301 $scheme://$host/remote.php/dav; } location = /.well-known/caldav { return 301 $scheme://$host/remote.php/dav; } # set max upload size client_max_body_size 10G; fastcgi_buffers 64 4K; # Enable gzip but do not remove ETag headers gzip on; gzip_vary on; gzip_comp_level 4; gzip_min_length 256; gzip_proxied expired no-cache no-store private no_last_modified no_etag auth; gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy; # Uncomment if your server is build with the ngx_pagespeed module # This module is currently not supported. #pagespeed off; location / { rewrite ^ /index.php$uri; } location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ { deny all; } location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { deny all; } location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) { fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; # fastcgi_param HTTPS on; #Avoid sending the security headers twice fastcgi_param modHeadersAvailable true; fastcgi_param front_controller_active true; fastcgi_pass php-handler; fastcgi_intercept_errors on; fastcgi_request_buffering off; } location ~ ^/(?:updater|ocs-provider)(?:$|/) { try_files $uri/ =404; index index.php; } # Adding the cache control header for js and css files # Make sure it is BELOW the PHP block location ~ \.(?:css|js|woff|svg|gif)$ { try_files $uri /index.php$uri$is_args$args; add_header Cache-Control "public, max-age=15778463"; # Add headers to serve security related headers (It is intended to # have those duplicated to the ones above) # Before enabling Strict-Transport-Security headers please read into # this topic first. # add_header Strict-Transport-Security "max-age=15768000; # includeSubDomains; preload;"; # # WARNING: Only add the preload option once you read about # the consequences in https://hstspreload.org/. This option # will add the domain to a hardcoded list that is shipped # in all major browsers and getting removed from this list # could take several months. 
add_header X-Content-Type-Options nosniff; add_header X-XSS-Protection "1; mode=block"; add_header X-Robots-Tag none; add_header X-Download-Options noopen; add_header X-Permitted-Cross-Domain-Policies none; # Optional: Don't log access to assets access_log off; } location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ { try_files $uri /index.php$uri$is_args$args; # Optional: Don't log access to other assets access_log off; } } ``` #### **拷贝证书到nginx/conf/conf.d/certs/pan.itisme.co目录** ```bash $ scp fullchain.pem root@docker-host:/data/docker_project/nextcloud/nginx/conf/conf.d/certs/pan.itisme.co $ scp privkey.pem root@docker-host:/data/docker_project/nextcloud/nginx/conf/conf.d/certs/pan.itisme.co ``` #### **编辑docker-compose.yml (客户端->nginx->php->db)** ```bash $ vim docker-compose.yml ``` ```yaml version: '3' services: db: image: mysql:5.7 ports: - "3306:3306" volumes: - ./mysql/conf/mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf - ./mysql/data:/var/lib/mysql/:rw - ./mysql/log:/var/log/ env_file: - db.env app: image: nextcloud:fpm depends_on: - db volumes: - ./nextcloud:/var/www/html restart: always web: image: nginx ports: - 80:80 - 443:443 depends_on: - app volumes: - ./nextcloud:/var/www/html - ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf:ro - ./nginx/conf/conf.d:/etc/nginx/conf.d/:ro - ./nginx/log/:/var/log/nginx/:rw restart: always ``` #### **增加db.env文件,数据库的环境变量** ```bash MYSQL_PASSWORD=123456 MYSQL_DATABASE=nextcloud MYSQL_USER=nextcloud MYSQL_ROOT_PASSWORD=123456 ``` #### **启动项目** ```bash $ docker-compose up ``` #### **启动项目后台运行** ```bash $ docker-compose up -d ``` #### **查看docker进程** ```bash $ docker-compose ps ``` ``` Name Command State Ports ------------------------------------------------------------------------------------------------ nextcloud_app_1 /entrypoint.sh php-fpm Up 9000/tcp nextcloud_db_1 docker-entrypoint.sh mysqld Up 0.0.0.0:3306->3306/tcp nextcloud_web_1 nginx -g daemon off; Up 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp ``` #### **浏览器访问https://pan.itisme.co/** ![](https://files.ynotes.cn/18-7-25/23443164.jpg)
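Once the stack is running, the installation can also be verified from inside the `app` container with Nextcloud's built-in `occ` tool (run as the web user); a minimal sketch using the service names from the docker-compose.yml above:

```bash
# Run occ as www-data inside the php-fpm container (service "app").
docker-compose exec --user www-data app php occ status

# Follow the nginx access log that is bind-mounted to ./nginx/log/.
tail -f nginx/log/nginx.access.log
```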


兜兜    2018-07-23 09:39:28    2019-11-14 14:33:28   

iSCSI SAN
### 准备工作 所有节点: - 系统: `CentOS7.6` iSCSI : - IP/主机:`172.16.0.3(node1)` 从节点: - IP/主机:`172.16.0.4(node2)` ### 创建 iSCSI target #### 创建后备存储设备 `fdisk创建一个分区/dev/vdb1` &emsp; #### 安装targetcli ```bash yum -y install targetcli ``` &emsp; #### 使用targetcli管理iSCSI targets ```bash targetcli ``` ```bash targetcli shell version 2.1.fb46 Copyright 2011-2013 by Datera, Inc and others. For help on commands, type 'help'. /> ls o- / .................................................................................. [...] o- backstores ....................................................................... [...] | o- block ........................................................... [Storage Objects: 0] | o- fileio .......................................................... [Storage Objects: 0] | o- pscsi ........................................................... [Storage Objects: 0] | o- ramdisk ......................................................... [Storage Objects: 0] o- iscsi ..................................................................... [Targets: 0] o- loopback .................................................................. [Targets: 0] ``` &emsp; #### 创建block backstores 创建一个新的block ```bash /backstores/block> create dev=/dev/vdb1 name=vdb1 Created block storage object vdb1 using /dev/vdb1. /backstores/block> ls o- block ................................................................ [Storage Objects: 1] o- vdb1 ....................................... [/dev/vdb1 (0 bytes) write-thru deactivated] o- alua ................................................................. [ALUA Groups: 1] o- default_tg_pt_gp ..................................... [ALUA state: Active/optimized] ``` &emsp; #### 创建iSCSI targets ```bash />cd /iscsi /iscsi>create wwn=iqn.2019-07.com.example:servers Created target iqn.2018-12.com.example:servers. Created TPG 1. Global pref auto_add_default_portal=true Created default portal listening on all IPs (0.0.0.0), port 3260. /iscsi> ls o- iscsi ......................................................................... [Targets: 1] o- iqn.2019-07.com.example:servers ................................................ [TPGs: 1] o- tpg1 ............................................................ [no-gen-acls, no-auth] o- acls ....................................................................... [ACLs: 0] o- luns ....................................................................... [LUNs: 0] o- portals ................................................................. [Portals: 1] o- 0.0.0.0:3260 .................................................................. [OK] ``` &emsp; #### 添加ACLs ```bash />cd iscsi/iqn.2019-07.com.example:servers/tpg1/acls /iscsi/iqn.20...ers/tpg1/acls> create wwn=iqn.2018-12.com.example:node1 Created Node ACL for iqn.2018-12.com.example:node1 ``` &emsp; #### 添加LUNs到iSCSI target ```bash /> cd iscsi/iqn.2018-12.com.example:servers/tpg1/luns /iscsi/iqn.20...ers/tpg1/luns> create /backstores/block/vdb1 Created LUN 0. 
Created LUN 0->0 mapping in node ACL iqn.2018-12.com.example:node1 /iscsi/iqn.20...ers/tpg1/luns> exit Global pref auto_save_on_exit=true Configuration saved to /etc/target/saveconfig.json ``` &emsp; #### 启动和开启target服务 ```bash systemctl start target systemctl enable target ``` &emsp; &emsp; ### 创建 iSCSI Initiator #### 安装iscsi-initiator包 ```bash yum -y install iscsi-initiator-utils ``` &emsp; #### 设置iSCSI Initiator名 ```bash cat /etc/iscsi/initiatorname.iscsi ``` ``` InitiatorName=iqn.2019-07.com.example:node1 ``` &emsp; #### 重启iscsid ```bash systemctl restart iscsid ``` &emsp; #### 发现LUNs ```bash iscsiadm --mode discovery --type sendtargets --portal 172.16.0.4 --discover ``` ``` 172.16.0.4:3260,1 iqn.2019-07.com.example:servers ``` 发现之后数据目录更新 ```bash ls -l /var/lib/iscsi/nodes ``` ``` drw------- 3 root root 4096 Jul 23 03:21 iqn.2019-07.com.example:servers ``` ```bash ls -l /var/lib/iscsi/send_targets/172.16.0.4,3260/ ``` ``` lrwxrwxrwx 1 root root 70 Jul 23 03:21 iqn.2019-07.com.example:servers,172.16.0.4,3260,1,default -> /var/lib/iscsi/nodes/iqn.2019-07.com.example:servers/172.16.0.4,3260,1 -rw------- 1 root root 549 Jul 23 03:21 st_config ``` &emsp; #### 创建连接(默认持久连接,重启生效) ```bash iscsiadm --mode node --targetname iqn.2019-07.com.example:servers --login ``` ``` Logging in to [iface: default, target: iqn.2019-07.com.example:servers, portal: 172.16.0.4,3260] (multiple) Login to [iface: default, target: iqn.2019-07.com.example:servers, portal: 172.16.0.4,3260] successful. ``` 监控连接 ```bash iscsiadm --mode node -P 1 ``` ``` Target: iqn.2019-07.com.example:servers Portal: 172.16.0.4:3260,1 Iface Name: default ``` 列出scsi设备 ```bash lsscsi ``` ``` [1:0:0:0] cd/dvd QEMU QEMU DVD-ROM 2.5+ /dev/sr0 [2:0:0:0] disk LIO-ORG vdb1 4.0 /dev/sda ``` &emsp; #### 移除连接 断开连接 ```bash iscsiadm --mode node --targetname iqn.2018-12.com.example:servers --portal 10.0.2.13 -u ``` ``` Logging out of session [sid: 1, target: iqn.2018-12.com.example:servers, portal: 10.0.2.13,3260] Logout of [sid: 1, target: iqn.2018-12.com.example:servers, portal: 10.0.2.13,3260] successful. ``` 删除IQN子目录和内容 ```bash iscsiadm --mode node --targetname iqn.2018-12.com.example:servers --portal 10.0.2.13 -o delete ``` `停止iscsi服务,移除/var/lib/iscsi/nodes下所有文件清理配置,重启iscsi服务,开始discovery再次登录` &emsp; #### 格式化iSCSI设备 ```bash mkfs.ext4 /dev/sda #也可以对设备进行分区 ``` #### 挂载iscsi设备 ```bash blkid /dev/sda ``` ``` /dev/sda: UUID="dce62896-9ac9-42cf-aa3b-38344974c309" TYPE="ext4" ``` #### 设置开机挂载 ```bash vim /etc/fstab ``` ``` UUID="dce62896-9ac9-42cf-aa3b-38344974c309" /test ext4 defaults 0 0 ``` 挂载/etc/fstab中的配置 ```bash mount -a ``` 查看挂载信息 ```bash df -h ``` ``` Filesystem Size Used Avail Use% Mounted on /dev/vda1 25G 1.8G 22G 8% / devtmpfs 486M 0 486M 0% /dev tmpfs 496M 0 496M 0% /dev/shm tmpfs 496M 50M 446M 11% /run tmpfs 496M 0 496M 0% /sys/fs/cgroup tmpfs 100M 0 100M 0% /run/user/0 /dev/sda 2.0G 6.0M 1.8G 1% /test ``` 应用场景:利用服务器多余的磁盘空间整合成大的逻辑卷用于备份(流程:服务器创建iSCSI targets,客户端通过iSCSI initators登陆获取targets。客户端使用LVM对磁盘创建一个总的逻辑卷)
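Two follow-up operations that often come up with this setup: making the login behaviour across reboots explicit, and rescanning an existing session after the backing LUN on the target has been grown (both are standard iscsiadm operations, shown here with the target name created above):

```bash
# Mark the node record to log in automatically at boot (the default for discovered
# nodes, but setting it explicitly documents the intent).
iscsiadm --mode node --targetname iqn.2019-07.com.example:servers \
         --portal 172.16.0.4 --op update -n node.startup -v automatic

# Rescan the active session so the initiator notices a resized LUN.
iscsiadm --mode session --rescan
```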


Page 11 of 15