兜兜    2018-07-23 18:26:17    2018-07-23 18:26:17   

docker docker-compose 个人网盘 nextcloud
![](https://files.ynotes.cn/18-7-23/70377481.jpg)

#### **项目目录结构**

```bash
nextcloud/
├── db.env
├── docker-compose.yml
├── mysql
│   ├── conf
│   │   └── mysqld.cnf
│   ├── data
│   └── log
├── nextcloud
└── nginx
    ├── conf
    │   ├── conf.d
    │   │   ├── certs
    │   │   │   └── pan.itisme.co
    │   │   │       ├── fullchain1.pem
    │   │   │       └── privkey1.pem
    │   │   └── pan.itisme.co.conf
    │   └── nginx.conf
    └── log
```

#### **新建docker项目数据配置存放目录**

```bash
$ mkdir -p /data/docker_project/nextcloud
$ cd /data/docker_project/nextcloud
```

#### **创建mysql容器使用的目录**

```bash
$ mkdir -p mysql/{conf,data,log}
$ chmod 777 mysql/log
```

conf:存放mysql配置文件
data:存放mysql数据的目录
log:存放mysql日志,修改权限为777

#### **编辑mysql配置文件mysql/conf/mysqld.cnf**

```ini
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
default-time-zone = '+08:00'
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
character-set-client-handshake = FALSE
innodb_buffer_pool_size = 128M
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[client]
default-character-set = utf8mb4

[mysql]
default-character-set = utf8mb4
```

#### **下载nextcloud-13.0.4**

```bash
$ cd /data/docker_project/nextcloud
$ wget https://download.nextcloud.com/server/releases/nextcloud-13.0.4.zip
$ unzip nextcloud-13.0.4.zip       #解压到项目的nextcloud目录
$ mkdir nextcloud/data             #nextcloud数据目录
$ chown -R 33:root nextcloud/{apps,config,data}  #修改目录属主,docker运行时生成的文件默认为uid 33,根据实际情况修改
$ chmod 0700 nextcloud/data        #修改目录的权限为0700,nextcloud代码会检验是否为该权限
```

#### **创建nginx容器使用的目录**

```bash
$ mkdir -p nginx/conf/conf.d/certs/pan.itisme.co   #证书存放目录
$ mkdir nginx/log
$ chmod 777 nginx/log
```

conf:存放nginx的配置文件
log:存放日志目录

#### **编辑nginx/conf/nginx.conf**

```nginx
user nginx;
worker_processes 1;
pid /var/run/nginx.pid;
error_log /var/log/nginx/nginx.error.log warn;

events {
    use epoll;
    worker_connections 10240;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log /dev/null;
    access_log /var/log/nginx/nginx.access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
```

#### **编辑nginx/conf/conf.d/pan.itisme.co.conf**

```nginx
upstream php-handler {
    server app:9000;
}

server {
    listen 80;
    server_name pan.itisme.co;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name pan.itisme.co;

    ssl_certificate /etc/nginx/conf.d/certs/pan.itisme.co/fullchain1.pem;
    ssl_certificate_key /etc/nginx/conf.d/certs/pan.itisme.co/privkey1.pem;

    # Add headers to serve security related headers
    # Before enabling Strict-Transport-Security headers please read into this
    # topic first.
    # add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
    #
    # WARNING: Only add the preload option once you read about
    # the consequences in https://hstspreload.org/. This option
    # will add the domain to a hardcoded list that is shipped
    # in all major browsers and getting removed from this list
    # could take several months.
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;
    add_header X-Download-Options noopen;
    add_header X-Permitted-Cross-Domain-Policies none;

    root /var/www/html;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # The following 2 rules are only needed for the user_webfinger app.
    # Uncomment it if you're planning to use this app.
    #rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    #rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;

    location = /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
    }
    location = /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
    }

    # set max upload size
    client_max_body_size 10G;
    fastcgi_buffers 64 4K;

    # Enable gzip but do not remove ETag headers
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    # Uncomment if your server is built with the ngx_pagespeed module
    # This module is currently not supported.
    #pagespeed off;

    location / {
        rewrite ^ /index.php$uri;
    }

    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
    }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }

    location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        # fastcgi_param HTTPS on;
        #Avoid sending the security headers twice
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass php-handler;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }

    location ~ ^/(?:updater|ocs-provider)(?:$|/) {
        try_files $uri/ =404;
        index index.php;
    }

    # Adding the cache control header for js and css files
    # Make sure it is BELOW the PHP block
    location ~ \.(?:css|js|woff|svg|gif)$ {
        try_files $uri /index.php$uri$is_args$args;
        add_header Cache-Control "public, max-age=15778463";
        # Add headers to serve security related headers (It is intended to
        # have those duplicated to the ones above)
        # Before enabling Strict-Transport-Security here, read the warning above.
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Robots-Tag none;
        add_header X-Download-Options noopen;
        add_header X-Permitted-Cross-Domain-Policies none;
        # Optional: Don't log access to assets
        access_log off;
    }

    location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
        try_files $uri /index.php$uri$is_args$args;
        # Optional: Don't log access to other assets
        access_log off;
    }
}
```

#### **拷贝证书到nginx/conf/conf.d/certs/pan.itisme.co目录**

```bash
$ scp fullchain.pem root@docker-host:/data/docker_project/nextcloud/nginx/conf/conf.d/certs/pan.itisme.co/fullchain1.pem
$ scp privkey.pem root@docker-host:/data/docker_project/nextcloud/nginx/conf/conf.d/certs/pan.itisme.co/privkey1.pem
```

(注意:nginx配置中引用的文件名为fullchain1.pem/privkey1.pem,拷贝时重命名保持一致)

#### **编辑docker-compose.yml (客户端->nginx->php->db)**

```bash
$ vim docker-compose.yml
```
```yaml
version: '3'
services:
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    volumes:
      - ./mysql/conf/mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
      - ./mysql/data:/var/lib/mysql/:rw
      - ./mysql/log:/var/log/
    env_file:
      - db.env
  app:
    image: nextcloud:fpm
    depends_on:
      - db
    volumes:
      - ./nextcloud:/var/www/html
    restart: always
  web:
    image: nginx
    ports:
      - 80:80
      - 443:443
    depends_on:
      - app
    volumes:
      - ./nextcloud:/var/www/html
      - ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf/conf.d:/etc/nginx/conf.d/:ro
      - ./nginx/log/:/var/log/nginx/:rw
    restart: always
```

#### **增加db.env文件,数据库的环境变量**

```bash
MYSQL_PASSWORD=123456
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
MYSQL_ROOT_PASSWORD=123456
```

#### **启动项目**

```bash
$ docker-compose up
```

#### **启动项目后台运行**

```bash
$ docker-compose up -d
```

#### **查看docker进程**

```bash
$ docker-compose ps
```
```
     Name                    Command               State                    Ports
------------------------------------------------------------------------------------------------
nextcloud_app_1   /entrypoint.sh php-fpm           Up      9000/tcp
nextcloud_db_1    docker-entrypoint.sh mysqld      Up      0.0.0.0:3306->3306/tcp
nextcloud_web_1   nginx -g daemon off;             Up      0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
```

#### **浏览器访问https://pan.itisme.co/**

![](https://files.ynotes.cn/18-7-25/23443164.jpg)
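docker-compose会根据depends_on关系按db→app→web的先后顺序启动容器,这个顺序本质上是一次拓扑排序。下面用一小段Python示意(依赖关系为手写示例,并非解析真实的docker-compose.yml):

```python
# 示意:按 depends_on 关系推导容器启动顺序(手写依赖,仅作演示)
deps = {
    "db": [],         # db 无依赖,最先启动
    "app": ["db"],    # app depends_on db
    "web": ["app"],   # web depends_on app
}

def start_order(deps):
    """对 depends_on 关系做深度优先拓扑排序,返回启动顺序"""
    order, seen = [], set()

    def visit(svc):
        for d in deps[svc]:
            if d not in seen:
                visit(d)
        if svc not in seen:
            seen.add(svc)
            order.append(svc)

    for svc in deps:
        visit(svc)
    return order

print(start_order(deps))  # ['db', 'app', 'web']
```

注意depends_on只保证启动先后,不保证db已经可以接受连接,生产环境通常还需要healthcheck或应用层重试。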

兜兜    2018-07-23 09:39:28    2019-11-14 14:33:28   

iSCSI SAN
### 准备工作

所有节点:
- 系统: `CentOS7.6`

iSCSI target节点:
- IP/主机:`172.16.0.4(node2)`

iSCSI initiator节点:
- IP/主机:`172.16.0.3(node1)`

### 创建 iSCSI target

#### 创建后备存储设备

`fdisk创建一个分区/dev/vdb1`

#### 安装targetcli

```bash
yum -y install targetcli
```

#### 使用targetcli管理iSCSI targets

```bash
targetcli
```
```bash
targetcli shell version 2.1.fb46
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / .................................................................................. [...]
  o- backstores ....................................................................... [...]
  | o- block ........................................................... [Storage Objects: 0]
  | o- fileio .......................................................... [Storage Objects: 0]
  | o- pscsi ........................................................... [Storage Objects: 0]
  | o- ramdisk ......................................................... [Storage Objects: 0]
  o- iscsi ..................................................................... [Targets: 0]
  o- loopback .................................................................. [Targets: 0]
```

#### 创建block backstores

创建一个新的block

```bash
/> cd /backstores/block
/backstores/block> create dev=/dev/vdb1 name=vdb1
Created block storage object vdb1 using /dev/vdb1.
/backstores/block> ls
o- block ................................................................ [Storage Objects: 1]
  o- vdb1 ....................................... [/dev/vdb1 (0 bytes) write-thru deactivated]
    o- alua ................................................................. [ALUA Groups: 1]
      o- default_tg_pt_gp ..................................... [ALUA state: Active/optimized]
```

#### 创建iSCSI targets

```bash
/> cd /iscsi
/iscsi> create wwn=iqn.2019-07.com.example:servers
Created target iqn.2019-07.com.example:servers.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> ls
o- iscsi ......................................................................... [Targets: 1]
  o- iqn.2019-07.com.example:servers ................................................ [TPGs: 1]
    o- tpg1 ............................................................ [no-gen-acls, no-auth]
      o- acls ....................................................................... [ACLs: 0]
      o- luns ....................................................................... [LUNs: 0]
      o- portals ................................................................. [Portals: 1]
        o- 0.0.0.0:3260 .................................................................. [OK]
```

#### 添加ACLs

```bash
/> cd iscsi/iqn.2019-07.com.example:servers/tpg1/acls
/iscsi/iqn.20...ers/tpg1/acls> create wwn=iqn.2019-07.com.example:node1
Created Node ACL for iqn.2019-07.com.example:node1
```

#### 添加LUNs到iSCSI target

```bash
/> cd iscsi/iqn.2019-07.com.example:servers/tpg1/luns
/iscsi/iqn.20...ers/tpg1/luns> create /backstores/block/vdb1
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.2019-07.com.example:node1
/iscsi/iqn.20...ers/tpg1/luns> exit
Global pref auto_save_on_exit=true
Configuration saved to /etc/target/saveconfig.json
```

#### 启动和开启target服务

```bash
systemctl start target
systemctl enable target
```

### 创建 iSCSI Initiator

#### 安装iscsi-initiator包

```bash
yum -y install iscsi-initiator-utils
```

#### 设置iSCSI Initiator名

```bash
cat /etc/iscsi/initiatorname.iscsi
```
```
InitiatorName=iqn.2019-07.com.example:node1
```

#### 重启iscsid

```bash
systemctl restart iscsid
```

#### 发现LUNs

```bash
iscsiadm --mode discovery --type sendtargets --portal 172.16.0.4 --discover
```
```
172.16.0.4:3260,1 iqn.2019-07.com.example:servers
```

发现之后数据目录更新

```bash
ls -l /var/lib/iscsi/nodes
```
```
drw------- 3 root root 4096 Jul 23 03:21 iqn.2019-07.com.example:servers
```

```bash
ls -l /var/lib/iscsi/send_targets/172.16.0.4,3260/
```
```
lrwxrwxrwx 1 root root  70 Jul 23 03:21 iqn.2019-07.com.example:servers,172.16.0.4,3260,1,default -> /var/lib/iscsi/nodes/iqn.2019-07.com.example:servers/172.16.0.4,3260,1
-rw------- 1 root root 549 Jul 23 03:21 st_config
```

#### 创建连接(默认持久连接,重启生效)

```bash
iscsiadm --mode node --targetname iqn.2019-07.com.example:servers --login
```
```
Logging in to [iface: default, target: iqn.2019-07.com.example:servers, portal: 172.16.0.4,3260] (multiple)
Login to [iface: default, target: iqn.2019-07.com.example:servers, portal: 172.16.0.4,3260] successful.
```

监控连接

```bash
iscsiadm --mode node -P 1
```
```
Target: iqn.2019-07.com.example:servers
    Portal: 172.16.0.4:3260,1
        Iface Name: default
```

列出scsi设备

```bash
lsscsi
```
```
[1:0:0:0]    cd/dvd  QEMU     QEMU DVD-ROM     2.5+  /dev/sr0
[2:0:0:0]    disk    LIO-ORG  vdb1             4.0   /dev/sda
```

#### 移除连接

断开连接

```bash
iscsiadm --mode node --targetname iqn.2019-07.com.example:servers --portal 172.16.0.4 -u
```
```
Logging out of session [sid: 1, target: iqn.2019-07.com.example:servers, portal: 172.16.0.4,3260]
Logout of [sid: 1, target: iqn.2019-07.com.example:servers, portal: 172.16.0.4,3260] successful.
```

删除IQN子目录和内容

```bash
iscsiadm --mode node --targetname iqn.2019-07.com.example:servers --portal 172.16.0.4 -o delete
```

`停止iscsi服务,移除/var/lib/iscsi/nodes下所有文件清理配置,重启iscsi服务,开始discovery再次登录`

#### 格式化iSCSI设备

```bash
mkfs.ext4 /dev/sda   #也可以对设备进行分区
```

#### 挂载iscsi设备

```bash
blkid /dev/sda
```
```
/dev/sda: UUID="dce62896-9ac9-42cf-aa3b-38344974c309" TYPE="ext4"
```

#### 设置开机挂载

```bash
vim /etc/fstab
```
```
UUID="dce62896-9ac9-42cf-aa3b-38344974c309" /test ext4 defaults,_netdev 0 0
```

(网络块设备建议加上`_netdev`选项,避免网络就绪前挂载导致开机卡住)

挂载/etc/fstab中的配置

```bash
mount -a
```

查看挂载信息

```bash
df -h
```
```
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        25G  1.8G   22G   8% /
devtmpfs        486M     0  486M   0% /dev
tmpfs           496M     0  496M   0% /dev/shm
tmpfs           496M   50M  446M  11% /run
tmpfs           496M     0  496M   0% /sys/fs/cgroup
tmpfs           100M     0  100M   0% /run/user/0
/dev/sda        2.0G  6.0M  1.8G   1% /test
```

应用场景:利用服务器多余的磁盘空间整合成大的逻辑卷用于备份(流程:服务器创建iSCSI targets,客户端通过iSCSI initiators登录获取targets,客户端使用LVM对磁盘创建一个总的逻辑卷)
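上文target和initiator的名称都遵循IQN格式`iqn.<年-月>.<倒序域名>[:<自定义标识>]`。下面用一段简化的Python正则示意校验(仅覆盖常见写法,并非RFC 3720的完整规则):

```python
import re

# 简化版 IQN 格式:iqn.年-月.倒序域名[:自定义标识](仅演示,未覆盖 RFC 3720 全部细节)
IQN_RE = re.compile(r'^iqn\.\d{4}-\d{2}\.[a-z0-9]+(\.[a-z0-9-]+)+(:[^\s]+)?$')

def is_valid_iqn(name):
    """粗略校验一个 IQN 字符串是否符合常见格式"""
    return IQN_RE.match(name) is not None

print(is_valid_iqn("iqn.2019-07.com.example:servers"))  # True
print(is_valid_iqn("iqn.2019-07.com.example:node1"))    # True
print(is_valid_iqn("example:servers"))                  # False
```

tpg为no-gen-acls时,initiator名必须与ACL中的wwn完全一致才能登录,提前校验格式可以少踩一次登录失败的坑。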

兜兜    2018-07-22 17:00:39    2019-11-14 14:33:22   

高可用 DRBD
### 介绍

DRBD(Distributed Replicated Block Device)是一个用软件实现的、无共享的、服务器之间镜像块设备内容的存储复制解决方案。

#### DRBD的工作原理

```bash
          +-----------+
          |  文件系统  |
          +-----------+
                |
                V
         +--------------+
         |   块设备层    |
         | (/dev/drbd1) |
         +--------------+
          |            |
          V            V
 +-------------+   +------------+
 |  本地硬盘    |   |  远程硬盘   |
 | (/dev/hdb1) |   | (/dev/hdb1)|
 +-------------+   +------------+
     host1             host2
```

#### DRBD单主和双主模式

单主模式:`一个集群内一个资源在任何给定的时间内仅有一个primary角色,另一个为secondary。文件系统可以是ext3、ext4、xfs等`

双主模式:`对于一个资源,在任何给定的时刻该集群都有两个primary节点,也就是drbd两个节点均为primary,因此可以实现并发访问。需使用共享集群文件系统,例如gfs和ocfs`

#### DRBD的复制模式

三种模式:

`协议A:异步复制协议。本地写成功后立即返回,数据放在发送buffer中,可能丢失。`

`协议B:内存同步(半同步)复制协议。本地写成功并将数据发送到对方后立即返回,如果双机掉电,数据可能丢失。`

`协议C:同步复制协议。本地和对方写成功确认后返回。如果双机掉电或磁盘同时损坏,则数据可能丢失。`

**在使用时,一般用协议C。由于协议C是本地和对方都写成功时才认为写入成功,因此会有一定时延。**

### 准备环境

所有节点:
- 系统: `CentOS7.6`
- 同步硬盘:`/dev/vdb`

主节点:
- IP/主机:`172.16.0.3(node1)`

从节点:
- IP/主机:`172.16.0.4(node2)`

### 安装DRBD

#### `node1和node2执行`

导入GPG key和安装elrepo库

```bash
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
```

安装drbd软件包

```bash
yum install drbd90-utils kmod-drbd90 -y
```

加载drbd模块

```bash
modprobe drbd
echo drbd > /etc/modules-load.d/drbd.conf   #开机加载drbd模块
```

### 配置DRBD

#### `node1和node2执行`

配置global_common.conf文件

```bash
vim /etc/drbd.d/global_common.conf
```
```bash
global {
    usage-count no;   #是否参加DRBD使用统计,默认为yes。官方统计drbd的装机量,改为no
}
common {
    protocol C;       #DRBD的同步复制协议
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    }
    startup {
    }
    options {
    }
    disk {
        on-io-error detach;    #配置I/O错误处理策略为分离,添加这一行
    }
    net {
        cram-hmac-alg "sha1";  #drbd同步验证方式
        shared-secret "test";  #drbd同步密码信息
    }
    syncer {
        rate 1024M;            #设置主备节点同步时的网络速率,添加这个选项
    }
}
```

配置资源文件

```bash
vim /etc/drbd.d/test.res
```
```bash
resource test {
    protocol C;
    meta-disk internal;
    device /dev/drbd1;
    syncer {
        verify-alg sha1;
    }
    on node1 {
        disk /dev/vdb;
        address 172.16.0.3:7789;
    }
    on node2 {
        disk /dev/vdb;
        address 172.16.0.4:7789;
    }
}
```

初始化meta数据

```bash
drbdadm create-md test
```

启动和开启DRBD

```bash
systemctl start drbd
systemctl enable drbd
```

#### `node1节点执行`

```bash
drbdadm up test
drbdadm primary test   #如果遇到任何错误,执行:drbdadm primary test --force
```

#### `node2节点执行`

```bash
drbdadm up test
```

查看DRBD状态

```bash
cat /proc/drbd
```
```
version: 8.4.11-1 (api:1/proto:86-101)
GIT-hash: 66145a308421e9c124ec391a7848ac20203bb03c build by mockbuild@, 2018-11-03 01:26:55

 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:10557016 nr:8 dw:299576 dr:10266018 al:78 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
```

### 测试DRBD

格式化存储

```bash
mkfs.ext4 /dev/drbd1
```

挂载

```bash
mount /dev/drbd1 /mnt
```

创建测试数据

```bash
touch /mnt/f{1..5}
ls -l /mnt/
```
```
-rw-r--r-- 1 root root 0 Jul 22 08:59 f1
-rw-r--r-- 1 root root 0 Jul 22 08:59 f2
-rw-r--r-- 1 root root 0 Jul 22 08:59 f3
-rw-r--r-- 1 root root 0 Jul 22 08:59 f4
-rw-r--r-- 1 root root 0 Jul 22 08:59 f5
```

#### 交换主从

`node1执行`

```bash
umount /mnt
drbdadm secondary test
```

`node2执行`

```bash
drbdadm primary test
```

挂载

```bash
mount /dev/drbd1 /mnt
```

查看数据

```bash
ls -l /mnt
```
```
-rw-r--r-- 1 root root 0 Jul 22 08:59 f1
-rw-r--r-- 1 root root 0 Jul 22 08:59 f2
-rw-r--r-- 1 root root 0 Jul 22 08:59 f3
-rw-r--r-- 1 root root 0 Jul 22 08:59 f4
-rw-r--r-- 1 root root 0 Jul 22 08:59 f5
```

### 管理命令

查看资源的连接状态

```bash
drbdadm cstate resource_name   #resource_name为资源名
```
```
资源的连接状态;一个资源可能有以下连接状态中的一种
StandAlone 独立的:网络配置不可用;资源还没有被连接或是被管理断开(使用drbdadm disconnect命令),或是由于出现认证失败或是脑裂的情况
Disconnecting 断开:断开只是临时状态,下一个状态是StandAlone独立的
Unconnected 悬空:是尝试连接前的临时状态,可能的下一个状态为WFConnection和WFReportParams
Timeout 超时:与对等节点连接超时,也是临时状态,下一个状态为Unconnected悬空
BrokenPipe:与对等节点连接丢失,也是临时状态,下一个状态为Unconnected悬空
NetworkFailure:与对等节点连接丢失后的临时状态,下一个状态为Unconnected悬空
ProtocolError:与对等节点连接丢失后的临时状态,下一个状态为Unconnected悬空
TearDown 拆解:临时状态,对等节点关闭,下一个状态为Unconnected悬空
WFConnection:等待和对等节点建立网络连接
WFReportParams:已经建立TCP连接,本节点等待从对等节点传来的第一个网络包
Connected 连接:DRBD已经建立连接,数据镜像现在可用,节点处于正常状态
StartingSyncS:完全同步,由管理员发起的刚刚开始的同步,未来可能的状态为SyncSource或PausedSyncS
StartingSyncT:完全同步,由管理员发起的刚刚开始的同步,下一状态为WFSyncUUID
WFBitMapS:部分同步刚刚开始,下一步可能的状态为SyncSource或PausedSyncS
WFBitMapT:部分同步刚刚开始,下一步可能的状态为WFSyncUUID
WFSyncUUID:同步即将开始,下一步可能的状态为SyncTarget或PausedSyncT
SyncSource:以本节点为同步源的同步正在进行
SyncTarget:以本节点为同步目标的同步正在进行
PausedSyncS:本地节点是一个持续同步的源,但是目前同步已经暂停,可能是因为另外一个同步正在进行或是使用命令(drbdadm pause-sync)暂停了同步
PausedSyncT:本地节点为持续同步的目标,但是目前同步已经暂停,这可能是因为另外一个同步正在进行或是使用命令(drbdadm pause-sync)暂停了同步
VerifyS:以本地节点为验证源的线上设备验证正在执行
VerifyT:以本地节点为验证目标的线上设备验证正在执行
```

查看资源的角色

```bash
drbdadm role resource_name
```
```
Primary 主:资源目前为主,并且可能正在被读取或写入,如果不是双主模式,只会出现在两个节点中的其中一个节点上
Secondary 次:资源目前为次,正常接收对等节点的更新
Unknown 未知:资源角色目前未知,本地的资源不会出现这种状态
```

查看硬盘状态命令

```bash
drbdadm dstate resource_name
```
```
本地和对等节点的硬盘有可能为下列状态之一:
Diskless 无盘:本地没有块设备分配给DRBD使用,这表示没有可用的设备,或者使用drbdadm命令手工分离或是底层的I/O错误导致自动分离
Attaching:读取无数据时候的瞬间状态
Failed 失败:本地块设备报告I/O错误的下一个状态,其下一个状态为Diskless无盘
Negotiating:在已经连接的DRBD设置进行Attach读取无数据前的瞬间状态
Inconsistent:数据是不一致的,在两个节点上(初始的完全同步前)创建一个新的资源后会立即出现这种状态。此外,在同步期间,同步目标节点也会出现这种状态
Outdated:数据资源是一致的,但是已经过时
DUnknown:当对等节点网络连接不可用时出现这种状态
Consistent:一个没有连接的节点数据一致,当建立连接时,它决定数据是UpToDate或是Outdated
UpToDate:一致的最新的数据状态,这个状态为正常状态
```

启动、停止资源

```bash
drbdadm up resource_name     #启动资源
drbdadm down resource_name   #停止资源
```

升级和降级资源

```bash
drbdadm primary resource_name     #升级资源角色为主
drbdadm secondary resource_name   #降级资源角色为从
drbdadm -- --overwrite-data-of-peer primary resource_name   #同步资源
```

`注意:在单主模式下的DRBD,两个节点同时处于连接状态,任何一个节点都可以在特定的时间内变成主;但两个节点中只能一个为主,如果已经有一个主,需先降级才可能升级;在双主模式下没有这个限制`

**参考:**
`https://github.com/chenzhiwei/linux/tree/master/drbd`
`https://www.learnitguide.net/2016/07/how-to-install-and-configure-drbd-on-linux.html`
`http://yallalabs.com/linux/how-to-install-and-configure-drbd-cluster-on-rhel7-centos7/`
`https://wiki.centos.org/zh/HowTos/Ha-Drbd`
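三种复制协议的差别在于:写操作返回成功(ack)时,数据保证已经到达哪些位置。下面用一小段Python做概念示意(仅为简化模型,并非DRBD实现):

```python
# 示意:ack 返回时各协议保证数据已到达的位置(概念演示,非 DRBD 实现)
GUARANTEED_AT_ACK = {
    "A": {"primary_disk"},                   # 协议A:本地落盘即返回
    "B": {"primary_disk", "secondary_ram"},  # 协议B:数据已到对端内存
    "C": {"primary_disk", "secondary_disk"}, # 协议C:双方都已落盘
}

def acked_write_lost(protocol, unavailable):
    """已确认的写操作,在某些副本位置不可用后是否一份都不剩"""
    return not (GUARANTEED_AT_ACK[protocol] - unavailable)

# 主节点整机损坏并切换到从节点:协议A可能丢已确认的数据
print(acked_write_lost("A", {"primary_disk"}))                   # True
# 主节点损坏叠加双机掉电(对端内存副本丢失):协议B也可能丢数据
print(acked_write_lost("B", {"primary_disk", "secondary_ram"}))  # True
# 同样场景下协议C仍有对端磁盘副本
print(acked_write_lost("C", {"primary_disk", "secondary_ram"}))  # False
```

这也解释了正文的结论:生产环境一般选协议C,用一定的写时延换取最强的持久性保证。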

兜兜    2018-07-22 10:26:13    2019-11-14 14:34:38   

数据库 mysql 集群
### 准备工作

**所有节点:**
- **系统:** `CentOS 7.6`
- **硬件配置:** `1核1G+2GSwap`

**管理节点:**
- IP/主机:`192.168.10.3(db1)`

**数据节点:**
- IP/主机:`192.168.10.4(db2)/192.168.10.5(db3)`

**SQL节点:**
- IP/主机:`192.168.10.6(db4)/192.168.10.7(db5)`

### 初始化工作

#### `db1执行`

配置hosts文件

```bash
vim /etc/hosts
```
```ini
192.168.10.3 db1
192.168.10.4 db2
192.168.10.5 db3
192.168.10.6 db4
192.168.10.7 db5
```

安装ansible

```bash
yum install -y ansible
```

配置ansible

```bash
vim /etc/ansible/hosts
```
```ini
[db_nodes]
db2
db3

[sql_nodes]
db4
db5
```

创建公私密钥对

```bash
ssh-keygen -t rsa
```

拷贝公钥到db2-5

```bash
ssh-copy-id -i ~/.ssh/id_rsa.pub root@dbX   #拷贝db1的公钥到dbX,X为2-5
```

ping所有被管理主机

```bash
ansible all -m ping
```
```py
db2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
(db3、db4、db5输出相同,略)
```

关闭db1防火墙

```bash
systemctl stop firewalld
systemctl disable firewalld
```

ansible配置db2-5的hosts文件

```bash
ansible all -m copy -a "src=/etc/hosts dest=/etc/hosts"
```

ansible关闭db2-5的防火墙

```bash
ansible all -m shell -a "systemctl stop firewalld&&systemctl disable firewalld"
```

### 安装管理节点(db1)

#### 下载MySQL集群相关软件

```bash
cd ~
wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.4/MySQL-Cluster-gpl-7.4.10-1.el7.x86_64.rpm-bundle.tar
tar xvf MySQL-Cluster-gpl-7.4.10-1.el7.x86_64.rpm-bundle.tar
```

#### 安装和移除软件包

```bash
yum -y install perl-Data-Dumper libaio-devel
yum -y remove mariadb-libs
```

#### 安装MySQL集群相关软件包

```bash
cd ~
rpm -Uvh MySQL-Cluster-client-gpl-7.4.10-1.el7.x86_64.rpm
rpm -Uvh MySQL-Cluster-server-gpl-7.4.10-1.el7.x86_64.rpm
rpm -Uvh MySQL-Cluster-shared-gpl-7.4.10-1.el7.x86_64.rpm
```

#### 配置管理节点

创建配置目录

```bash
mkdir -p /var/lib/mysql-cluster
```

创建配置文件

```bash
cd /var/lib/mysql-cluster
vi config.ini
```
```ini
[ndb_mgmd default]
# Directory for MGM node log files
DataDir=/var/lib/mysql-cluster

[ndb_mgmd]
# Management Node db1
HostName=db1

[ndbd default]
NoOfReplicas=2      # Number of replicas
DataMemory=256M     # Memory allocate for data storage
IndexMemory=256M    # Memory allocate for index storage
# Directory for Data Node
DataDir=/var/lib/mysql-cluster

[ndbd]
# Data Node db2
HostName=db2

[ndbd]
# Data Node db3
HostName=db3

[mysqld]
# SQL Node db4
HostName=db4

[mysqld]
# SQL Node db5
HostName=db5
```

#### 启动管理节点

```bash
ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini
```
```
MySQL Cluster Management Server mysql-5.6.28 ndb-7.4.10
2019-07-22 02:00:08 [MgmtSrvr] INFO     -- The default config directory '/usr/mysql-cluster' does not exist. Trying to create it...
2019-07-22 02:00:08 [MgmtSrvr] INFO     -- Successfully created config directory
```

#### 查看节点信息

```bash
ndb_mgm
```
```sql
ndb_mgm> show
```
```
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from db2)   #因为还未搭建,显示未连接
id=3 (not connected, accepting connect from db3)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.10.3  (mysql-5.6.28 ndb-7.4.10)

[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from db4)
id=5 (not connected, accepting connect from db5)
```

### 安装数据节点(db2-3)

#### `db1执行`

拷贝安装包到db2-3并解压

```bash
cd ~
ansible db_nodes -m copy -a "src=./MySQL-Cluster-gpl-7.4.10-1.el7.x86_64.rpm-bundle.tar dest=/root"
ansible db_nodes -m shell -a "cd /root;tar xvf MySQL-Cluster-gpl-7.4.10-1.el7.x86_64.rpm-bundle.tar"
```

安装和移除软件包

```bash
ansible db_nodes -m shell -a 'yum -y install perl-Data-Dumper libaio-devel'
ansible db_nodes -m shell -a 'yum -y remove mariadb-libs'
```

安装MySQL集群相关软件包

```bash
ansible db_nodes -m shell -a 'cd /root;rpm -Uvh MySQL-Cluster-client-gpl-7.4.10-1.el7.x86_64.rpm&&rpm -Uvh MySQL-Cluster-server-gpl-7.4.10-1.el7.x86_64.rpm&&rpm -Uvh MySQL-Cluster-shared-gpl-7.4.10-1.el7.x86_64.rpm'
```

#### 配置数据节点(db2-3)

#### `db2-3执行`

```bash
vim /etc/my.cnf
```
```ini
[mysqld]
ndbcluster
ndb-connectstring=db1   # IP address of Management Node

[mysql_cluster]
ndb-connectstring=db1   # IP address of Management Node
```

创建数据目录

```bash
mkdir -p /var/lib/mysql-cluster
```

启动数据节点

```bash
ndbd
```
```
2019-07-22 02:02:15 [ndbd] INFO     -- Angel connected to 'db1:1186'
2019-07-22 02:02:15 [ndbd] INFO     -- Angel allocated nodeid: 3
```

### 安装SQL节点(db4-5)

#### `db1执行`

拷贝安装包到db4-5并解压

```bash
cd ~
ansible sql_nodes -m copy -a "src=./MySQL-Cluster-gpl-7.4.10-1.el7.x86_64.rpm-bundle.tar dest=/root"
ansible sql_nodes -m shell -a "cd /root;tar xvf MySQL-Cluster-gpl-7.4.10-1.el7.x86_64.rpm-bundle.tar"
```

安装和移除软件包

```bash
ansible sql_nodes -m shell -a 'yum -y install perl-Data-Dumper libaio-devel'
ansible sql_nodes -m shell -a 'yum -y remove mariadb-libs'
```

安装MySQL集群相关软件包

```bash
ansible sql_nodes -m shell -a 'cd /root;rpm -Uvh MySQL-Cluster-client-gpl-7.4.10-1.el7.x86_64.rpm&&rpm -Uvh MySQL-Cluster-server-gpl-7.4.10-1.el7.x86_64.rpm&&rpm -Uvh MySQL-Cluster-shared-gpl-7.4.10-1.el7.x86_64.rpm'
```

#### 配置SQL节点(db4-5)

#### `db4-5执行`

```bash
vim /etc/my.cnf
```
```ini
[mysqld]
ndbcluster
ndb-connectstring=db1               # IP address for server management node
default_storage_engine=ndbcluster   # Define default Storage Engine used by MySQL

[mysql_cluster]
ndb-connectstring=db1               # IP address for server management node
```

启动SQL节点

```bash
systemctl start mysql
```

查看初始密码

```bash
cd ~
cat .mysql_secret
```
```
# The random password set for the root user at Fri Jul 19 09:50:43 2019 (local time): 9uGYuWofEZpg8EzC
```

数据库安全加固

```bash
mysql_secure_installation
```

连接数据库

```bash
mysql -u root -p
```

创建远程用户

```sql
mysql> create user 'root'@'%' identified by '123456';
mysql> flush privileges;
```

### 监控集群

#### `db1执行`

查看集群节点

```bash
ndb_mgm
```
```sql
ndb_mgm> show
```
```
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.10.4  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 0, *)
id=3    @192.168.10.5  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.10.3  (mysql-5.6.28 ndb-7.4.10)

[mysqld(API)]   2 node(s)
id=4    @192.168.10.6  (mysql-5.6.28 ndb-7.4.10)
id=5    @192.168.10.7  (mysql-5.6.28 ndb-7.4.10)
```

查看集群信息

```bash
ndb_mgm -e "all status"          #查看集群状态
ndb_mgm -e "all report memory"   #查看集群内存
```

### 测试集群

#### `db4执行`

创建测试数据

```sql
mysql> create database d1;
mysql> use d1;
mysql> CREATE TABLE t1 (
         id int(11) NOT NULL AUTO_INCREMENT,
         name varchar(10) DEFAULT NULL,
         PRIMARY KEY (id)
       ) ENGINE=ndbcluster DEFAULT CHARSET=latin1;
```
```sql
mysql> insert into t1(name) values('ynotes.cn');
```
```sql
mysql> select * from t1;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | ynotes.cn |
+----+-----------+
1 row in set (0.00 sec)
```

#### `db5执行`

查询数据

```sql
mysql> use d1
```
```sql
mysql> select * from t1;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | ynotes.cn |
+----+-----------+
```

`总结:通过测试发现,数据已经同步到db5节点,MySQL集群搭建成功!`

##### **参考**:`https://www.howtoforge.com/tutorial/how-to-install-and-configure-mysql-cluster-on-centos-7/`
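config.ini中`[ndbd]`、`[mysqld]`等段落会按节点数量重复出现,标准configparser不支持重复段名。下面的Python片段示意如何用正则粗略统计各类节点数(配置内容为上文的简化版):

```python
import re
from collections import Counter

# 示意:统计 MySQL Cluster config.ini 中各类节点段落的数量
# (同名段落会重复出现,标准 configparser 无法直接解析这种格式)
config_text = """\
[ndb_mgmd default]
DataDir=/var/lib/mysql-cluster
[ndb_mgmd]
HostName=db1
[ndbd default]
NoOfReplicas=2
[ndbd]
HostName=db2
[ndbd]
HostName=db3
[mysqld]
HostName=db4
[mysqld]
HostName=db5
"""

# 只统计具体节点段落,形如 [ndbd default] 的默认段不会被该正则匹配
sections = re.findall(r'^\[([a-z_]+)\]$', config_text, flags=re.M)
counts = Counter(sections)
print(dict(sorted(counts.items())))  # {'mysqld': 2, 'ndb_mgmd': 1, 'ndbd': 2}
```

统计结果应与`ndb_mgm> show`里各类节点的node(s)数量一致,可用来在启动前快速核对配置。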

兜兜    2018-07-17 20:19:54    2018-07-17 20:19:54   

docker 容器 docker-compose pycharm
![](https://files.ynotes.cn/18-7-23/70377481.jpg)

#### 一、环境:

```bash
windows7:
    Pycharm professional 2018.1
    Docker Compose 0.14.0
Centos7(192.168.50.252):
    docker 18.03.1-ce
```

#### 二、开发部署流程:

```bash
github拉取代码->pycharm
pycharm修改代码
pycharm同步代码到docker主机(自动同步)
pycharm通过docker-compose远程调用docker主机启动项目
push代码到github(测试通过)
```

#### 三、pycharm配置docker

##### 3.1.配置pycharm调用远程docker参数

a.远程docker开启TCP监听的配置(centos7,IP:192.168.50.252)

```bash
$ vim /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock

$ systemctl daemon-reload
$ systemctl restart docker
```

b.pycharm配置docker API

```
File->Settings->Build,Execution,Deployment->Docker->TCP socket
Engine API URL:tcp://192.168.50.252:2376
```

##### 3.2.配置pycharm的docker-compose和docker-machine路径

pycharm所在机器为windows7,安装Docker Toolbox,文档参考 https://docs.docker.com/toolbox/toolbox_install_windows/

```
File->Settings->Build,Execution,Deployment->Docker->Tools->
Docker Machine executable:C:\Program Files\Docker Toolbox\docker-machine.exe
Docker Compose executable:C:\Program Files\Docker Toolbox\docker-compose.exe
```

##### 3.3.代码deployment配置

```
Tools->Deployment->Configuration->Connection
    Type:SFTP
    SFTP host:192.168.50.252
    Port:22
    Root path:/
    User name:root
    Password:*******
Tools->Deployment->Configuration->Mappings
    Local path:C:\Users\Administrator.GZLX-20180416SV\PycharmProjects\blog
    Deployment path on Server '192.168.50.252':/c/Users/Administrator.GZLX-20180416SV/PycharmProjects/blog
```

#### 四、pycharm拉取github上的blog代码

```
pycharm打开的时候选择Check out from Version Control->Git
```

#### 五、pycharm通过deployment同步blog代码到远程docker主机上

```
project->右击项目->Deployment->Upload to 192.168.50.252
```

#### 六、修改同步到docker主机部分目录的可写权限(可选项,结合自己的项目)

```bash
$ chmod 777 /c/Users/Administrator.GZLX-20180416SV/PycharmProjects/blog/blog/uwsgi-django/my_project/my_project/upload
$ chmod 777 /c/Users/Administrator.GZLX-20180416SV/PycharmProjects/blog/blog/uwsgi-django/my_project/my_project/upload/profile_images
$ chmod 777 /c/Users/Administrator.GZLX-20180416SV/PycharmProjects/blog/blog/mysql/log
```

#### 七、pycharm运行项目的docker-compose

```
右击项目的docker-compose.yml文件,选择运行 Run 'blog/docker-compose.yml'
```

#### 八、访问部署成功的项目web页面

https://blog.itisme.co/
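3.1步中docker暴露了`tcp://0.0.0.0:2376`,在pycharm里配置前可以先确认端口可达。下面是一段简单的Python检查示意(IP/端口按实际环境修改;另外注意该监听未启用TLS,仅建议在可信内网使用):

```python
# 示意:检查远程 Docker API 端口是否可达(主机/端口为示例值)
import socket

def tcp_port_open(host, port, timeout=2):
    """尝试建立 TCP 连接,成功返回 True,失败返回 False"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 远程 docker 主机示例,按实际环境修改
print(tcp_port_open("192.168.50.252", 2376))
```

如果返回False,先排查docker.service的override.conf是否生效(`systemctl show docker -p ExecStart`)以及防火墙规则。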

兜兜    2018-07-14 20:43:16    2018-07-14 20:43:16   

docker 容器 docker-compose 容器编排
![](https://files.ynotes.cn/18-7-23/70377481.jpg)

#### **项目目录结构**

```bash
blog
|-- docker-compose.yml          #docker-compose编排文件
|-- mysql
|   |-- conf
|   |   `-- mysqld.cnf          #mysql配置文件
|   |-- data                    #mysql数据存放目录
|   |-- db_init_sql
|   |   `-- blog.sql            #blog的sql数据表结构
|   `-- log                     #mysql日志目录
|-- nginx
|   |-- conf
|   |   |-- mysite.template     #生成blog.itisme.co.conf配置的样例文件
|   |   `-- nginx.conf          #nginx配置文件
|   |-- log                     #nginx日志目录
|   `-- ssl
|       |-- fullchain.pem       #ssl证书链
|       `-- privkey.pem         #ssl证书私钥
`-- uwsgi-django
    |-- build
    |   |-- Dockerfile          #构建uwsgi-django镜像的文件
    |   `-- requirements.txt    #django项目需要的安装包
    |-- conf
    |   `-- config.ini          #uwsgi启动配置参数
    `-- my_project              #django项目
        |-- blog                #blog应用
        |-- manage.py
        `-- my_project
```

#### **新建docker项目数据配置存放目录**

```bash
$ mkdir -p /data/docker_project/blog
$ cd /data/docker_project/blog
```

#### **创建mysql容器使用的目录**

```bash
$ mkdir -p mysql/{conf,data,db_init_sql,log}
$ chmod 777 mysql/log
```

conf:存放mysql配置文件
data:存放mysql数据的目录
db_init_sql:存放的是mysql容器初始化的sql(存放django项目的建表语句)
log:存放mysql日志,修改权限为777

#### **编辑mysql配置文件mysql/conf/mysqld.cnf(项目使用emoji表情,编码使用的是utf8mb4)**

```ini
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
default-time-zone = '+08:00'
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
character-set-client-handshake = FALSE
innodb_buffer_pool_size = 128M
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[client]
default-character-set = utf8mb4

[mysql]
default-character-set = utf8mb4
```

#### **创建uwsgi-django容器使用的目录**

```bash
$ mkdir -p uwsgi-django/{build,my_project,conf}
```

my_project:存放django项目
build:存放构建镜像uwsgi-django需要的文件
conf:存放uwsgi启动的配置文件

#### **创建uwsgi-django的Dockerfile文件(用于构建uwsgi-django镜像)**

```bash
$ vim uwsgi-django/build/Dockerfile
```
```dockerfile
FROM python:2.7-slim

RUN apt-get update && apt-get install -y \
        gcc \
        gettext \
        mysql-client default-libmysqlclient-dev \
        libpq-dev \
        --no-install-recommends && rm -rf /var/lib/apt/lists/*

ENV DJANGO_VERSION 1.9.5

RUN pip install mysqlclient psycopg2 uwsgi django=="$DJANGO_VERSION"

WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt

WORKDIR /usr/src/app/my_project
```

#### **导出项目的requirements.txt(在项目所在的环境执行)**

```bash
$ pip freeze > /root/requirements.txt
```

#### **拷贝requirements.txt到docker-compose编排所在机器的uwsgi-django/build目录**

```bash
$ scp /root/requirements.txt root@docker-host:/data/docker_project/blog/uwsgi-django/build
```

#### **增加uwsgi程序的配置文件uwsgi-django/conf/config.ini**

```ini
[uwsgi]
socket = 0.0.0.0:8888
chdir = /usr/src/app/my_project
module = my_project.wsgi
master = true
processes = 10
socket = /tmp/my_project.sock
vacuum = true
uid = 498
```

#### **创建nginx容器使用的目录**

```bash
$ mkdir nginx/{conf,ssl,log}
$ chmod 777 nginx/log
```

conf:存放nginx的配置文件
ssl:存放SSL证书目录
log:存放日志目录

#### **编辑nginx/conf/nginx.conf**

```nginx
user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    use epoll;
    worker_connections 10240;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    include /etc/nginx/conf.d/blog.itisme.co.conf;
}
```

#### **编辑nginx/conf/mysite.template(生成blog.itisme.co.conf的模板文件,通过envsubst替换环境变量)**

```nginx
upstream uwsgi-django {
    server uwsgi-django:$UWSGI_PORT;   # for a web port socket (we'll use this first)
}

server {
    listen $NGINX_PORT;
    server_name $NGINX_HOST;
    charset utf-8;
    rewrite ^(.*)$ https://${server_name}$1 permanent;
}

server {
    listen $NGINX_SSL_PORT ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name $NGINX_HOST;

    #证书配置
    ssl_certificate "/etc/nginx/ssl/blog.itisme.co/fullchain.pem";
    ssl_certificate_key "/etc/nginx/ssl/blog.itisme.co/privkey.pem";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
    ssl_prefer_server_ciphers on;

    # max upload size
    client_max_body_size 75M;   # adjust to taste

    #django项目的上传文件的目录
    location /upload {
        alias /data/app/my_project/my_project/upload;   # your Django project's media files - amend as required
    }
    #django项目的静态文件目录
    location /static {
        alias /data/app/my_project/my_project/static_all;
    }
    #django项目uwsgi配置
    location / {
        uwsgi_pass uwsgi-django;
        include /data/app/my_project/my_project/uwsgi_params;   # the uwsgi_params file you installed
    }

    error_page 404 /404.html;
    location = /40x.html {
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
```

#### **拷贝证书到nginx/ssl目录**

```bash
$ scp fullchain.pem root@docker-host:/data/docker_project/blog/nginx/ssl
$ scp privkey.pem root@docker-host:/data/docker_project/blog/nginx/ssl
```

#### **编辑docker-compose.yml (客户端->nginx->uwsgi->django->db)**

```bash
$ vim docker-compose.yml
```
```yaml
version: '3'
services:
  db:
    image: mysql:5.7
    restart: always
    container_name: blog-db
    environment:
      MYSQL_ROOT_PASSWORD: 123456
      MYSQL_DATABASE: blog
      MYSQL_USER: blog
      MYSQL_PASSWORD: 123456
    volumes:
      #挂载mysql配置文件
      - ./mysql/conf/mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
      #挂载数据库初始化脚本
      - ./mysql/db_init_sql:/docker-entrypoint-initdb.d
      #挂载mysql数据目录
      - ./mysql/data:/var/lib/mysql
      #挂载mysql日志目录
      - ./mysql/log:/var/log
  uwsgi-django:
    #使用./uwsgi-django/build/目录下的Dockerfile构建镜像
    build: ./uwsgi-django/build/
    #当存在build时,image参数表示的是构建后的镜像名
    image: uwsgi-django:1.9.5
    restart: always
    depends_on:
      - db
    container_name: blog-uwsgi-django
    environment:
      DB_NAME: blog
      DB_USER: blog
      DB_PASS: 123456
      DB_PORT: 3306
      WEB_URL: blog.itisme.co
    volumes:
      #挂载django项目
      - ./uwsgi-django/my_project:/usr/src/app/my_project
      #挂载uwsgi配置文件
      - ./uwsgi-django/conf:/usr/src/app/uwsgi/conf
    command: uwsgi /usr/src/app/uwsgi/conf/config.ini
  nginx:
    #使用nginx官方稳定版镜像
    image: nginx:stable
    restart: always
depends_on: - uwsgi-django container_name: blog-nginx environment: NGINX_HOST: blog.itisme.co NGINX_PORT: 80 NGINX_SSL_PORT: 443 UWSGI_PORT: 8888 ports: - 80:80 - 443:443 volumes: #挂载nginx配置 - ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf #挂载站点模板文件 - ./nginx/conf/mysite.template:/etc/nginx/conf.d/mysite.template #挂载ssl证书 - ./nginx/ssl/fullchain.pem:/etc/nginx/ssl/blog.itisme.co/fullchain.pem - ./nginx/ssl/privkey.pem:/etc/nginx/ssl/blog.itisme.co/privkey.pem #挂载项目静态文件 - ./uwsgi-django/my_project/my_project/upload:/data/app/my_project/my_project/upload - ./uwsgi-django/my_project/my_project/static_all:/data/app/my_project/my_project/static_all #挂载uwsgi参数文件 - ./uwsgi-django/my_project/my_project/uwsgi_params:/data/app/my_project/my_project/uwsgi_params - ./nginx/log/:/var/log/nginx/ #envsubst替换/etc/nginx/conf.d/mysite.template变量 command: /bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/blog.itisme.co.conf && nginx -g 'daemon off;'" ``` #### **修改django项目的mysql配置(修改项目里的mysql配置成上面的environment中指定的环境变量)** ```bash import os _env = os.environ #django指定可以访问的域 DOMAIN = _env['WEB_URL'] DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': _env['DB_NAME'], 'HOST': 'db', 'PORT': _env['DB_PORT'], 'USER': _env['DB_USER'], 'PASSWORD': _env['DB_PASS'], 'OPTIONS': {'charset':'utf8mb4'}, } } ``` #### **启动项目** ```bash $ docker-compose up ``` #### **启动项目后台运行** ```bash $ docker-compose up -d ``` #### **查看docker进程** ```bash $ docker-compose ps ``` ``` Name Command State Ports ---------------------------------------------------------------------- blog-db docker-entrypoint.sh mysqld Up 3306/tcp blog-nginx /bin/bash -c envsubst < /e ... Up 0.0.0.0:443->443/tcp, 0.0.0.0:8880->80/tcp blog-uwsgi-django uwsgi /usr/src/app/uwsgi/c ... Up ``` #### **浏览器访问https://ynotes.cn/blog/**
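The `envsubst` step in the nginx service's `command` is plain textual substitution: each `$VAR` in the template is replaced by the value from the container's environment. A minimal Python sketch of what it does, assuming the values from the compose file above (`string.Template` uses the same `$VAR` syntax):

```python
from string import Template

# variables set in the nginx service's "environment" section
env = {
    "UWSGI_PORT": "8888",
    "NGINX_PORT": "80",
    "NGINX_SSL_PORT": "443",
    "NGINX_HOST": "blog.itisme.co",
}

# a fragment of mysite.template
template = (
    "upstream uwsgi-django { server uwsgi-django:$UWSGI_PORT; }\n"
    "server { listen $NGINX_PORT; server_name $NGINX_HOST; }"
)

# envsubst-style substitution: every $VAR becomes its environment value
rendered = Template(template).substitute(env)
print(rendered)
```

Note why the substitution must be scoped to these four variables: the rendered file is an nginx config full of runtime variables such as `$server_name` and `$remote_addr`, and an unrestricted `envsubst` would replace any of them that are absent from the environment with empty strings.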

兜兜    2018-07-13 19:03:51    2018-07-13 19:03:51   

docker, containers, docker-compose, container orchestration
![](https://files.ynotes.cn/18-7-23/70377481.jpg)

#### **Project directory structure**

```bash
test/
|-- docker-compose.yml
|-- env
|   |-- db.env                      # environment variables for the db container
|   |-- project.env                 # environment variables for the tomcat container
|   `-- tomcat.env                  # environment variables for the tomcat container
|-- mysql
|   |-- conf
|   |   `-- mysqld.cnf              # MySQL configuration file
|   |-- data                        # MySQL data directory
|   |-- db_init_sql
|   |   `-- competitionShare.sql    # table schema for the tomcat project
|   `-- log                         # MySQL log files
`-- tomcat
    |-- log                         # tomcat logs
    `-- webapps                     # tomcat webapp directory
        `-- tomcat_project
```

#### **Create the project data/config directory**

```bash
$ mkdir /data/docker_project/test -p
$ cd /data/docker_project/test
```

#### **Create the directories used by the mysql container**

```bash
$ mkdir mysql/{conf,data,db_init_sql,log} -p
$ chmod 777 mysql/log
```

conf: MySQL configuration files
data: MySQL data directory
db_init_sql: SQL run when the mysql container initializes (e.g. table definitions)
log: MySQL logs

#### **Edit the MySQL config file mysql/conf/mysqld.cnf**

```bash
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mysqld.log
collation-server=utf8_general_ci
pid-file=/var/run/mysqld/mysqld.pid
default-time-zone = '+08:00'
character-set-server=utf8
innodb_buffer_pool_size = 1024M
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[client]
default-character-set=utf8

[mysql]
default-character-set=utf8
```

#### **Create the directories used by the tomcat container**

```bash
$ mkdir tomcat/{log,webapps} -p
```

log: tomcat logs
webapps: directory holding the tomcat project

#### **Edit docker-compose.yml**

```yaml
version: '3'
services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 123456
      MYSQL_DATABASE: db_name
      MYSQL_USER: test
      MYSQL_PASSWORD: 123456
    ports:
      - 3306:3306
    volumes:
      - ./mysql/conf/mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
      - ./mysql/db_init_sql:/docker-entrypoint-initdb.d
      - ./mysql/data:/var/lib/mysql
      - ./mysql/log:/var/log
  tomcat:
    image: tomcat:8.0.53-jre8
    restart: always
    depends_on:
      - db
    container_name: test-tomcat
    env_file:
      - ./env/project.env
    environment:
      JDBC_URL: jdbc:mysql://db:3306/db_name?characterEncoding=UTF-8
      JDBC_USER: test
      JDBC_PASS: 123456
    ports:
      - 8787:8080
    volumes:
      - ./tomcat/webapps:/usr/local/tomcat/webapps
      - ./tomcat/log:/log
```

#### **Point the tomcat project's MySQL settings at the environment variables defined above**

```bash
jdbc_url=${JDBC_URL}
jdbc_username=${JDBC_USER}
jdbc_password=${JDBC_PASS}
```

#### **Add the environment file project.env (tomcat startup parameters)**

```bash
JAVA_OPTS="-Dsupplements.host=supplements"
CATALINA_OPTS=-server -Xms256M -Xmx1024M -XX:MaxNewSize=256m -XX:PermSize=64M -XX:MaxPermSize=256m
```

#### **Start the project**

```bash
$ docker-compose up
```

#### **Start the project in the background**

```bash
$ docker-compose up -d
```
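An `env_file` entry is a list of plain `KEY=VALUE` lines: docker-compose performs no shell expansion on them, and values are taken largely verbatim. A rough Python sketch of that parsing (simplified and only illustrative; the real parser handles more edge cases):

```python
def parse_env_file(text):
    """Parse KEY=VALUE lines roughly the way docker-compose reads env_file:
    blank lines and #-comments are skipped, values are taken verbatim."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, sep, value = line.partition("=")
        if sep:  # ignore malformed lines without '='
            env[key.strip()] = value
    return env

# the project.env content from above
project_env = (
    'JAVA_OPTS="-Dsupplements.host=supplements"\n'
    "CATALINA_OPTS=-server -Xms256M -Xmx1024M -XX:MaxNewSize=256m"
)
env = parse_env_file(project_env)
```

This is also why `CATALINA_OPTS` can contain unquoted spaces in the env file: the whole remainder of the line after the first `=` is the value.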

兜兜    2018-07-10 11:41:52    2019-11-14 14:34:32   

Keepalived, high availability, LVS, load balancing
### LVS+NAT

**`With network address translation, the director rewrites the destination address of request packets and dispatches them to a back-end RS according to the configured scheduling algorithm; when the RS's reply passes back through the director, its source address is rewritten before it is returned to the client, completing the load-balancing cycle.`**

#### Environment

```bash
LVS host:    192.168.50.253
Real Server: 192.168.50.251/192.168.50.252
Mode:        NAT
```

#### **Director configuration**

#### Install ipvsadm

```bash
yum install ipvsadm -y
```

#### Enable IPv4 forwarding

```bash
sysctl -w net.ipv4.ip_forward=1
```

#### Disable selinux and firewalld, flush iptables

```bash
setenforce 0
systemctl stop firewalld
iptables -F
```

#### Configure ipvsadm

```bash
ipvsadm -A -t 192.168.50.253:80 -s rr
ipvsadm -a -t 192.168.50.253:80 -r 192.168.50.251:80 -m
ipvsadm -a -t 192.168.50.253:80 -r 192.168.50.252:80 -m
ipvsadm -S
# -A  add a virtual service
# -a  add a real server to a virtual service
# -S  save the rules
# -s  choose the scheduling method
# rr  round-robin scheduling
# -m  masquerading (NAT)
```

#### **RS configuration**

Install a web server

```bash
yum install nginx -y
```

Change the default gateway

```bash
vim /etc/sysconfig/network-scripts/ifcfg-enp0s3
```

```ini
GATEWAY0=192.168.50.253
```

#### Test (from an external machine)

`Note: test from outside the subnet. If a client on the same subnet hits 192.168.50.253 directly, LVS only rewrites the destination to the RS; the RS sees the client on its own subnet and replies directly, bypassing the director's SNAT, so the client drops the reply because its source address does not match the destination the client sent to.`

Add a port mapping on the router: 113.119.xx.xx:8999 -> 192.168.50.253:80

```bash
curl http://113.119.xx.xx:8999/
```

&emsp;

### LVS+DR (`without a VIP`)

**`VS/DR rewrites the MAC address of request packets to deliver them to the RS, and the RS replies to the client directly. Like VS/TUN, it greatly improves cluster scalability; it avoids the overhead of IP tunneling and does not require the RSes to support a tunneling protocol, but the director and the RSes must be connected to the same physical network segment.`**

#### Environment

```bash
LVS host:    192.168.50.253  08:00:27:e6:f4:0a
Real Server: 192.168.50.251  08:00:27:8a:58:c1
             192.168.50.252  08:00:27:29:31:d8
Mode:        DR
```

#### **Director configuration**

#### Install ipvsadm

```bash
yum install ipvsadm -y
```

#### Enable IPv4 forwarding

```bash
sysctl -w net.ipv4.ip_forward=1
```

#### Disable selinux and firewalld, flush iptables

```bash
setenforce 0
systemctl stop firewalld
iptables -F
```

#### Configure ipvsadm

```bash
ipvsadm -A -t 192.168.50.253:8080 -s rr   # the virtual service port must match the real service port
ipvsadm -a -t 192.168.50.253:8080 -r 192.168.50.251:8080 -g -w 1
ipvsadm -a -t 192.168.50.253:8080 -r 192.168.50.252:8080 -g -w 1
ipvsadm -S
# -A  add a virtual service
# -a  add a real server to a virtual service
# -S  save the rules
# -s  choose the scheduling method
# -g  direct routing (DR) mode
# rr  round-robin scheduling
```

Statically bind the RS MAC addresses

`The director uses the real IP 192.168.50.253 here, and each RS carries 192.168.50.253 on its loopback, so the RSes will not answer ARP queries for it; by contrast, with a proper VIP the loopback holds the VIP instead, and the RSes would answer ARP requests for 192.168.50.253.`

```bash
arp -s 192.168.50.251 08:00:27:8a:58:c1
arp -s 192.168.50.252 08:00:27:29:31:d8
```

#### **RS configuration**

Start a web service

```bash
python -m SimpleHTTPServer 8080
```

Tune kernel parameters

```bash
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
```

Add 192.168.50.253 to a loopback alias

```bash
ifconfig lo:0 192.168.50.253 netmask 255.255.255.255 broadcast 192.168.50.255
```

Add a route (so that traffic addressed to 192.168.50.253 is answered with 192.168.50.253 as the source)

```bash
route add -host 192.168.50.253 dev lo:0
```

&emsp;

### LVS+DR (`with a VIP`)

#### Environment

```bash
VIP:         192.168.50.240
LVS host:    192.168.50.253  08:00:27:e6:f4:0a
Real Server: 192.168.50.251  08:00:27:8a:58:c1
             192.168.50.252  08:00:27:29:31:d8
Mode:        DR
```

#### **Director configuration**

#### Install ipvsadm

```bash
yum install ipvsadm -y
```

#### Enable IPv4 forwarding

```bash
sysctl -w net.ipv4.ip_forward=1
```

#### Disable selinux and firewalld, flush iptables

```bash
setenforce 0
systemctl stop firewalld
iptables -F
```

#### Configure the VIP

```bash
ifconfig eth0:0 192.168.50.240 netmask 255.255.255.255 broadcast 192.168.50.255
```

#### Configure ipvsadm

```bash
ipvsadm -A -t 192.168.50.240:8080 -s rr   # the virtual service port must match the real service port
ipvsadm -a -t 192.168.50.240:8080 -r 192.168.50.251:8080 -g -w 1
ipvsadm -a -t 192.168.50.240:8080 -r 192.168.50.252:8080 -g -w 1
ipvsadm -S
# -A  add a virtual service
# -a  add a real server to a virtual service
# -S  save the rules
# -s  choose the scheduling method
# -g  direct routing (DR) mode
# rr  round-robin scheduling
```

#### **RS configuration**

Start a web service

```bash
python -m SimpleHTTPServer 8080
```

Tune kernel parameters

```bash
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
```

Add the VIP

```bash
ifconfig lo:0 192.168.50.240 netmask 255.255.255.255 broadcast 192.168.50.255
```

Add a route (so that traffic addressed to 192.168.50.240 is answered with 192.168.50.240 as the source)

```bash
route add -host 192.168.50.240 dev lo:0
```
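The `rr` scheduler used in all the `ipvsadm` rules above simply hands each new connection to the next real server in the list. A toy Python sketch of that behavior (illustration only, not the kernel implementation):

```python
from itertools import cycle

# the two real servers registered with "ipvsadm -a ... -g -w 1"
real_servers = ["192.168.50.251:8080", "192.168.50.252:8080"]

# rr: endless alternation over the RS list
scheduler = cycle(real_servers)

# the first four "connections" are spread evenly across the two RSes
picks = [next(scheduler) for _ in range(4)]
print(picks)
```

The `-w 1` weights above make every RS equal; with `-s wrr` (weighted round-robin) a server with weight 2 would appear twice as often in this cycle.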
&emsp;

### LVS+TUNNEL (across internal subnets)

#### Environment

```bash
Client:         192.168.10.3
Router (Linux): 192.168.10.4 / 192.168.20.4
VIP:            192.168.10.100
LVS host:       192.168.10.5
Real Server:    192.168.20.3
Mode:           TUNNEL
```

&emsp;

#### **Director configuration**

Configure the VIP

```bash
ifconfig eth1:1 192.168.10.100 netmask 255.255.255.255 broadcast 192.168.10.100 up
route add -host 192.168.10.100 dev eth1:1
```

Tune kernel parameters

```bash
echo "0" >/proc/sys/net/ipv4/ip_forward
echo "1" >/proc/sys/net/ipv4/conf/all/send_redirects
echo "1" >/proc/sys/net/ipv4/conf/default/send_redirects
echo "1" >/proc/sys/net/ipv4/conf/eth1/send_redirects
```

Configure LVS

```bash
ipvsadm -C
ipvsadm -A -t 192.168.10.100:8080 -s rr
ipvsadm -a -t 192.168.10.100:8080 -r 192.168.20.3 -i
```

&emsp;

#### **RS configuration**

Set up the IPIP tunnel

```bash
modprobe ipip
ifconfig tunl0 192.168.10.100 netmask 255.255.255.255 broadcast 192.168.10.100 up
route add -host 192.168.10.100 dev tunl0
```

Tune kernel parameters

```bash
echo 0 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
```

Start the service

```bash
echo "RS SERVER" >index.html
python -m SimpleHTTPServer 8080
```

&emsp;

#### Client test

```bash
curl http://192.168.10.100:8080
```

```
RS SERVER
```

`Test succeeded.`

&emsp;

#### Packet capture on the RS

Listen on the tunnel interface

```bash
tcpdump -i tunl0 -nnn -vvv
```

```bash
08:32:34.269392 IP (tos 0x0, ttl 64, id 51443, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.10.3.46734 > 192.168.10.100.8080: Flags [S], cksum 0xdbc2 (correct), seq 2270636359, win 28200, options [mss 1410,sackOK,TS val 1958324 ecr 0,nop,wscale 7], length 0
08:32:34.271186 IP (tos 0x0, ttl 64, id 51444, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.10.3.46734 > 192.168.10.100.8080: Flags [.], cksum 0xa968 (correct), seq 2270636360, ack 1942327447, win 221, options [nop,nop,TS val 1958326 ecr 1821109], length 0
08:32:34.271345 IP (tos 0x0, ttl 64, id 51445, offset 0, flags [DF], proto TCP (6), length 135)
    192.168.10.3.46734 > 192.168.10.100.8080: Flags [P.], cksum 0xa230 (correct), seq 0:83, ack 1, win 221, options [nop,nop,TS val 1958327 ecr 1821109], length 83: HTTP, length: 83
    GET / HTTP/1.1
    User-Agent: curl/7.29.0
    Host: 192.168.10.100:8080
    Accept: */*
```

`The capture above shows the director has delivered the SYN to the RS over the tunnel.`

Listen for port 8080 traffic on eth1

```bash
tcpdump -i eth1 port 8080 -nnn -vvv
```

```bash
08:31:57.116678 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.10.100.8080 > 192.168.10.3.46732: Flags [S.], cksum 0x95e6 (incorrect -> 0x6552), seq 2019760809, ack 3127325422, win 27960, options [mss 1410,sackOK,TS val 1783956 ecr 1921172,nop,wscale 7], length 0
08:31:57.118567 IP (tos 0x0, ttl 64, id 8764, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.10.100.8080 > 192.168.10.3.46732: Flags [.], cksum 0x95de (incorrect -> 0xfff2), seq 1, ack 84, win 219, options [nop,nop,TS val 1783958 ecr 1921174], length 0
08:31:57.118811 IP (tos 0x0, ttl 64, id 8765, offset 0, flags [DF], proto TCP (6), length 69)
    192.168.10.100.8080 > 192.168.10.3.46732: Flags [P.], cksum 0x95ef (incorrect -> 0x4015), seq 1:18, ack 84, win 219, options [nop,nop,TS val 1783958 ecr 1921174], length 17: HTTP, length: 17
    HTTP/1.0 200 OK
```

`The capture above shows the RS sends its SYN+ACK reply directly back to the client.`

**`Conclusion: LVS tunnel forwarding across subnets works.`**

&emsp;

### LVS+TUNNEL (public Internet)

`The biggest advantage of IP Tunnel mode is that it can forward across subnets, free of the topology constraints of DR and NAT. That brings great deployment flexibility, even across datacenters, though that is not recommended: it adds cross-datacenter traffic and cost, and binding the LVS datacenter's VIP on RSes in another datacenter may be treated as IP spoofing and blocked by the carrier's firewall.`

#### Environment (Vultr VPS)

```bash
Client:      183.54.238.66
VIP:         104.238.150.254
LVS host:    167.179.115.37
Real Server: 202.182.125.31
             139.180.202.67
Mode:        TUNNEL
```

&emsp;

#### **Director configuration**

Request an additional public IP for the director's VPS and configure it on eth0:1

```bash
ifconfig eth0:1 104.238.150.254 netmask 255.255.255.255 broadcast 104.238.150.254 up
route add -host 104.238.150.254 dev eth0:1
```

Tune kernel parameters

```bash
echo "0" >/proc/sys/net/ipv4/ip_forward
echo "1" >/proc/sys/net/ipv4/conf/all/send_redirects
echo "1" >/proc/sys/net/ipv4/conf/default/send_redirects
echo "1" >/proc/sys/net/ipv4/conf/eth0/send_redirects
```

Configure LVS

```bash
ipvsadm -C
ipvsadm -A -t 104.238.150.254:8080 -s rr
ipvsadm -a -t 104.238.150.254:8080 -r 202.182.125.31 -i
ipvsadm -a -t 104.238.150.254:8080 -r 139.180.202.67 -i
```

&emsp;

#### **RS configuration**

Set up the IPIP tunnel

```bash
modprobe ipip
ifconfig tunl0 104.238.150.254 netmask 255.255.255.255 broadcast 104.238.150.254 up
route add -host 104.238.150.254 dev tunl0
```

Tune kernel parameters

```bash
echo 0 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
```

Start the service

```bash
python -m SimpleHTTPServer 8080
```

&emsp;

#### Client test

Browse to `http://104.238.150.254:8080`

`The page does not load!`

&emsp;

#### Packet capture on the RS

Listen on the tunnel interface

```bash
tcpdump -i tunl0 -nnn -vvv
```

```bash
tcpdump: listening on tunl0, link-type RAW (Raw IP), capture size 262144 bytes
09:20:12.713031 IP (tos 0x0, ttl 116, id 29938, offset 0, flags [DF], proto TCP (6), length 48)
    183.54.238.66.63376 > 104.238.150.254.8080: Flags [S], cksum 0x5a81 (correct), seq 3523639844, win 8192, options [mss 1440,nop,nop,sackOK], length 0
09:20:54.742880 IP (tos 0x0, ttl 116, id 30339, offset 0, flags [DF], proto TCP (6), length 52)
    183.54.238.66.63400 > 104.238.150.254.8080: Flags [S], cksum 0xb006 (correct), seq 2103403813, win 8192, options [mss 1440,nop,wscale 2,nop,nop,sackOK], length 0
09:20:54.993380 IP (tos 0x0, ttl 116, id 30342, offset 0, flags [DF], proto TCP (6), length 52)
    183.54.238.66.63402 > 104.238.150.254.8080: Flags [S], cksum 0x8306 (correct), seq 484897436, win 8192, options [mss 1440,nop,wscale 2,nop,nop,sackOK], length 0
09:20:57.744463 IP (tos 0x0, ttl 116, id 30376, offset 0, flags [DF], proto TCP (6), length 52)
```

`The capture above shows the director has delivered the SYN to the RS over the tunnel (note the client keeps retransmitting SYNs).`

Listen for packets between the client and port 8080

```bash
tcpdump -i eth0 host 183.54.238.66 and port 8080 -vvv -nnn
```

```bash
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
09:19:13.659753 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 52)
    104.238.150.254.8080 > 183.54.238.66.63354: Flags [S.], cksum 0xa58c (incorrect -> 0x6d88), seq 2281292955, ack 1846351957, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
09:19:14.861060 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 52)
    104.238.150.254.8080 > 183.54.238.66.63354: Flags [S.], cksum 0xa58c (incorrect -> 0x6d88), seq 2281292955, ack 1846351957, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
09:19:16.660745 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 52)
    104.238.150.254.8080 > 183.54.238.66.63354: Flags [S.], cksum 0xa58c (incorrect -> 0x6d88), seq 2281292955, ack 1846351957, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
09:19:19.061063 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 52)
```

`The capture above shows the RS does send its SYN+ACK reply directly to the client.`

**`Conclusion: LVS tunnel forwarding and the RS's direct replies work, but the replies are dropped by Vultr's firewall as spoofed-IP traffic, so the experiment fails.`**

&emsp;

### LVS+FULLNAT

`LVS deployments today mostly use DR and NAT, but both require the RealServers and the LVS to sit in the same VLAN, which drives up deployment cost; TUNNEL mode can cross VLANs, but the RealServers must load the ipip module and be reachable from the outside, which makes the topology complex and hard to operate.`

`To solve this, a new forwarding mode, FULLNAT, was added to LVS. It differs from NAT in that inbound packets get SNAT (client IP -> internal IP) in addition to DNAT, so the LVS and the RealServers can communicate across VLANs and the RealServers only need internal-network connectivity.`

LVS FULLNAT in practice: https://www.haxi.cc/archives/LVS-FULLNAT实战.html

Related links:
LVS-ospf cluster: http://noops.me/?p=974
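The FULLNAT packet flow described above (DNAT plus SNAT inbound, and the reverse outbound) can be sketched as follows. This is only an illustration with hypothetical dict-based "packets" and made-up example addresses, not the kernel module's logic:

```python
def fullnat_in(pkt, local_ip, rs_ip):
    """Inbound: DNAT (VIP -> RS) plus SNAT (client -> LVS internal IP),
    so the RS only ever talks to the LVS's internal address."""
    return {"src": local_ip, "dst": rs_ip, "data": pkt["data"]}

def fullnat_out(pkt, vip, client_ip):
    """Outbound: undo both translations before returning to the client."""
    return {"src": vip, "dst": client_ip, "data": pkt["data"]}

# hypothetical addresses: client on the Internet, VIP, LVS internal IP, RS
request = {"src": "203.0.113.7", "dst": "10.0.0.100", "data": "GET /"}
to_rs = fullnat_in(request, local_ip="192.168.1.10", rs_ip="192.168.2.20")

rs_reply = {"src": "192.168.2.20", "dst": "192.168.1.10", "data": "200 OK"}
to_client = fullnat_out(rs_reply, vip="10.0.0.100", client_ip="203.0.113.7")
```

Because the RS sees the LVS internal IP as the packet's source, its reply naturally routes back through the director, which is exactly what removes the same-VLAN requirement of DR/NAT.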
