兜兜    2018-09-20 17:44:04    2019-11-14 14:31:45   

hdfs hadoop
Views 839 · Comments 0 · Favorites 0

兜兜    2018-09-20 17:12:38    2019-11-14 14:31:53   

hdfs hadoop
### Environment

OS: `CentOS7`

Software:
- `hadoop`: `2.7.7`

Servers:
- `Hadoop Master`: `172.16.0.3 (master)`: `NameNode` `SecondaryNameNode` `ResourceManager` `DataNode` `NodeManager`
- `Hadoop Slave`: `172.16.0.4 (slave1)`: `DataNode` `NodeManager`
- `Hadoop Slave`: `172.16.0.5 (slave2)`: `DataNode` `NodeManager`
- `Hadoop Slave`: `172.16.0.6 (slave3)`: `DataNode` `NodeManager`
- `Hadoop Slave`: `172.16.0.7 (slave4)`: `DataNode` `NodeManager`

### Preparation

#### Configure hostname resolution
`all hosts`
```bash
cat >> /etc/hosts << EOF
172.16.0.3 master
172.16.0.4 slave1
172.16.0.5 slave2
172.16.0.6 slave3
172.16.0.7 slave4
EOF
```

#### Create an SSH key and enable passwordless login to the slaves
`master` (the `hadoop` user is created in the "Create the user" step below; create it first if you follow these steps strictly in order)
```bash
su - hadoop
ssh-keygen -t rsa
ssh-copy-id slave1
ssh-copy-id slave2
ssh-copy-id slave3
ssh-copy-id slave4
```

#### Download and install Java
Download: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
`all hosts`
```bash
rpm -ivh jdk-8u221-linux-x64.rpm
```

### Install the Hadoop cluster

#### Create the user
`all hosts`
```bash
useradd -d /opt/hadoop hadoop
echo "password"|passwd --stdin hadoop    #set the user's password non-interactively
```

#### Download hadoop
`master`
```bash
curl -O http://apache.javapipe.com/hadoop/common/hadoop-2.7.7/hadoop-2.7.7.tar.gz
tar xfz hadoop-2.7.7.tar.gz
cp -rf hadoop-2.7.7/* /opt/hadoop/
chown -R hadoop:hadoop /opt/hadoop/
```

#### Configure environment variables
`master`
```bash
su - hadoop
cat >> .bash_profile << EOF
## JAVA env variables
export JAVA_HOME=/usr/java/default
export PATH=\$PATH:\$JAVA_HOME/bin
export CLASSPATH=.:\$JAVA_HOME/jre/lib:\$JAVA_HOME/lib:\$JAVA_HOME/lib/tools.jar
## HADOOP env variables
export HADOOP_HOME=/opt/hadoop
export HADOOP_COMMON_HOME=\$HADOOP_HOME
export HADOOP_HDFS_HOME=\$HADOOP_HOME
export HADOOP_MAPRED_HOME=\$HADOOP_HOME
export HADOOP_YARN_HOME=\$HADOOP_HOME
export HADOOP_OPTS="-Djava.library.path=\$HADOOP_HOME/lib/native"
export HADOOP_COMMON_LIB_NATIVE_DIR=\$HADOOP_HOME/lib/native
export PATH=\$PATH:\$HADOOP_HOME/sbin:\$HADOOP_HOME/bin
EOF
source .bash_profile
```

### Configure the Hadoop cluster

#### Edit core-site.xml
`master`
```bash
su - hadoop
vi etc/hadoop/core-site.xml
```
```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000/</value>
  </property>
</configuration>
```

#### Edit hdfs-site.xml
`master`
```bash
vi etc/hadoop/hdfs-site.xml
```
```xml
<configuration>
  <property>
    <!-- current name of the deprecated dfs.data.dir -->
    <name>dfs.datanode.data.dir</name>
    <value>file:///opt/volume/datanode</value>
  </property>
  <property>
    <!-- current name of the deprecated dfs.name.dir -->
    <name>dfs.namenode.name.dir</name>
    <value>file:///opt/volume/namenode</value>
  </property>
</configuration>
```

#### Edit mapred-site.xml
`master`
```bash
vi etc/hadoop/mapred-site.xml
```
```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <!-- legacy 1.x property; ignored under YARN -->
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
```

#### Edit yarn-site.xml
`master`
```bash
vi etc/hadoop/yarn-site.xml
```
```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>${yarn.resourcemanager.hostname}:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.bind-host</name>
    <value>0.0.0.0</value>
  </property>
</configuration>
```

#### Edit hadoop-env.sh
`master`
```bash
vi etc/hadoop/hadoop-env.sh
```
```bash
export JAVA_HOME=/usr/java/default/
```

#### Edit masters
`master`
```bash
cat > etc/hadoop/masters << EOF
master
EOF
```

#### Edit slaves
`master`
```bash
cat > etc/hadoop/slaves << EOF
master
slave1
slave2
slave3
slave4
EOF
```
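Before moving on, it can be worth checking that the XML files parse and that the key settings resolve; a minimal sanity check with the stock `hdfs getconf` tool, run as the `hadoop` user on master:

```bash
# Both commands read the *-site.xml files and fail loudly on malformed XML.
hdfs getconf -confKey fs.defaultFS    # should print hdfs://master:9000/
hdfs getconf -namenodes               # should print master
```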
#### Copy hadoop to the slave nodes
`master`
```bash
su - hadoop
scp -r * slave1:/opt/hadoop/
scp -r * slave2:/opt/hadoop/
scp -r * slave3:/opt/hadoop/
scp -r * slave4:/opt/hadoop/
```

### Format the NameNode
`master`
```bash
su - hadoop
hdfs namenode -format
```

### Start and stop the cluster
`master`
```bash
start-all.sh    #start the hadoop cluster
stop-all.sh     #stop the hadoop cluster
```

### Monitor the processes
`master`
```bash
jps
```
```
21078 Jps
3922 ResourceManager
4050 NodeManager
3431 NameNode
3577 DataNode
3755 SecondaryNameNode
```
`slave nodes`
```bash
jps
```
```
7517 Jps
21298 DataNode
21422 NodeManager
```

### Test the HDFS cluster
```bash
hdfs dfs -mkdir /my_storage                #create a directory
hdfs dfs -put LICENSE.txt /my_storage      #upload a file
hdfs dfs -cat /my_storage/LICENSE.txt      #view the file
hdfs dfs -ls /my_storage/
hdfs dfs -get /my_storage/ ./              #download the files
```

### Web monitoring endpoints
`master`
```bash
http://master:50070
```
#### Browse the HDFS filesystem
```bash
http://master:50070/explorer.html
```
#### Cluster and application info
```bash
http://master:8088
```
#### NodeManager info
```bash
http://master:8042
```

### Start at boot
`master`
```bash
vi /etc/rc.local
```
```bash
su - hadoop -c "/opt/hadoop/sbin/start-all.sh"
```
```bash
chmod +x /etc/rc.d/rc.local
systemctl enable rc-local
systemctl start rc-local
```

### MapReduce in Python
`Goal: find the maximum temperature per year, 1901-1909, in the NOAA weather data. In each data line, positions 15-18 hold the year, 87-91 the temperature, and 92 the quality code. The mapper turns each input line into "year<TAB>temperature" (e.g. 1901 +0056); the reducer takes the mapper output and keeps the maximum per year.`

Mapper program
```bash
cat mapper_noaa.py
```
```python
#!/usr/bin/env python
import sys
import re

pattern = re.compile(r'[01459]')    # quality codes considered valid
for line in sys.stdin:
    year, temperature, q = line[15:19], int(line[87:92]), line[92:93]
    if pattern.match(q) and temperature != 9999:    # 9999 marks a missing reading
        print("{0}\t{1}".format(year, temperature))
```

Reducer program
```bash
cat reducer_noaa.py
```
```python
#!/usr/bin/env python
import sys

current_year = None
current_temp_max = None
for line in sys.stdin:
    year, temperature = line.strip().split('\t')
    try:
        temperature = int(temperature)
    except ValueError:
        continue
    if current_year == year:
        if current_temp_max < temperature:
            current_temp_max = temperature
    else:
        if current_year:    # emit the finished year's maximum
            print("{0} {1}".format(current_year, current_temp_max))
        current_year = year
        current_temp_max = temperature
if current_year:            # emit the last year
    print("{0} {1}".format(current_year, current_temp_max))
```

#### Download the data
```bash
ftp://ftp.ncdc.noaa.gov/pub/data/noaa/    #put each downloaded year's files under a noaa directory
```

#### Upload the data to HDFS
```bash
su - hadoop
hdfs dfs -mkdir /test/                     #create the test directory
hdfs dfs -copyFromLocal noaa /test/noaa    #noaa holds the downloaded weather data
```

#### Run the MapReduce job
```bash
su - hadoop
hadoop jar /opt/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.7.jar -file ./mapper_noaa.py -file ./reducer_noaa.py -mapper ./mapper_noaa.py -reducer ./reducer_noaa.py -input /test/noaa/190[0-9]/ -output /test/noaa_1901_1909_results
```

#### View the results
```bash
hdfs dfs -cat /test/noaa_1901_1909_results/part-00000
```
```
1901 317
1902 244
1903 289
1904 256
1905 283
1906 294
1907 283
1908 289
1909 278
```
`Note: the temperatures are stored multiplied by 10, so the 1901 maximum of 317 means 31.7°C.`
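Because Hadoop Streaming scripts are plain stdin/stdout filters, they can be smoke-tested without the cluster; a minimal local run, assuming a few uncompressed NOAA files sit under `noaa/1901/`:

```bash
# Simulate map -> shuffle/sort -> reduce locally; the output should be one line, e.g. "1901 317".
cat noaa/1901/* | python mapper_noaa.py | sort | python reducer_noaa.py
```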
Views 844 · Comments 0 · Favorites 0

兜兜    2018-09-12 11:08:30    2022-01-25 09:19:49   

v2ray scraper-proxy port-multiplexing sslh
### Background
`The scraper runs against a dial-up (ADSL) server in Jiangsu. The provider forwards only a single remote port and offers no other port mappings, while the scraper itself runs on our own machine and has to reach the proxy over the public network. Port multiplexing solves this: the one forwarded port serves both SSH and the proxy.`

### Components
`Port multiplexer`: `sslh`
`Proxy software`: `v2ray`

### Dial up to obtain a public IP
```bash
adsl-start    #dial up; the command differs between providers, some of which wrap it in their own script
```

### Initialize the server
```bash
yum install epel-release -y    #install epel-release
```

### SSH to the dialed-up IP
`This step checks that the dialed-up IP has no port restrictions, and guards against losing remote access later if the sslh multiplexing fails.`
`If it succeeds, continue to the next step; if it fails, find out why first.`

### sslh

#### Install
```bash
yum install sslh -y
```

#### Configure
```bash
vim /etc/sslh.cfg
```
```yaml
# This is a basic configuration file that should provide
# sensible values for "standard" setup.
verbose: false;
foreground: true;
inetd: false;
numeric: false;
transparent: false;
timeout: 2;
user: "sslh";

# Change hostname with your external address name.
listen: (
  { host: "0.0.0.0"; port: "33890"; }    #the SSH port the provider maps (not 22); sslh must listen on the same port the provider forwards
);
protocols: (
  { name: "ssh"; service: "ssh"; host: "localhost"; port: "22"; fork: true; },    #SSH traffic is forwarded to port 22
  { name: "anyprot"; host: "localhost"; port: "27073"; }    #everything else is forwarded to 27073 (the v2ray port)
);
```

#### Change the SSH listening port
```bash
vim /etc/ssh/sshd_config
```
```bash
Port 22    #change the previous 33890 back to 22
...
```

#### Restart sshd and start sslh
```bash
systemctl restart sshd&&systemctl start sslh    #restart sshd first so it listens on 22, then start sslh on 33890
systemctl enable sslh                           #start on boot
```

#### Reconnect over SSH using the host and port the provider forwards
`If the reconnect succeeds, sslh is forwarding SSH traffic correctly.`

### V2ray

#### Install
```bash
bash <(curl -L -s https://install.direct/go.sh)
```

#### Configure
```bash
vim /etc/v2ray/config.json
```
```json
{
  "inbounds": [{
    "port": 27073,           // must match the port sslh forwards non-SSH traffic to
    "listen": "127.0.0.1",   // listening on the loopback address is enough
    "protocol": "vmess",
    "settings": {
      "clients": [
        {
          "id": "62f8c0f5-69fa-41f8-a7b0-97d43014d478",
          "level": 1,
          "alterId": 64
        }
      ]
    }
  }],
  "outbounds": [{
    "protocol": "freedom",
    "settings": {}
  },{
    "protocol": "blackhole",
    "settings": {},
    "tag": "blocked"
  }],
  "routing": {
    "rules": [
      {
        "type": "field",
        "ip": ["geoip:private"],
        "outboundTag": "blocked"
      }
    ]
  }
}
```

#### Start
```bash
systemctl start v2ray     #start
systemctl enable v2ray    #start on boot
```

#### One-shot install script
```bash
adsl-start&&bash <(curl -L -s https://files.ynotes.cn/biv2ray.sh)
```

### Test a V2ray client against the provider's host and port
`If the client connects, the port multiplexing works: the single forwarded port now serves both the proxy and SSH.`
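Both services on the multiplexed port can be checked from any machine on the public network; a small sketch, assuming the provider forwards `1.2.3.4:33890` (hypothetical address):

```bash
# SSH should be demultiplexed to sshd on port 22.
ssh -p 33890 root@1.2.3.4 'hostname'    # 1.2.3.4 is a placeholder for the dialed-up IP

# Anything that does not open with an SSH banner is handed to v2ray on 27073;
# vmess sends no plain-text greeting, but the TCP connect itself must succeed.
timeout 3 bash -c '</dev/tcp/1.2.3.4/33890' && echo "port reachable"
```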
Views 992 · Comments 0 · Favorites 0

兜兜    2018-08-29 09:51:11    2019-11-14 14:31:25   

metricbeat filebeat elasticsearch logstash kibana
### 0. Introduction

#### **`ELK is short for Elasticsearch, Logstash and Kibana, also called the Elastic Stack. The three are the core suite but not the whole of it; the stack also includes the Filebeat/Heartbeat/Metricbeat/Auditbeat/Packetbeat/Winlogbeat data-collection agents.`**

**Elasticsearch** `A real-time full-text search and analytics engine offering collection, analysis and storage of data. It is a scalable distributed system exposing REST and Java APIs, built on the Apache Lucene search library.`

**Logstash** `A tool for collecting, parsing and filtering logs. It supports almost any log type, including system logs, error logs and custom application logs. It can receive logs from many sources (syslog, message queues such as RabbitMQ, JMX) and write data out in many ways, including email, websockets and Elasticsearch.`

**Kibana** `A web UI for searching, analyzing and visualizing log data stored in Elasticsearch indices. It uses the Elasticsearch REST API to retrieve the data and lets users build custom dashboard views of their own data as well as run ad-hoc queries and filters.`

![](https://files.ynotes.cn/elk_structure.png)

### 1. Environment

OS: `CentOS7`
Software:
- `elasticsearch`: `7.3.1`
- `logstash`: `7.3.1`
- `kibana`: `7.3.1`
- `filebeat`: `7.3.1`
- `metricbeat`: `7.3.1`
- `redis (message queue)`: `5.0.5`

Servers:
`node1(172.16.0.101)`: `elasticsearch` `filebeat` `metricbeat`
`node2(172.16.0.102)`: `elasticsearch` `filebeat` `metricbeat` `kibana` `logstash` `redis`
`node3(172.16.0.103)`: `elasticsearch` `filebeat` `metricbeat`

### 2. Elasticsearch

**`Goal: a three-node Elasticsearch cluster of equal (master-eligible/data) nodes, with node1 as the bootstrap master.`**

#### Import the signing key
`all nodes`
```bash
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```

#### Configure the Elastic Stack repository
`all nodes`
```bash
cat << EOF >/etc/yum.repos.d/elasticsearch.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
```

#### Java
`Note: Elasticsearch ships a bundled JDK (/usr/share/elasticsearch/jdk); no separate install is needed.`

#### Install elasticsearch
`all nodes`
```bash
yum install elasticsearch -y
```

#### Locate the elasticsearch configuration
`The configuration lives where $ES_PATH_CONF points, /etc/elasticsearch by default.`
```bash
grep ES_PATH_CONF /etc/sysconfig/elasticsearch    #check the configuration path
```
```
ES_PATH_CONF=/etc/elasticsearch
```

#### Configure elasticsearch.yml
`node1`
```bash
cat <<EOF >/etc/elasticsearch/elasticsearch.yml
cluster.name: es-ynotes.cn-cluster
node.name: node1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
transport.tcp.port: 9300
transport.tcp.compress: true
network.host: 172.16.0.101
http.port: 9200
discovery.seed_hosts: ["node1", "node2", "node3"]    #cluster nodes
cluster.initial_master_nodes: ["node1"]              #bootstrap master
EOF
```
`node2/node3`
`Note: only node.name and network.host need to change.`

#### Start elasticsearch
```bash
systemctl start elasticsearch
```

#### Check the cluster status
```bash
curl http://172.16.0.101:9200/_cluster/health?pretty
```
```
{
  "cluster_name" : "es-ynotes.cn-cluster",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 12,
  "active_shards" : 12,
  "relocating_shards" : 0,
  "initializing_shards" : 8,
  "unassigned_shards" : 24,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 7,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 1838,
  "active_shards_percent_as_number" : 27.27272727272727
}
```
`The cluster is named es-ynotes.cn-cluster and has 3 data nodes. (The status is red here only because shards were still initializing when this snapshot was taken.)`
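To see that all three nodes joined and which one was elected master, the stock `_cat/nodes` API gives a one-row-per-node summary:

```bash
# The node marked '*' in the master column is the elected master.
curl http://172.16.0.101:9200/_cat/nodes?v
```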
### 3. Kibana

#### Install Kibana
`node2`
```bash
yum install kibana -y
```

#### Configure kibana
```bash
vim /etc/kibana/kibana.yml    #change the following values
```
```yaml
server.host: "0.0.0.0"    #listen address
elasticsearch.hosts: ["http://node1:9200","http://node2:9200","http://node3:9200"]    #the ES cluster nodes
i18n.locale: "zh-CN"    #use the Chinese UI
```

### 4. Redis (run in Docker)

#### Install and start docker
`node2`
```bash
yum install docker -y
systemctl start docker
```

#### Run Redis in Docker
`node2`
```bash
docker run -d --name logredis -p 6379:6379 redis --requirepass "ynotes.cn"
```

### 5. Logstash

#### Install Logstash
`node2`
```bash
yum install logstash -y
```

#### Configure logstash

**Option 1:**
`Data flow`: `logs` -> `logstash` -> `elasticsearch`
`No collection agent (Filebeat/Metricbeat) is involved; logstash reads the logs itself and writes to elasticsearch.`
```bash
cat <<EOF >/etc/logstash/logstash.conf
input{
  file {
    path => ["/var/log/messages"]
    type => "system"
    start_position => beginning
  }
}
filter {
  #no filter rules
}
output{
  elasticsearch {
    hosts => ["node1:9200", "node2:9200", "node3:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"    #note: [@metadata][beat] is only set by a beats input; with a file input use a literal index name
    #user => "elastic"
    #password => "changeme"
  }
}
EOF
```

**Option 2:**
`Data flow`: `logs` -> `Filebeat` -> `logstash` -> `elasticsearch`
`A collection agent (e.g. Filebeat/Metricbeat) ships the data to logstash, which writes it to elasticsearch.`
```bash
cat <<EOF >/etc/logstash/logstash.conf
input {
  beats {
    port => 5044
  }
}
filter {
  #no filter rules
}
output {
  elasticsearch {
    hosts => ["node1:9200", "node2:9200", "node3:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
EOF
```

**Option 3:**
`Data flow`: `logs` -> `Filebeat` -> `redis` -> `logstash` -> `elasticsearch`
`A collection agent (e.g. Filebeat) writes to redis, and logstash pulls from redis and writes to elasticsearch.`
```bash
cat <<EOF >/etc/logstash/logstash.conf
input {
  redis {
    data_type => "list"
    key => "filebeat_list"
    host => "node2"
    port => 6379
    password => "ynotes.cn"
    db => 0
    threads => 4
  }
}
filter {
  #no filter rules
}
output {
  elasticsearch {
    hosts => ["node1:9200", "node2:9200", "node3:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
EOF
```

#### Start Logstash
```bash
systemctl start logstash
```
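For option 3 it is easy to confirm that events are actually flowing through the queue; a quick check of the Redis list through the container started earlier (key name as configured above):

```bash
# A growing length means beats are writing faster than logstash drains the list;
# a steady 0 while documents arrive in Elasticsearch means the pipeline keeps up.
docker exec -it logredis redis-cli -a ynotes.cn llen filebeat_list
```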
#### The Grok filter
`grok matches and splits text by combining predefined regular expressions and maps the pieces to named fields; it is commonly used to pre-process log data. The grok plugin in logstash's filter stage is one implementation.`

`Configuration and debugging`
Given a log line such as:
```bash
localhost GET /index.html 1024 0.016
```
the field types are, in order, IPORHOST, WORD, URIPATHPARAM, INT and NUMBER, which gives the grok expression:
```bash
%{IPORHOST:client} %{WORD:method} %{URIPATHPARAM:request} %{INT:size} %{NUMBER:duration}
```
`Predefined patterns`
grok ships with 120 predefined patterns; the set bundled with logstash is at:
https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns
`Debugging tools`
- the Grok Debugger built into Kibana
- the online debugger: https://grokdebug.herokuapp.com

`Official docs`: https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

### 6. Filebeat

#### Install filebeat
`all nodes`
```bash
yum install filebeat -y
```

#### Collect logs (`system logs as the example`)

**Enable the module**
`all nodes`
```bash
filebeat modules enable system    #enable the matching module for other log types
```

**Configure filebeat**

**Option 1:**
`Data flow`: `logs` -> `Filebeat` -> `elasticsearch`
```bash
vim /etc/filebeat/filebeat.yml
```
```yaml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "node2:5601"
output.elasticsearch:    #output to elasticsearch
  hosts: ["node1:9200", "node2:9200", "node3:9200"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
```

**Option 2:**
`Data flow`: `logs` -> `Filebeat` -> `logstash` -> `elasticsearch`
```bash
vim /etc/filebeat/filebeat.yml
```
```yaml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "node2:5601"
output.logstash:    #output to logstash
  hosts: ["node2:5044"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
```

**Option 3:**
`Data flow`: `logs` -> `Filebeat` -> `redis` -> `logstash` -> `elasticsearch`
```bash
vim /etc/filebeat/filebeat.yml
```
```yaml
filebeat.config.modules:
  enabled: true
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s
setup.kibana:
  host: "node2:5601"
output.redis:    #output to redis
  hosts: ["node2:6379"]
  password: "ynotes.cn"
  key: "filebeat_list"
  db: 0
  timeout: 5
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
#tuning parameters
max_procs: 1
queue.mem:
  events: 1024
  flush.min_events: 512
  flush.timeout: 5s
max_message_bytes: 100000
```

#### Load the index template manually (`note: required whenever filebeat is not writing directly to elasticsearch, i.e. for options 2 and 3`)
```bash
filebeat export template >filebeat.template.json
curl -XPUT -H 'Content-Type: application/json' http://node1:9200/_template/filebeat-7.3.1 -d@filebeat.template.json
```

#### Start Filebeat
```bash
filebeat setup    #loads the sample dashboards into Kibana; skip if already done. If this filebeat cannot reach Kibana directly, run the setup from one that can.
systemctl start filebeat
```

### 7. Metricbeat

#### Install metricbeat
`all nodes`
```bash
yum install metricbeat -y
```

#### Collect metrics (`system metrics as the example`)

**Enable the module**
`all nodes`
```bash
metricbeat modules enable system    #enable the matching module for other metrics
```

**Configure metricbeat**

**Option 1:**
`Data flow`: `metrics` -> `metricbeat` -> `elasticsearch`
```bash
vim /etc/metricbeat/metricbeat.yml
```
```yaml
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.kibana:
  host: "node2:5601"
output.elasticsearch:
  hosts: ["node1:9200", "node2:9200", "node3:9200"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
```

**Option 2:**
`Data flow`: `metrics` -> `metricbeat` -> `logstash` -> `elasticsearch`
```bash
vim /etc/metricbeat/metricbeat.yml
```
```yaml
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "node2:5601"
output.logstash:    #output to logstash
  hosts: ["node2:5044"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
```

**Option 3:**
`Data flow`: `metrics` -> `metricbeat` -> `redis` -> `logstash` -> `elasticsearch`
```bash
vim /etc/metricbeat/metricbeat.yml
```
```yaml
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
output.redis:    #output to redis
  enabled: true
  hosts: ["node2:6379"]
  password: "ynotes.cn"
  key: "metricbeat_list"
  db: 0
  timeout: 5
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
```

#### Load the index template manually (`note: required whenever metricbeat is not writing directly to elasticsearch, i.e. for options 2 and 3`)
```bash
metricbeat export template >metricbeat.template.json
curl -XPUT -H 'Content-Type: application/json' http://node1:9200/_template/metricbeat-7.3.1 -d@metricbeat.template.json
```

#### Start metricbeat
```bash
metricbeat setup    #loads the sample dashboards into Kibana; skip if already done. If this metricbeat cannot reach Kibana directly, run the setup from one that can.
systemctl start metricbeat
```
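Before starting either beat it can save a round trip to validate the configuration and the output connection; both subcommands are built into filebeat and metricbeat (the output test may not support every output type in older releases):

```bash
filebeat test config    # parse /etc/filebeat/filebeat.yml and report errors
filebeat test output    # try to connect to the configured output
```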
### 8. Production setup

`Elasticsearch runs on the office network while the production servers are on Alibaba Cloud. To get logs into Elasticsearch we use filebeat option 3 from above:`
`logs` -> `Filebeat` -> `redis` -> `logstash` -> `elasticsearch`
`filebeat and redis run on the production side, logstash and elasticsearch on the office network. Production exposes the (password-protected) redis port; logstash pulls from it and writes into elasticsearch.`
`nginx/tomcat logs are used as the example.`

#### filebeat configuration
```bash
cat /etc/filebeat/filebeat.yml
```
```yml
filebeat.config.inputs:
  enabled: true
  path: ${path.config}/configs/*.yml    #where the input configuration files live
output.redis:
  hosts: ["172.18.176.146:6379"]    #a redis on the Aliyun private network
  password: "password"              #the redis auth password
  key: "default_list"
  db: 0
  timeout: 5
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
max_procs: 1
queue.mem:
  events: 1024
  flush.min_events: 512
  flush.timeout: 5s
max_message_bytes: 100000
```

Configure the nginx logs
```bash
vim /etc/filebeat/configs/nginx.yml
```
```yml
- type: log
  enabled: true
  paths:
    - /var/log/nginx/domain.com.access.log
  name: "nginx-prod-project_name-api-access_log"
  tags: ['nginx','prod','project_name','api','access_log']
  tail_files: true
  exclude_lines: ['^-']    #drop lines starting with '-' (behind the Aliyun SLB many internal requests log '-' as the first field)
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  name: "nginx-prod-error_log"
  tags: ['nginx','prod','error_log']
  tail_files: true
```

Configure the tomcat logs
```bash
vim /etc/filebeat/configs/tomcat.yml
```
```yml
- type: log
  enabled: true
  paths:
    - /data/app/tomcat/logs/catalina.out.*
  name: "tomcat-prod-project_name-api-catalina_log"
  tags: ['tomcat','prod','project_name','api','catalina_log']
  multiline:
    pattern: '^([0-9]{4}-[0-9]{2}-[0-9]{2}|[0-9]{2}-[a-zA-Z]{3}-[0-9]{4})'    #a line starting with a date in either format begins a new event
    negate: true
    match: after
  tail_files: true
- type: log
  enabled: true
  paths:
    - /data/app/tomcat/logs/localhost.20*
  name: "tomcat-prod-project_name-api-localhost_log"
  tags: ['tomcat','prod','project_name','api','localhost_log']
  multiline:
    pattern: '^[[:space:]]'    #lines starting with whitespace are continuations
    negate: false
    match: after
  tail_files: true
```

#### logstash configuration
```bash
vim /data/elk/logstash/config/logstash.yml
```
```yml
input {
  redis {
    id => 'outer'
    type => 'outer'
    data_type => "list"
    key => "default_list"
    host => "xx.xx.xx.xx"    #the redis exposed by the Aliyun side
    port => 6379
    password => "password"
    db => 0
    threads => 4
  }
}
filter {
  if "nginx" in [tags] and "access_log" in [tags] {
    grok {
      patterns_dir => "/data/elk/logstash/patterns"    #directory of custom grok patterns
      match => ["message","%{NGINXACCESS}"]            #NGINXACCESS is a custom grok pattern (defined below)
    }
    date {
      match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
    geoip {
      source => "client_ip"
    }
  }
  if "nginx" in [tags] and "error_log" in [tags] {
    grok {
      patterns_dir => "/data/elk/logstash/patterns"
      match => ["message", "%{NGINXERRORTIME:logdate}"]    #NGINXERRORTIME is a custom grok pattern
    }
    date {
      match => [ "logdate" , "yyyy/MM/dd HH:mm:ss" ]
      target => "@timestamp"
    }
  }
  if "tomcat" in [tags] and "catalina_log" in [tags] {
    grok {
      patterns_dir => "/data/elk/logstash/patterns"
      match => ["message", "%{TIMESTAMP_ISO8601:logdate} %{WORD:LOGLEVEL}","message","%{TOMCATLOCALHOSTTIME:logdate}"]    #TOMCATLOCALHOSTTIME is a custom grok pattern
    }
    date {
      match => [ "logdate" , "yyyy-MM-dd HH:mm:ss.SSS","dd-MMM-yyyy HH:mm:ss.SSS"]
      target => "@timestamp"
    }
  }
  if "tomcat" in [tags] and "localhost_log" in [tags] {
    grok {
      patterns_dir => "/data/elk/logstash/patterns"
      match => ["message", "%{TOMCATLOCALHOSTTIME:logdate}"]
    }
    date {
      match => [ "logdate" , "dd-MMM-yyyy HH:mm:ss.SSS" ]
      target => "@timestamp"
    }
  }
}
#output configuration
output {
  if [type] == "outer" {
    if "prod" in [tags] and "nginx" in [tags] and "project_name" in [tags] and "api" in [tags] and "access_log" in [tags] {
      elasticsearch {
        hosts => ["192.168.50.251:9200"]    #the office-network Elasticsearch
        index => "prod_nginx_project_name_api_access_log"    #index name
      }
    }
    if "prod" in [tags] and "nginx" in [tags] and "error_log" in [tags] {
      elasticsearch {
        hosts => ["192.168.50.251:9200"]
        index => "prod_nginx_error_log"
      }
    }
    if "prod" in [tags] and "tomcat" in [tags] and "project_name" in [tags] and "api" in [tags] and "catalina_log" in [tags] {
      elasticsearch {
        hosts => ["192.168.50.251:9200"]
        index => "prod_tomcat_project_name_api_catalina_log"
      }
    }
    if "prod" in [tags] and "tomcat" in [tags] and "project_name" in [tags] and "api" in [tags] and "localhost_log" in [tags] {
      elasticsearch {
        hosts => ["192.168.50.251:9200"]
        index => "prod_tomcat_project_name_api_localhost_log"
      }
    }
  }
}
```
"prod_tomcat_project_name_api_localhost_log" } } } } ``` 自定义的GROK匹配格式(`注意:对nginx/tomcat的日志格式做相应修改`) ```bash cat /data/elk/logstash/patterns/nginx ``` ```ini NGINXACCESS %{IPORHOST:client_ip|-} %{IPORHOST:remote_addr} - (%{USERNAME:remote_user}|-) \[%{HTTPDATE:timestamp}\] \"%{HOSTNAME:http_host}\" \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:http_status} (?:%{NUMBER:body_bytes_sent}|-) (?:\"(?:%{URI:referrer}|-)\"|%{QS:referrer}) \"%{WORD:http_x_forwarded_proto}\" %{QS:agent} (?:%{HOSTPORT:upstream_addr}|-) (%{NUMBER:upstream_response_time}|-) (%{NUMBER:request_time}|-) NGINXERRORTIME %{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND} ``` ```bash cat /data/elk/logstash/patterns/tomcat ``` ```ini TOMCATLOCALHOSTTIME %{MONTHDAY}-%{MONTH}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND} ``` &emsp; #### kibana配置 `1.创建对应的索引模式` `2.创建对应的可视化图和仪表板即可`
Views 607 · Comments 0 · Favorites 0

兜兜    2018-08-13 23:05:48    2018-08-13 23:05:48   

tomcat docker container docker-compose orchestration
#### **Project layout**
```bash
competitionShare
|-- docker-compose.yml                #docker-compose orchestration file
|-- fastdfs                           #fastdfs file-server directory
|   |-- build                         #build context
|   |   `-- Dockerfile                #build file
|   |-- data                          #data directory
|   |   |-- storage                   #file storage
|   |   |   |-- data
|   |   |   `-- logs
|   |   `-- tracker                   #tracker logs and metadata
|   |       |-- data
|   |       `-- logs
|   `-- nginx
|       `-- logs
|-- mysql
|   |-- conf
|   |   `-- mysqld.cnf                #mysql configuration file
|   |-- data                          #mysql data directory
|   |-- db_init_sql
|   |   `-- competitionShare.sql      #schema and seed data for the project
|   `-- log
|-- nginx
|   |-- conf
|   |   |-- mysite.template           #nginx template file
|   |   `-- nginx.conf                #nginx configuration file
|   |-- html
|   |   `-- competitionShare_web      #static site for the project
|   |       |-- index.html
|   |       `-- static
|   |-- log
|   `-- ssl                           #ssl certificates
|       |-- demo.xxxxx.org.cn
|       |   |-- fullchain.pem
|       |   `-- privkey.pem
|       `-- fastdfs.xxxxx.org.cn
|           |-- fullchain.pem
|           `-- privkey.pem
`-- tomcat                            #tomcat directory
    |-- conf
    |   `-- server.xml                #tomcat server.xml
    |-- log
    `-- webapps
        |-- competitionShare          #the project's API
        `-- competitionShareBackstage #the project's admin backend
```

#### **Create the directories used by the fastdfs container**
```bash
$ mkdir fastdfs/{build,data,nginx} -p
```
build: the fastdfs build context
data: the fastdfs data directory
nginx: nginx logs

#### **Create fastdfs/build/Dockerfile**
```bash
FROM alpine:3.6
MAINTAINER ynotes <admin@ynotes.cn>

#build arguments
ARG HOME=/root
ARG FASTDFS_VERSION=5.11
ARG LIBFASTCOMMON_VERSION=1.0.38
ARG FASTDFS_NGINX_MODULE_VERSION=1.20
ARG NGINX_VERSION=1.12.1
ARG FDFS_NGX_PORT    #the FDFS_NGX_PORT build argument
ARG TRACKER_PORT

#environment variables
ENV FDFS_NGX_PORT "$FDFS_NGX_PORT"    #picks up FDFS_NGX_PORT from docker-compose
ENV TRACKER_PORT "$TRACKER_PORT"      #picks up TRACKER_PORT from docker-compose

#download the packages
RUN cd ${HOME} \
 && sed -i 's#http://[^/]*/\(.*\)$#http://mirrors.aliyun.com/\1#g' /etc/apk/repositories \
 && apk update \
 && apk add --no-cache --virtual .build-deps bash gcc libc-dev make openssl-dev pcre-dev zlib-dev linux-headers curl gnupg libxslt-dev gd-dev geoip-dev \
 && curl -fLS https://github.com/happyfish100/fastdfs/archive/V${FASTDFS_VERSION}.tar.gz -o V${FASTDFS_VERSION}.tar.gz \
 && curl -fLS https://github.com/happyfish100/libfastcommon/archive/V${LIBFASTCOMMON_VERSION}.tar.gz -o V${LIBFASTCOMMON_VERSION}.tar.gz \
 && curl -fLS https://github.com/happyfish100/fastdfs-nginx-module/archive/V${FASTDFS_NGINX_MODULE_VERSION}.tar.gz -o V${FASTDFS_NGINX_MODULE_VERSION}.tar.gz \
 && curl -fSL http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz -o nginx-${NGINX_VERSION}.tar.gz \
 && tar xf V${FASTDFS_VERSION}.tar.gz \
 && tar xf V${LIBFASTCOMMON_VERSION}.tar.gz \
 && tar xf V${FASTDFS_NGINX_MODULE_VERSION}.tar.gz \
 && tar zxf nginx-${NGINX_VERSION}.tar.gz

#build and install
RUN cd ${HOME}/libfastcommon-${LIBFASTCOMMON_VERSION}/ \
 && ./make.sh \
 && ./make.sh install \
 && cd ${HOME}/fastdfs-${FASTDFS_VERSION}/ \
 && ./make.sh \
 && ./make.sh install \
 && sed "s@/home/yuqing/fastdfs@/data/fastdfs/tracker@g" /etc/fdfs/tracker.conf.sample > /etc/fdfs/tracker.conf \
 && sed "s@/home/yuqing/fastdfs@/data/fastdfs/storage@g" /etc/fdfs/storage.conf.sample > /etc/fdfs/storage.conf \
 && sed "s@/home/yuqing/fastdfs@/data/fastdfs/storage@g" /etc/fdfs/client.conf.sample > /etc/fdfs/client.conf \
 && sed -i 's#CORE_INCS=.*#CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"#g' ${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/config \
 && sed -i 's#ngx_module_incs=.*#ngx_module_incs="/usr/include/fastdfs /usr/include/fastcommon/"#g' ${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/config \
 && chmod u+x ${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/config \
 && cd ${HOME}/nginx-${NGINX_VERSION} \
 && ./configure --add-module=${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src \
 && make && make install

#configure
RUN cp ${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/mod_fastdfs.conf /etc/fdfs/ \
 && sed -i "s#^store_path0.*#store_path0 = /data/fastdfs/storage#g" /etc/fdfs/mod_fastdfs.conf \
 && sed -i "s#^url_have_group_name.*#url_have_group_name = true#g" /etc/fdfs/mod_fastdfs.conf \
 && cd ${HOME}/fastdfs-${FASTDFS_VERSION}/conf/ \
 && cp http.conf mime.types /etc/fdfs/ \
 && echo -e "worker_processes 2;\nevents { \nworker_connections 10240; \n}\nhttp { \ninclude mime.types;\ndefault_type application/octet-stream;\nsendfile on;\nkeepalive_timeout 65;\nserver {\nlisten $FDFS_NGX_PORT;\nserver_name localhost;\nlocation ~/group([0-9])/M00 {\nngx_fastdfs_module;\n}\n}\n}">/usr/local/nginx/conf/nginx.conf

#clean up
RUN rm -rf ${HOME}/* \
 && apk del .build-deps gcc libc-dev make openssl-dev linux-headers curl gnupg libxslt-dev gd-dev geoip-dev \
 && apk add bash pcre-dev zlib-dev

#generate the start script
RUN echo -e "mkdir -p /data/fastdfs/storage/data\nmkdir -p /data/fastdfs/tracker\nln -s /data/fastdfs/storage/data /data/fastdfs/storage/data/M00\nsed -i "s/^tracker_server=.*$/tracker_server=\$HOST_IP:$TRACKER_PORT/g" /etc/fdfs/storage.conf\nsed -i "s/^tracker_server=.*$/tracker_server=\$HOST_IP:$TRACKER_PORT/g" /etc/fdfs/mod_fastdfs.conf\n/etc/init.d/fdfs_trackerd start \n/etc/init.d/fdfs_storaged start\n/usr/local/nginx/sbin/nginx\ntail -f /usr/local/nginx/logs/access.log" >/start.sh \
 && chmod +x /start.sh

ENTRYPOINT ["/bin/bash","/start.sh"]
```
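docker-compose builds this image as part of `up` (see the compose file below), but while iterating on the Dockerfile it can be quicker to build it on its own; a sketch passing the same build args the compose file uses:

```bash
# Standalone build; the --build-arg values mirror docker-compose.yml.
docker build -t fastdfs-nginx:5.11 \
  --build-arg TRACKER_PORT=22122 \
  --build-arg FDFS_NGX_PORT=8888 \
  fastdfs/build/
```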
#### **Create the directories used by the mysql container**
```bash
$ mkdir mysql/{conf,data,db_init_sql,log} -p
$ chmod 777 mysql/log
```
conf: mysql configuration
data: the mysql data directory
log: mysql logs, opened up to mode 777

#### **Edit the mysql configuration file mysql/conf/mysqld.cnf**
```bash
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
default-time-zone = '+08:00'
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
character-set-client-handshake = FALSE
innodb_buffer_pool_size = 128M
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[client]
default-character-set = utf8mb4

[mysql]
default-character-set = utf8mb4
```

#### **Create the directories used by the nginx container**
```bash
$ mkdir nginx/{conf,html,log,ssl}
$ mkdir nginx/ssl/{demo.xxxxx.org.cn,fastdfs.xxxxx.org.cn}
$ chmod 777 nginx/log
```
conf: nginx configuration
html: static sites
log: logs
ssl: ssl certificates

#### **Edit nginx/conf/nginx.conf**
```nginx
user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    use epoll;
    worker_connections 10240;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/demo.xxxxx.org.cn.conf;
}
```

#### **Edit nginx/conf/mysite.template**
```nginx
upstream my_tomcat{
    server $TOMCAT:8080;
}
upstream my_fdfs{
    server $FASTDFS:8888;
}
server {
    listen $NGINX_PORT;
    server_name $NGINX_HOST;
    charset utf-8;
    rewrite ^(.*)$ https://${server_name}$1 permanent;
}
server {
    listen $NGINX_SSL_PORT ssl http2;
    server_name $NGINX_FASTDFS_HOST;
    add_header X-Frame-Options SAMEORIGIN;
    access_log /var/log/nginx/fastdfs.xxxxx.org.cn.access.log main;
    location ~ .*.(svn|Git|cvs) {
        deny all;
    }
    ssl_certificate "/etc/nginx/ssl/fastdfs.xxxxx.org.cn/fullchain.pem";
    ssl_certificate_key "/etc/nginx/ssl/fastdfs.xxxxx.org.cn/privkey.pem";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
    ssl_prefer_server_ciphers on;
    location ~ /group1/M00 {
        add_header Strict-Transport-Security max-age=86400;
        proxy_next_upstream http_502 http_504 error timeout invalid_header;
        proxy_pass http://my_fdfs;
    }
}
server {
    listen $NGINX_SSL_PORT ssl http2 default_server;
    server_name $NGINX_HOST;
    add_header X-Frame-Options SAMEORIGIN;
    access_log /var/log/nginx/demo.xxxxx.org.cn.access.log main;
    location ~ .*.(svn|Git|cvs) {
        deny all;
    }
    location / {
        add_header Strict-Transport-Security max-age=86400;
        root /var/www/html/competitionShare_web;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
    ssl_certificate "/etc/nginx/ssl/demo.xxxxx.org.cn/fullchain.pem";
    ssl_certificate_key "/etc/nginx/ssl/demo.xxxxx.org.cn/privkey.pem";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
    ssl_prefer_server_ciphers on;
    # max upload size
    client_max_body_size 75M;    # adjust to taste
    error_page 404 /404.html;
    location = /40x.html {
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
    location /competitionShare {
        add_header Strict-Transport-Security max-age=86400;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_pass http://my_tomcat;
    }
    location ^~ /competitionShareBackstage {
        add_header Strict-Transport-Security max-age=86400;
        proxy_set_header Host $host:$server_port;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_pass http://my_tomcat;
    }
}
```
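The compose file later renders this template with `envsubst` inside the nginx container; the substitution can be dry-run on the host to spot stray variables, a sketch with the same values the compose file sets:

```bash
# Render the template exactly as the container command will; only the listed
# variables are substituted, so nginx's own $vars pass through untouched.
NGINX_HOST=demo.xxxxx.org.cn NGINX_FASTDFS_HOST=fastdfs.xxxxx.org.cn \
NGINX_PORT=80 NGINX_SSL_PORT=443 TOMCAT=cs_web-tomcat FASTDFS=cs_web-fastdfs \
envsubst '$NGINX_HOST $NGINX_PORT $NGINX_SSL_PORT $TOMCAT $FASTDFS $NGINX_FASTDFS_HOST' \
  < nginx/conf/mysite.template
```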
"/etc/nginx/ssl/fastdfs.xxxxx.org.cn/fullchain.pem"; ssl_certificate_key "/etc/nginx/ssl/fastdfs.xxxxx.org.cn/privkey.pem"; ssl_session_cache shared:SSL:1m; ssl_session_timeout 10m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE; ssl_prefer_server_ciphers on; location ~ /group1/M00 { add_header Strict-Transport-Security max-age=86400; proxy_next_upstream http_502 http_504 error timeout invalid_header; proxy_pass http://my_fdfs; } } server { listen $NGINX_SSL_PORT ssl http2 default_server; server_name $NGINX_HOST; add_header X-Frame-Options SAMEORIGIN; access_log /var/log/nginx/demo.xxxxx.org.cn.access.log main; location ~ .*.(svn|Git|cvs) { deny all; } location / { add_header Strict-Transport-Security max-age=86400; root /var/www/html/competitionShare_web; index index.html index.htm; try_files $uri $uri/ /index.html =404; } ssl_certificate "/etc/nginx/ssl/demo.xxxxx.org.cn/fullchain.pem"; ssl_certificate_key "/etc/nginx/ssl/demo.xxxxx.org.cn/privkey.pem"; ssl_session_cache shared:SSL:1m; ssl_session_timeout 10m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE; ssl_prefer_server_ciphers on; # max upload size client_max_body_size 75M; # adjust to taste # Django media # Finally, send all non-media requests to the Django server. error_page 404 /404.html; location = /40x.html { } error_page 500 502 503 504 /50x.html; location = /50x.html { } location /competitionShare { add_header Strict-Transport-Security max-age=86400; proxy_set_header Host $host; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_redirect off; proxy_pass http://my_tomcat; } location ^~ /competitionShareBackstage { add_header Strict-Transport-Security max-age=86400; proxy_set_header Host $host:$server_port; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_redirect off; proxy_pass http://my_tomcat; } } ``` #### **拷贝SSL证书到对应的nginx/ssl/{demo.xxxxx.org.cn,fastdfs.xxxxx.org.cn}目录** ```bash $ scp fullchain.pem root@docker-host:/root/docker_compose_demo/competitionShare/nginx/ssl/demo.xxxxx.org.cn $ scp privkey.pem root@docker-host:/root/docker_compose_demo/competitionShare/nginx/ssl/demo.xxxxx.org.cn $ scp fullchain.pem root@docker-host:/root/docker_compose_demo/competitionShare/nginx/ssl/fastdfs.xxxxx.org.cn $ scp privkey.pem root@docker-host:/root/docker_compose_demo/competitionShare/nginx/ssl/fastdfs.xxxxx.org.cn ``` #### **创建tomcat容器使用的目录** ```bash $ mkdir tomcat/{conf,log,webapps} ``` conf:tomcat配置存放目录 log:存放日志目录 webapps: 项目存放目录 #### **编辑tomcat/conf/server.xml** ```xml <?xml version='1.0' encoding='utf-8'?> <Server port="8005" shutdown="SHUTDOWN"> <Listener className="org.apache.catalina.startup.VersionLoggerListener" /> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" /> <GlobalNamingResources> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" 
factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <Service name="Catalina"> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <Engine name="Catalina" defaultHost="localhost"> <Realm className="org.apache.catalina.realm.LockOutRealm"> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> </Realm> <Valve className="org.apache.catalina.valves.RemoteIpValve" remoteIpHeader="X-Forwarded-For" protocolHeader="X-Forwarded-Proto" protocolHeaderHttpsValue="https"/> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"> <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log" suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" /> </Host> </Engine> </Service> </Server> ``` #### **拷贝项目到tomcat/webapps目录** ```bash $ scp competitionShare root@docker-host:/root/docker_compose_demo/competitionShare/tomcat/webapps $ scp competitionShareBackstage root@docker-host:/root/docker_compose_demo/competitionShare/tomcat/webapps ``` #### **替换tomcat项目中mysql和fastdfs配置** 数据库配置 ```bash env=${PROJECT_ENV} demo.jdbc_url=${DEMO_JDBC_URL} demo.jdbc_username=${DEMO_JDBC_USER} demo.jdbc_password=${DEMO_JDBC_PASS} ``` fastdfs配置 ```bash tracker_server = fastdfs:22122 ``` #### **编辑docker-compose.yml** ```xml version: '3' services: db: image: mysql:5.7 restart: always container_name: cs_web-db environment: MYSQL_ROOT_PASSWORD: abc123456 MYSQL_DATABASE: competitionShare MYSQL_USER: demo MYSQL_PASSWORD: abc123456 volumes: - ./mysql/conf/mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf - ./mysql/db_init_sql:/docker-entrypoint-initdb.d - ./mysql/data:/var/lib/mysql - ./mysql/log:/var/log fastdfs: build: context: ./fastdfs/build/ dockerfile: Dockerfile args: TRACKER_PORT: 22122 FDFS_NGX_PORT: 8888 image: fastdfs-nginx:5.11 restart: always container_name: cs_web-fastdfs environment: TRACKER_PORT: 22122 FDFS_NGX_PORT: 8888 HOST_IP: fastdfs volumes: - ./fastdfs/data:/data/fastdfs - ./fastdfs/nginx/logs:/usr/local/nginx/logs/ nginx: image: nginx:stable restart: always container_name: cs_web-nginx environment: NGINX_HOST: demo.xxxxx.org.cn NGINX_FASTDFS_HOST: fastdfs.xxxxx.org.cn NGINX_PORT: 80 NGINX_SSL_PORT: 443 TOMCAT: cs_web-tomcat FASTDFS: cs_web-fastdfs ports: - 80:80 - 443:443 volumes: - ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf - ./nginx/conf/mysite.template:/etc/nginx/conf.d/mysite.template - ./nginx/ssl/demo.xxxxx.org.cn/fullchain.pem:/etc/nginx/ssl/demo.xxxxx.org.cn/fullchain.pem - ./nginx/ssl/demo.xxxxx.org.cn/privkey.pem:/etc/nginx/ssl/demo.xxxxx.org.cn/privkey.pem - ./nginx/ssl/fastdfs.xxxxx.org.cn/fullchain.pem:/etc/nginx/ssl/fastdfs.xxxxx.org.cn/fullchain.pem - ./nginx/ssl/fastdfs.xxxxx.org.cn/privkey.pem:/etc/nginx/ssl/fastdfs.xxxxx.org.cn/privkey.pem - ./nginx/log/:/var/log/nginx/ - ./nginx/html/competitionShare_web/:/var/www/html/competitionShare_web/ command: /bin/bash -c "envsubst '$$NGINX_HOST $$NGINX_PORT $$NGINX_SSL_PORT $$TOMCAT $$FASTDFS $$NGINX_FASTDFS_HOST' < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/demo.xxxxx.org.cn.conf && nginx -g 'daemon off;'" tomcat: image: tomcat:8.0.53-jre8 restart: always depends_on: - db - fastdfs container_name: cs_web-tomcat environment: PROJECT_ENV: demo JAVA_OPTS: "-Dsupplements.host=supplements" CATALINA_OPTS: "-server -Xms256M -Xmx1024M -XX:MaxNewSize=256m" DEMO_JDBC_URL: 
#### **Edit docker-compose.yml**
```yaml
version: '3'
services:
  db:
    image: mysql:5.7
    restart: always
    container_name: cs_web-db
    environment:
      MYSQL_ROOT_PASSWORD: abc123456
      MYSQL_DATABASE: competitionShare
      MYSQL_USER: demo
      MYSQL_PASSWORD: abc123456
    volumes:
      - ./mysql/conf/mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
      - ./mysql/db_init_sql:/docker-entrypoint-initdb.d
      - ./mysql/data:/var/lib/mysql
      - ./mysql/log:/var/log
  fastdfs:
    build:
      context: ./fastdfs/build/
      dockerfile: Dockerfile
      args:
        TRACKER_PORT: 22122
        FDFS_NGX_PORT: 8888
    image: fastdfs-nginx:5.11
    restart: always
    container_name: cs_web-fastdfs
    environment:
      TRACKER_PORT: 22122
      FDFS_NGX_PORT: 8888
      HOST_IP: fastdfs
    volumes:
      - ./fastdfs/data:/data/fastdfs
      - ./fastdfs/nginx/logs:/usr/local/nginx/logs/
  nginx:
    image: nginx:stable
    restart: always
    container_name: cs_web-nginx
    environment:
      NGINX_HOST: demo.xxxxx.org.cn
      NGINX_FASTDFS_HOST: fastdfs.xxxxx.org.cn
      NGINX_PORT: 80
      NGINX_SSL_PORT: 443
      TOMCAT: cs_web-tomcat
      FASTDFS: cs_web-fastdfs
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/conf/mysite.template:/etc/nginx/conf.d/mysite.template
      - ./nginx/ssl/demo.xxxxx.org.cn/fullchain.pem:/etc/nginx/ssl/demo.xxxxx.org.cn/fullchain.pem
      - ./nginx/ssl/demo.xxxxx.org.cn/privkey.pem:/etc/nginx/ssl/demo.xxxxx.org.cn/privkey.pem
      - ./nginx/ssl/fastdfs.xxxxx.org.cn/fullchain.pem:/etc/nginx/ssl/fastdfs.xxxxx.org.cn/fullchain.pem
      - ./nginx/ssl/fastdfs.xxxxx.org.cn/privkey.pem:/etc/nginx/ssl/fastdfs.xxxxx.org.cn/privkey.pem
      - ./nginx/log/:/var/log/nginx/
      - ./nginx/html/competitionShare_web/:/var/www/html/competitionShare_web/
    command: /bin/bash -c "envsubst '$$NGINX_HOST $$NGINX_PORT $$NGINX_SSL_PORT $$TOMCAT $$FASTDFS $$NGINX_FASTDFS_HOST' < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/demo.xxxxx.org.cn.conf && nginx -g 'daemon off;'"
  tomcat:
    image: tomcat:8.0.53-jre8
    restart: always
    depends_on:
      - db
      - fastdfs
    container_name: cs_web-tomcat
    environment:
      PROJECT_ENV: demo
      JAVA_OPTS: "-Dsupplements.host=supplements"
      CATALINA_OPTS: "-server -Xms256M -Xmx1024M -XX:MaxNewSize=256m"
      DEMO_JDBC_URL: jdbc:mysql://db:3306/competitionShare?characterEncoding=UTF-8
      DEMO_JDBC_USER: demo
      DEMO_JDBC_PASS: abc123456
      FDFS_URL: https://fastdfs.demo.org.cn/
    volumes:
      - ./tomcat/webapps:/usr/local/tomcat/webapps
      - ./tomcat/conf/server.xml:/usr/local/tomcat/conf/server.xml
      - ./tomcat/log:/log
```

#### **Start**
```bash
$ docker-compose up
```
![](https://files.ynotes.cn/18-8-14/7776948.jpg)

#### **Open in a browser**
![](https://files.ynotes.cn/18-8-14/75371827.jpg)
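Once the first foreground run looks clean, the usual compose commands keep the stack running and observable:

```bash
docker-compose up -d             # run detached
docker-compose ps                # all four services should show Up
docker-compose logs -f tomcat    # follow the application log while testing
```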
Views 2181 · Comments 1 · Favorites 0

兜兜    2018-08-12 15:22:41    2019-11-14 14:32:31   

mysql ProxySQL
### 1. Environment

OS: `CentOS7`
Database proxy: `ProxySQL: 1.4.14`
Servers:
`master`: `172.16.0.100/db1`
`slave`: `172.16.0.101/db2`
`slave/ProxySQL`: `172.16.0.102/db3`

### 2. Prerequisites

`a. A working MySQL master/slave replication setup.`
`b.` **`read_only=on`** `enabled on the slaves.`

### 3. Installing ProxySQL

#### Install the dependency packages
`db3`
```bash
yum -y install perl-DBD-MySQL perl-DBI perl-Time-HiRes perl-IO-Socket-SSL
```

#### ProxySQL download locations
GitHub: https://github.com/sysown/proxysql/releases
Percona: https://www.percona.com/downloads/proxysql/

#### Install ProxySQL
`db3`
```bash
yum install -y https://www.percona.com/downloads/proxysql/proxysql-1.4.14/binary/redhat/7/x86_64/proxysql-1.4.14-1.1.el7.x86_64.rpm
```

#### Start ProxySQL
`db3`
```bash
systemctl start proxysql
```

#### Check the listeners
`db3`
```bash
netstat -tunlp |grep proxysql
```
```
tcp 0 0 0.0.0.0:6032 0.0.0.0:* LISTEN 13331/proxysql
tcp 0 0 0.0.0.0:6033 0.0.0.0:* LISTEN 13331/proxysql
```
`6032 is the admin interface; 6033 serves client traffic.`

#### Log in to ProxySQL
`db3`
```bash
mysql -uadmin -padmin -h 127.0.0.1 -P 6032
```

#### Inspect ProxySQL
`db3`
```sql
mysql> show databases;
+-----+---------------+-------------------------------------+
| seq | name          | file                                |
+-----+---------------+-------------------------------------+
| 0   | main          |                                     |
| 2   | disk          | /var/lib/proxysql/proxysql.db       |
| 3   | stats         |                                     |
| 4   | monitor       |                                     |
| 5   | stats_history | /var/lib/proxysql/proxysql_stats.db |
+-----+---------------+-------------------------------------+

#Four databases matter here: main, disk, stats and monitor (stats_history keeps historical statistics). Their roles:
#main: the in-memory configuration database (MEMORY); holds the backend instances, user credentials, routing rules and so on.
mysql> show tables;
+--------------------------------------------+
| tables                                     |
+--------------------------------------------+
| global_variables                           |
| mysql_collations                           |
| mysql_group_replication_hostgroups         |
| mysql_query_rules                          |
| mysql_query_rules_fast_routing             |
| mysql_replication_hostgroups               |
| mysql_servers                              |
| mysql_users                                |
| proxysql_servers                           |
| runtime_checksums_values                   |
| runtime_global_variables                   |
| runtime_mysql_group_replication_hostgroups |
| runtime_mysql_query_rules                  |
| runtime_mysql_query_rules_fast_routing     |
| runtime_mysql_replication_hostgroups       |
| runtime_mysql_servers                      |
| runtime_mysql_users                        |
| runtime_proxysql_servers                   |
| runtime_scheduler                          |
| scheduler                                  |
+--------------------------------------------+
#Key tables in main:
#mysql_servers - the list of backend MySQL servers.
#mysql_users - the application accounts and the monitoring account.
#mysql_query_rules - the rules that route queries to different backends.
#Note: tables whose names start with runtime_ reflect ProxySQL's live configuration and cannot be changed with DML.
#Modify the corresponding non-runtime table, then "LOAD ... TO RUNTIME" to apply and "SAVE ... TO DISK" to persist across restarts.
#disk: the on-disk persisted configuration.
#stats: aggregated statistics.
#monitor: monitoring data, including backend health.
```

### 4. Configuring ProxySQL monitoring

#### On the master, create the monitoring and application accounts and grant privileges
`db1`
```sql
mysql> create user 'monitor'@'172.16.0.%' identified by 'monitor';
mysql> grant all privileges on *.* to 'monitor'@'172.16.0.%' with grant option;
mysql> create user 'proxysql'@'172.16.0.%' identified by 'proxysql';
mysql> grant all privileges on *.* to 'proxysql'@'172.16.0.%' with grant option;
mysql> flush privileges;
```
(ALL privileges is more than the monitor account strictly needs, but it keeps the example simple.)

#### Add the servers to ProxySQL
`db3`
```sql
mysql> insert into mysql_servers(hostgroup_id,hostname,port) values(10,'172.16.0.100',3306);
mysql> insert into mysql_servers(hostgroup_id,hostname,port) values(10,'172.16.0.101',3306);
mysql> insert into mysql_servers(hostgroup_id,hostname,port) values(10,'172.16.0.102',3306);
```
#### View the server list
`db3` (this snapshot was taken after the replication hostgroups in section 5 were loaded, which is why the slaves already appear in hostgroup 20)
```sql
mysql> select * from mysql_servers;
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname     | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 10           | 172.16.0.100 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 20           | 172.16.0.101 | 3306 | ONLINE | 10     | 0           | 1000            | 0                   | 0       | 0              |         |
| 20           | 172.16.0.102 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
```

#### Configure the monitoring account in ProxySQL
`db3`
```sql
mysql> set mysql-monitor_username='monitor';
mysql> set mysql-monitor_password='monitor';
mysql> load mysql variables to runtime;
mysql> save mysql variables to disk;
```

#### Verify the monitoring data
`db3`
```sql
mysql> select * from monitor.mysql_server_connect_log order by time_start_us desc limit 6;
+--------------+------+------------------+-------------------------+---------------+
| hostname     | port | time_start_us    | connect_success_time_us | connect_error |
+--------------+------+------------------+-------------------------+---------------+
| 172.16.0.100 | 3306 | 1565593172754235 | 2118                    | NULL          |
| 172.16.0.101 | 3306 | 1565593172137991 | 2729                    | NULL          |
| 172.16.0.102 | 3306 | 1565593171521858 | 773                     | NULL          |
| 172.16.0.100 | 3306 | 1565593113076006 | 2163                    | NULL          |
| 172.16.0.101 | 3306 | 1565593112298748 | 2377                    | NULL          |
| 172.16.0.102 | 3306 | 1565593111521583 | 628                     | NULL          |
+--------------+------+------------------+-------------------------+---------------+
```

### 5. Configuring the replication hostgroups

#### Define the writer/reader hostgroups
`db3`
```sql
mysql> insert into mysql_replication_hostgroups(writer_hostgroup,reader_hostgroup,comment) values (10,20,'proxy');
mysql> load mysql servers to runtime;
mysql> save mysql servers to disk;
# writer_hostgroup is the writer group id and reader_hostgroup the reader group id; this setup uses 10 for writes and 20 for reads.
mysql> select * from mysql_servers;
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname     | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 10           | 172.16.0.100 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 20           | 172.16.0.101 | 3306 | ONLINE | 10     | 0           | 1000            | 0                   | 0       | 0              |         |
| 20           | 172.16.0.102 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
# ProxySQL sorts the servers into groups by their read_only value: the master (read_only=0) lands in writer group 10, and the slaves (read_only=1) land in reader group 20.
```

#### Configure the application account (`defaults to the writer group, with transaction persistence enabled`)
`db3`
```sql
insert into mysql_users(username,password,default_hostgroup) values ('proxysql','proxysql',10);
update mysql_users set transaction_persistent=1 where username='proxysql';    #set to 1 to keep a whole transaction on one server and avoid dirty/phantom reads
load mysql users to runtime;
save mysql users to disk;
```

#### Verify which server a login lands on
`db3`
```bash
mysql -uproxysql -pproxysql -h 172.16.0.102 -P 6033
```
```sql
mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| db1        |
+------------+
# By default the connection lands on the master (db1).
```
### 6. Configuring the read/write-split rules

#### Add the query rules
`db3`
```sql
mysql> insert into mysql_query_rules(active,match_pattern,destination_hostgroup,apply) VALUES(1,'^SELECT.*FOR UPDATE$',10,1);
mysql> insert into mysql_query_rules(active,match_pattern,destination_hostgroup,apply) VALUES(1,'^SELECT',20,1);
mysql> load mysql query rules to runtime;
mysql> save mysql query rules to disk;
```

#### Set the server weights
`db3`
```sql
mysql> update mysql_servers set weight=10 where hostname='172.16.0.101';
mysql> load mysql servers to runtime;
mysql> save mysql servers to disk;
# 172.16.0.101 now has weight 10 while 172.16.0.102 keeps the default weight 1, so reads are spread roughly 10:1.
```

### 7. Testing the read/write split

#### Run a select
`db3`
```bash
mysql -uproxysql -pproxysql -h 172.16.0.102 -P 6033
```
```sql
mysql> select node_name,@@hostname from percona.example where node_id=1;
```

#### Run a select ... for update
`db3`
```sql
mysql> select node_name,@@hostname from percona.example where node_id=1 for update;
```

#### Run an update
`db3`
```sql
mysql> update percona.example set node_name='test' where node_id=1;
```

#### Check the query statistics in ProxySQL
`db3`
```bash
mysql -uadmin -padmin -h 127.0.0.1 -P 6032
```
```sql
mysql> select hostgroup,digest_text from stats_mysql_query_digest;
+-----------+------------------------------------------------------------------------------+
| hostgroup | digest_text                                                                  |
+-----------+------------------------------------------------------------------------------+
| 10        | update percona.example set node_name=? where node_id=?                      |
| 10        | select node_name,@@hostname from percona.example where node_id=? for update |
| 20        | select node_name,@@hostname from percona.example where node_id=?            |
+-----------+------------------------------------------------------------------------------+
```
`Plain SELECTs went to the reader group (20) while the update and the SELECT ... FOR UPDATE went to the writer group (10): the read/write split works.`

Reference: https://blog.51cto.com/sumongodb/2130453
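Where each hostgroup's traffic lands can also be read from the connection-pool statistics; `stats_mysql_connection_pool` is a stock ProxySQL admin table:

```bash
# Queries should grow on hostgroup 20 for plain SELECTs
# and on hostgroup 10 for writes and SELECT ... FOR UPDATE.
mysql -uadmin -padmin -h 127.0.0.1 -P 6032 -e \
  "select hostgroup,srv_host,status,Queries from stats_mysql_connection_pool;"
```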
Views 643 · Comments 0 · Favorites 0

兜兜    2018-08-10 16:11:21    2018-08-10 16:11:21   

docker container fastdfs Dockerfile fdfs
#### **Environment:**
OS: **Centos7**
Docker version: **18.03.1-ce, build 9ee9f40**
Container network: **bridged on docker0**
Container subnet: **10.10.0.0/24**

#### **Dockerfile**
```bash
FROM alpine:3.6
MAINTAINER ynotes.cn <admin@ynotes.cn>

#environment variables
ENV NGINX_PORT 80
ENV FASTDFS_PORT 22122

#build arguments
ARG HOME=/root
ARG FASTDFS_VERSION=5.11
ARG LIBFASTCOMMON_VERSION=1.0.38
ARG FASTDFS_NGINX_MODULE_VERSION=1.20
ARG NGINX_VERSION=1.12.1

#download the packages
RUN cd ${HOME} \
 && sed -i 's#http://[^/]*/\(.*\)$#http://mirrors.aliyun.com/\1#g' /etc/apk/repositories \
 && apk update \
 && apk add --no-cache --virtual .build-deps bash gcc libc-dev make openssl-dev pcre-dev zlib-dev linux-headers curl gnupg libxslt-dev gd-dev geoip-dev \
 && curl -fLS https://github.com/happyfish100/fastdfs/archive/V${FASTDFS_VERSION}.tar.gz -o V${FASTDFS_VERSION}.tar.gz \
 && curl -fLS https://github.com/happyfish100/libfastcommon/archive/V${LIBFASTCOMMON_VERSION}.tar.gz -o V${LIBFASTCOMMON_VERSION}.tar.gz \
 && curl -fLS https://github.com/happyfish100/fastdfs-nginx-module/archive/V${FASTDFS_NGINX_MODULE_VERSION}.tar.gz -o V${FASTDFS_NGINX_MODULE_VERSION}.tar.gz \
 && curl -fSL http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz -o nginx-${NGINX_VERSION}.tar.gz \
 && tar xf V${FASTDFS_VERSION}.tar.gz \
 && tar xf V${LIBFASTCOMMON_VERSION}.tar.gz \
 && tar xf V${FASTDFS_NGINX_MODULE_VERSION}.tar.gz \
 && tar zxf nginx-${NGINX_VERSION}.tar.gz

#build and install
RUN cd ${HOME}/libfastcommon-${LIBFASTCOMMON_VERSION}/ \
 && ./make.sh \
 && ./make.sh install \
 && cd ${HOME}/fastdfs-${FASTDFS_VERSION}/ \
 && ./make.sh \
 && ./make.sh install \
 && sed "s@/home/yuqing/fastdfs@/data/fastdfs/tracker@g" /etc/fdfs/tracker.conf.sample > /etc/fdfs/tracker.conf \
 && sed "s@/home/yuqing/fastdfs@/data/fastdfs/storage@g" /etc/fdfs/storage.conf.sample > /etc/fdfs/storage.conf \
 && sed "s@/home/yuqing/fastdfs@/data/fastdfs/storage@g" /etc/fdfs/client.conf.sample > /etc/fdfs/client.conf \
 && sed -i 's#CORE_INCS=.*#CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"#g' ${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/config \
 && sed -i 's#ngx_module_incs=.*#ngx_module_incs="/usr/include/fastdfs /usr/include/fastcommon/"#g' ${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/config \
 && chmod u+x ${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/config \
 && cd ${HOME}/nginx-${NGINX_VERSION} \
 && ./configure --add-module=${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src \
 && make && make install

#configure
RUN cp ${HOME}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/mod_fastdfs.conf /etc/fdfs/ \
 && sed -i "s#^store_path0.*#store_path0 = /data/fastdfs/storage#g" /etc/fdfs/mod_fastdfs.conf \
 && sed -i "s#^url_have_group_name.*#url_have_group_name = true#g" /etc/fdfs/mod_fastdfs.conf \
 && cd ${HOME}/fastdfs-${FASTDFS_VERSION}/conf/ \
 && cp http.conf mime.types /etc/fdfs/ \
 && echo -e "worker_processes 2;\nevents { \nworker_connections 10240; \n}\nhttp { \ninclude mime.types;\ndefault_type application/octet-stream;\nsendfile on;\nkeepalive_timeout 65;\nserver {\nlisten NGINX_PORT;\nserver_name localhost;\nlocation ~/group([0-9])/M00 {\nngx_fastdfs_module;\n}\n}\n}">/usr/local/nginx/conf/nginx.conf

#clean up
RUN rm -rf ${HOME}/* \
 && apk del .build-deps gcc libc-dev make openssl-dev linux-headers curl gnupg libxslt-dev gd-dev geoip-dev \
 && apk add bash pcre-dev zlib-dev

#generate the start script
RUN sed -i "s/NGINX_PORT/$NGINX_PORT/g" /usr/local/nginx/conf/nginx.conf \
 && echo -e "mkdir -p /data/fastdfs/storage/data\nmkdir -p /data/fastdfs/tracker\nln -s /data/fastdfs/storage/data /data/fastdfs/storage/data/M00\nHOST_IP=\$(ip addr |grep 'scope global eth0'|awk '{ print \$2}'|awk -F/ '{ print \$1 }')\nsed -i "s/^tracker_server=.*$/tracker_server=\$HOST_IP:$FASTDFS_PORT/g" /etc/fdfs/storage.conf\nsed -i "s/^tracker_server=.*$/tracker_server=\$HOST_IP:$FASTDFS_PORT/g" /etc/fdfs/mod_fastdfs.conf\n/etc/init.d/fdfs_trackerd start \n/etc/init.d/fdfs_storaged start\n/usr/local/nginx/sbin/nginx\ntail -f /usr/local/nginx/logs/access.log" >/start.sh \
 && chmod +x /start.sh

EXPOSE 80 22122 23000
ENTRYPOINT ["/bin/bash","/start.sh"]
```
#### **Build the image**
```bash
$ docker build -t fastdfs-nginx:v5.11 .
```

#### **Start the container**
```bash
$ docker run -p 80:80 -p 22122:22122 -p 23000:23000 -v /root/docker_demo/fastdfs/data:/data/fastdfs fastdfs-nginx:v5.11
```

#### **Test machine 1 (CentOS7)**
Add a route to the container subnet:
```bash
$ route add -net 10.10.0.0 netmask 255.255.255.0 gw 192.168.50.252
```
Upload an image:
```bash
$ fdfs_test /etc/fdfs/client.conf upload zzzz.jpg
```
```
group_name=group1, remote_filename=M00/00/00/CgoAAVtxAhqAWpxyAAE7WHOlIPs425.jpg
source ip address: 10.10.0.1
file timestamp=2018-08-13 11:59:22
file size=80728
file crc32=1940201723
example file url: http://10.10.0.1/group1/M00/00/00/CgoAAVtxAhqAWpxyAAE7WHOlIPs425.jpg
storage_upload_slave_by_filename
group_name=group1, remote_filename=M00/00/00/CgoAAVtxAhqAWpxyAAE7WHOlIPs425_big.jpg
source ip address: 10.10.0.1
file timestamp=2018-08-13 11:59:22
file size=80728
file crc32=1940201723
example file url: http://10.10.0.1/group1/M00/00/00/CgoAAVtxAhqAWpxyAAE7WHOlIPs425_big.jpg
```

#### **Test machine 2 (Windows 7)**
Add a route:
```cmd
> route add 10.10.0.0 mask 255.255.255.0 192.168.50.252
```
Open the image:
![](https://files.ynotes.cn/18-8-13/79351609.jpg)
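Beyond the browser check, the stored file can be pulled back with the fastdfs client tools that `make.sh install` put on the test machine; a sketch using the file ID from the upload output above:

```bash
# Download by group/remote filename and compare checksums with the original.
fdfs_download_file /etc/fdfs/client.conf \
  group1/M00/00/00/CgoAAVtxAhqAWpxyAAE7WHOlIPs425.jpg /tmp/check.jpg
md5sum zzzz.jpg /tmp/check.jpg
```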
Views 2050 · Comments 0 · Favorites 0

兜兜    2018-08-09 15:07:32    2020-03-08 18:41:51   

nginx https X-Forwarded-Proto scheme
#### **nginx + tomcat**
nginx configuration:
```bash
proxy_set_header Host $host;
proxy_set_header Cookie $http_cookie;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://127.0.0.1:8181;
```
tomcat configuration:
```xml
<!-- Gotcha: on Tomcat 7, if the nginx proxy sits on a 172.16.x.x network you must set
     internalProxies explicitly, because that range is not in the default list. See
     http://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/catalina/valves/RemoteIpValve.html -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       protocolHeader="X-Forwarded-Proto"
       internalProxies="172\.16\.\d{1,3}\.\d{1,3}"
       protocolHeaderHttpsValue="https"/>
```
More on internalProxies: http://blog.inford.net/doc/171

#### **Aliyun SLB + nginx + tomcat**
Aliyun SLB configuration:
![](https://files.ynotes.cn/18-8-9/93864542.jpg)
nginx configuration:
```bash
proxy_set_header Host $host;
proxy_set_header Cookie $http_cookie;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;    #$http_x_forwarded_proto is the header passed in by the SLB
proxy_pass http://127.0.0.1:8181;
```
tomcat configuration:
```xml
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       protocolHeader="X-Forwarded-Proto"
       protocolHeaderHttpsValue="https"/>
```

#### The configuration above handles normal requests fine, but on a 302 redirect the Location header points back at plain http and the browser complains "requested an insecure XMLHttpRequest". Fix it with proxy_redirect:

nginx configuration (nginx + tomcat):
```bash
proxy_redirect http:// $scheme://;    #rewrite http:// in 302 Location headers to the scheme of the original request
```
nginx configuration (Aliyun SLB + nginx + tomcat):
```bash
proxy_redirect http:// $http_x_forwarded_proto://;    #rewrite http:// in 302 Location headers to the scheme reported by the SLB
```
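The redirect fix is easy to verify from outside; a sketch with a hypothetical HTTPS domain and a path known to answer with a 302:

```bash
# Before the proxy_redirect fix, Location comes back as http://...;
# after it, the scheme should match the original https request.
curl -sI https://demo.example.com/competitionShare/ | grep -i '^location'
```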
Views 2672 · Comments 0 · Favorites 0

Page 7 of 11