兜兜    2021-09-03 11:23:08    2022-01-25 09:20:42   

k8s ELK
#### Workflow
```sh
1. Install the microservice with helm, adding a filebeat sidecar container
2. The microservice pod runs one business container plus the filebeat sidecar container
3. filebeat reads the business container's logs and pushes them to the elasticsearch cluster
```
##### 1. Install the microservice with helm, adding a filebeat sidecar container
deployment.yaml
```yaml
...
      containers:
        - name: {{ .Values.image2.name }}              # filebeat sidecar container
          image: "{{ .Values.image2.repository }}:{{ .Values.image2.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command:
            - "/bin/sh"
          args:
            - "-c"
            - "filebeat -c /etc/filebeat/filebeat.yml"
          volumeMounts:
            - name: app-logs
              mountPath: /log
            - name: filebeat-{{.Release.Name}}-config
              mountPath: /etc/filebeat/
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - name: app-logs
              mountPath: /serverlog
          ports:
            - name: http
              containerPort: {{ .Values.service.targetPort | default 80 }}
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: {{ .Values.service.targetPort | default 80 }}
            initialDelaySeconds: 20
            failureThreshold: 15
            timeoutSeconds: 10
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: {{ .Values.service.targetPort | default 80 }}
            initialDelaySeconds: 20
            failureThreshold: 15
            timeoutSeconds: 10
            periodSeconds: 5
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      volumes:
        - name: app-logs
          emptyDir: {}
        - name: filebeat-{{.Release.Name}}-config
          configMap:
            name: filebeat-{{.Release.Name}}-config    # filebeat configuration file
...
```
configmap.yaml
```yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-{{.Release.Name}}-config
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - "/log/*/log_info.log"        # log paths
        - "/log/*/*/log_info.log"
        - "/log/*/*/*/log_info.log"
      tags: ["{{ .Release.Name }}"]
      multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
      multiline.negate: true
      multiline.match: after
      exclude_lines: ['.*com.alibaba.nacos.naming.client.listener.*']
    output.elasticsearch:              # ship the logs to elasticsearch
      hosts: ["xxxxx.elasticsearch.com"]
      username: "elastic"
      password: "xxxxx"
      index: "{{ .Release.Name }}-%{+yyyy.MM.dd}"
    setup.ilm.enabled: false
    setup.template.name: "{{ .Release.Name }}"
    setup.template.pattern: "{{ .Release.Name }}-*"
```
values.yaml
```yaml
image2:
  name: filebeat
  repository: shudoon-k8s-registry-test-registry-vpc.cn-shenzhen.cr.aliyuncs.com/shudoon/filebeat
  pullPolicy: IfNotPresent
  imagePullPolicy: Always
  tag: "7.4.2"
```
##### 2. The microservice pod runs one business container plus the filebeat sidecar container
```sh
$ kubectl describe pod shudoon-data-service-springboot-demo-74d7d7d656-9fb7w
Name:         shudoon-data-service-springboot-demo-74d7d7d656-9fb7w
Namespace:    default
Priority:     0
Node:         k8s-master1/172.16.100.100
Start Time:   Tue, 28 Sep 2021 12:14:17 +0800
Labels:       app.kubernetes.io/instance=shudoon-data-service
              app.kubernetes.io/name=springboot-demo
              pod-template-hash=74d7d7d656
Annotations:  <none>
Status:       Running
IP:           10.244.0.136
IPs:
  IP:  10.244.0.136
Controlled By:  ReplicaSet/shudoon-data-service-springboot-demo-74d7d7d656
Init Containers:
  skywalking-agent-sidecar:
    Container ID:  docker://c4254f70dfe4d8f75b8f163ef4731c7a5a7f3e9299ccf509fb6eb4c334a762b1
    Image:         harbor.example.com/shudoon/skywalking-agent-sidecar:8.7.0-fixbug-1
    Image ID:      docker-pullable://harbor.example.com/shudoon/skywalking-agent-sidecar@sha256:b39e3d2174eac4a1e50a6d1c08c7f4e882601856c8741604e02740f95a57862d
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
    Args:
      -c
      mkdir -p /skywalking/agent && cp -r /usr/skywalking/agent/* /skywalking/agent
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 28 Sep 2021 12:14:18 +0800
      Finished:     Tue, 28 Sep 2021 12:14:18 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /skywalking/agent from sw-agent (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from shudoon-data-service-springboot-demo-token-8zfkb (ro)
Containers:
  filebeat:
    Container ID:  docker://17de1c185b0c943e883ec1225d1da78103a6456b2edbe3c2fee03661aa350b36
    Image:         harbor.example.com/shudoon/filebeat:7.4.2
    Image ID:      docker-pullable://harbor.example.com/shudoon/filebeat@sha256:d223bd603c1e2b6cfde0123d0f89a48bcd9feac29a788653f9873728d05a3b12
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
    Args:
      -c
      filebeat -c /etc/filebeat/filebeat.yml
    State:          Running
      Started:      Tue, 28 Sep 2021 12:14:19 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/filebeat/ from filebeat-shudoon-data-service-config (rw)
      /log from app-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from shudoon-data-service-springboot-demo-token-8zfkb (ro)
  springboot-demo:
    Container ID:   docker://29e9068d215e0257302f116b826c25bea69d4c0c8f2ea5bd94928d85ae4929b1
    Image:          harbor.example.com/shudoon/shudoon-data-service:master-bb01f0f-2
    Image ID:       docker-pullable://harbor.example.com/shudoon/shudoon-data-service@sha256:efb4b305cc40a3787984ef028678d6cb2ed6c15a1dae1dec410e63f9acf8213f
    Port:           18004/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 28 Sep 2021 12:16:27 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Tue, 28 Sep 2021 12:14:26 +0800
      Finished:     Tue, 28 Sep 2021 12:16:27 +0800
    Ready:          True
    Restart Count:  1
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:     100m
      memory:  256Mi
    Liveness:   http-get http://:18004/actuator/health/liveness delay=20s timeout=10s period=5s #success=1 #failure=15
    Readiness:  http-get http://:18004/actuator/health/readiness delay=20s timeout=10s period=5s #success=1 #failure=15
    Environment:
      JAVA_TOOL_OPTIONS:                    -javaagent:/usr/skywalking/agent/skywalking-agent.jar
      SW_AGENT_NAME:                        shudoon-data-service
      SW_AGENT_COLLECTOR_BACKEND_SERVICES:  skywalking-oap:11800
    Mounts:
      /serverlog from app-logs (rw)
      /usr/skywalking/agent from sw-agent (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from shudoon-data-service-springboot-demo-token-8zfkb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  app-logs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  sw-agent:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  filebeat-shudoon-data-service-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      filebeat-shudoon-data-service-config
    Optional:  false
  shudoon-data-service-springboot-demo-token-8zfkb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  shudoon-data-service-springboot-demo-token-8zfkb
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
```
##### 3. filebeat reads the business container's logs and pushes them to the elasticsearch cluster
```yaml
  filebeat.yml: |
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - "/log/*/log_info.log"        # log paths
        - "/log/*/*/log_info.log"
        - "/log/*/*/*/log_info.log"
      tags: ["{{ .Release.Name }}"]
      multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
      multiline.negate: true
      multiline.match: after
      exclude_lines: ['.*com.alibaba.nacos.naming.client.listener.*']
    output.elasticsearch:              # ship the logs to elasticsearch
      hosts: ["xxxxx.elasticsearch.com"]
      username: "elastic"
      password: "xxxxx"
      index: "{{ .Release.Name }}-%{+yyyy.MM.dd}"
    setup.ilm.enabled: false
    setup.template.name: "{{ .Release.Name }}"
    setup.template.pattern: "{{ .Release.Name }}-*"
```
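To confirm the sidecar is actually shipping logs, you can check filebeat's own output and then query Elasticsearch for the per-release daily index. A minimal sketch, reusing the placeholder host/credentials from the ConfigMap above and the demo pod/release name shown in step 2:
```bash
# Tail the filebeat sidecar's own log inside the pod.
kubectl logs shudoon-data-service-springboot-demo-74d7d7d656-9fb7w -c filebeat --tail=20

# List the indices created for this release (one per day, as set by the index option above).
curl -s -u elastic:xxxxx "http://xxxxx.elasticsearch.com/_cat/indices/shudoon-data-service-*?v"
```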

兜兜    2021-09-03 11:00:47    2022-01-25 09:30:51   

k8s kubernetes
#### Environment
`Kubernetes 1.18.8`
#### 1. Download kube-prometheus
```bash
git clone https://github.com/coreos/kube-prometheus.git
cd kube-prometheus
git checkout -b release-0.6 remotes/origin/release-0.6    # release-0.6 supports 1.18+
```
#### 2. Install kube-prometheus
##### Install the kube-prometheus resources and services
```bash
# Create the namespace and CRDs, and then wait for them to be available before creating the remaining resources
kubectl create -f manifests/setup
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
kubectl create -f manifests/
```
##### Uninstall the kube-prometheus resources and services
```bash
kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
```
#### 3. Configure access
prometheus
```bash
$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
```
http://localhost:9090

Grafana
```bash
$ kubectl --namespace monitoring port-forward svc/grafana 3000
```
http://localhost:3000 username/password: admin:admin.

AlertManager
```bash
$ kubectl --namespace monitoring port-forward svc/alertmanager-main 9093
```
http://localhost:9093
#### 4. Add DingTalk alerting
4.1 Deploy webhook-dingtalk in k8s
```bash
$ cat dingtalk.yaml
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webhook-dingtalk
  name: webhook-dingtalk
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhook-dingtalk
  template:
    metadata:
      labels:
        app: webhook-dingtalk
    spec:
      containers:
      - image: billy98/webhook-dingtalk:latest
        name: webhook-dingtalk
        args:
        - "https://oapi.dingtalk.com/robot/send?access_token=xxxxxxxxxxx"   # replace access_token with your own
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 500Mi
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
          tcpSocket:
            port: 8080
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
          httpGet:
            port: 8080
            path: /
      imagePullSecrets:
      - name: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webhook-dingtalk
  name: webhook-dingtalk
  namespace: monitoring
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: webhook-dingtalk
  type: ClusterIP
```
```bash
$ kubectl create -f dingtalk.yaml
```
4.2 Get the alertmanager configuration
```bash
$ kubectl get secret alertmanager-main -n monitoring -o yaml    # the value of alertmanager.yaml is the config file; decode it with base64 -d
apiVersion: v1
data:
  alertmanager.yaml: Imdsb2JhbCI6CiAgInJlc29sdmVfdGltZW91dCI6ICI1bSIKImluaGliaXRfcnVsZXMiOgotICJlcXVhbCI6CiAgLSAibmFtZXNwYWNlIgogIC0gImFsZXJ0bmFtZSIKICAic291cmNlX21hdGNoIjoKICAgICJzZXZlcml0eSI6ICJjcml0aWNhbCIKICAidGFyZ2V0X21hdGNoX3JlIjoKICAgICJzZXZlcml0eSI6ICJ3YXJuaW5nfGluZm8iCi0gImVxdWFsIjoKICAtICJuYW1lc3BhY2UiCiAgLSAiYWxlcnRuYW1lIgogICJzb3VyY2VfbWF0Y2giOgogICAgInNldmVyaXR5IjogIndhcm5pbmciCiAgInRhcmdldF9tYXRjaF9yZSI6CiAgICAic2V2ZXJpdHkiOiAiaW5mbyIKInJlY2VpdmVycyI6Ci0gIm5hbWUiOiAiRGVmYXVsdCIKLSAibmFtZSI6ICJXYXRjaGRvZyIKLSAibmFtZSI6ICJDcml0aWNhbCIKLSAibmFtZSI6ICJXZWJob29rIgogICJ3ZWJob29rX2NvbmZpZ3MiOgogIC0gInVybCI6ICJodHRwOi8vd2ViaG9vay1kaW5ndGFsay9kaW5ndGFsay9zZW5kLyIKICAgICJzZW5kX3Jlc29sdmVkIjogdHJ1ZQoicm91dGUiOgogICJncm91cF9ieSI6CiAgLSAibmFtZXNwYWNlIgogICJncm91cF9pbnRlcnZhbCI6ICI1bSIKICAiZ3JvdXBfd2FpdCI6ICIzMHMiCiAgInJlY2VpdmVyIjogIldlYmhvb2siCiAgInJlcGVhdF9pbnRlcnZhbCI6ICIxMmgiCiAgInJvdXRlcyI6CiAgLSAibWF0Y2giOgogICAgICAiYWxlcnRuYW1lIjogIldhdGNoZG9nIgogICAgInJlY2VpdmVyIjogIldhdGNoZG9nIgogIC0gIm1hdGNoIjoKICAgICAgInNldmVyaXR5IjogImNyaXRpY2FsIgogICAgInJlY2VpdmVyIjogIkNyaXRpY2FsIgo=
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"alertmanager.yaml":"Imdsb2JhbCI6CiAgInJlc29sdmVfdGltZW91dCI6ICI1bSIKImluaGliaXRfcnVsZXMiOgotICJlcXVhbCI6CiAgLSAibmFtZXNwYWNlIgogIC0gImFsZXJ0bmFtZSIKICAic291cmNlX21hdGNoIjoKICAgICJzZXZlcml0eSI6ICJjcml0aWNhbCIKICAidGFyZ2V0X21hdGNoX3JlIjoKICAgICJzZXZlcml0eSI6ICJ3YXJuaW5nfGluZm8iCi0gImVxdWFsIjoKICAtICJuYW1lc3BhY2UiCiAgLSAiYWxlcnRuYW1lIgogICJzb3VyY2VfbWF0Y2giOgogICAgInNldmVyaXR5IjogIndhcm5pbmciCiAgInRhcmdldF9tYXRjaF9yZSI6CiAgICAic2V2ZXJpdHkiOiAiaW5mbyIKInJlY2VpdmVycyI6Ci0gIm5hbWUiOiAiRGVmYXVsdCIKLSAibmFtZSI6ICJXYXRjaGRvZyIKLSAibmFtZSI6ICJDcml0aWNhbCIKLSAibmFtZSI6ICJXZWJob29rIgogICJ3ZWJob29rX2NvbmZpZ3MiOgogIC0gInVybCI6ICJodHRwOi8vd2ViaG9vay1kaW5ndGFsay9kaW5ndGFsay9zZW5kLyIKICAgICJzZW5kX3Jlc29sdmVkIjogdHJ1ZQoicm91dGUiOgogICJncm91cF9ieSI6CiAgLSAibmFtZXNwYWNlIgogICJncm91cF9pbnRlcnZhbCI6ICI1bSIKICAiZ3JvdXBfd2FpdCI6ICIzMHMiCiAgInJlY2VpdmVyIjogIldlYmhvb2siCiAgInJlcGVhdF9pbnRlcnZhbCI6ICIxMmgiCiAgInJvdXRlcyI6CiAgLSAibWF0Y2giOgogICAgICAiYWxlcnRuYW1lIjogIldhdGNoZG9nIgogICAgInJlY2VpdmVyIjogIldhdGNoZG9nIgogIC0gIm1hdGNoIjoKICAgICAgInNldmVyaXR5IjogImNyaXRpY2FsIgogICAgInJlY2VpdmVyIjogIkNyaXRpY2FsIgo="},"kind":"Secret","metadata":{"annotations":{},"creationTimestamp":"2021-09-02T15:58:11Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:alertmanager.yaml":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:type":{}},"manager":"kubectl","operation":"Update","time":"2021-09-02T18:21:15Z"}],"name":"alertmanager-main","namespace":"monitoring","resourceVersion":"455008700","selfLink":"/api/v1/namespaces/monitoring/secrets/alertmanager-main","uid":"a7339c20-1b65-49f7-8b2e-db0459f14155"},"type":"Opaque"}
  creationTimestamp: "2021-09-03T02:06:04Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:alertmanager.yaml: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:type: {}
    manager: kubectl
    operation: Update
    time: "2021-09-03T02:06:04Z"
  name: alertmanager-main
  namespace: monitoring
  resourceVersion: "457308386"
  selfLink: /api/v1/namespaces/monitoring/secrets/alertmanager-main
  uid: 6be76878-ac5d-4b6c-8f5a-e6928c1b4d67
type: Opaque
```
4.3 Modify the decoded configuration
```yaml
"global":
  "resolve_timeout": "5m"
"inhibit_rules":
- "equal":
  - "namespace"
  - "alertname"
  "source_match":
    "severity": "critical"
  "target_match_re":
    "severity": "warning|info"
- "equal":
  - "namespace"
  - "alertname"
  "source_match":
    "severity": "warning"
  "target_match_re":
    "severity": "info"
"receivers":
- "name": "Default"
- "name": "Watchdog"
- "name": "Critical"
- "name": "Webhook"            # the newly added webhook receiver
  "webhook_configs":
  - "url": "http://webhook-dingtalk/dingtalk/send/"    # the webhook-dingtalk service deployed above
    "send_resolved": true
"route":
  "group_by":
  - "namespace"
  "group_interval": "5m"
  "group_wait": "30s"
  "receiver": "Webhook"        # default receiver is now Webhook
  "repeat_interval": "12h"
  "routes":
  - "match":
      "alertname": "Watchdog"
    "receiver": "Watchdog"
  - "match":
      "severity": "critical"
    "receiver": "Critical"
```
4.4 Update the configuration (a scripted version of this round trip is sketched at the end of this post)
```bash
$ kubectl edit secret alertmanager-main -n monitoring    # base64-encode the updated config above and replace the old value
```
4.5 Redeploy Alertmanager
#### 5. Problems
`a. The node-exporter pod is stuck in Pending state?`
_Solution: the logs report spec.template.spec.containers[1].ports[1].name: Duplicate value: "https", a port conflict; change port 9100 to 19100_
```bash
$ sed -i s/9100/19100/g manifests/node-exporter-daemonset.yaml
$ kubectl delete -f manifests/node-exporter-daemonset.yaml
$ kubectl create -f manifests/node-exporter-daemonset.yaml
```
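As an alternative to hand-editing base64 inside `kubectl edit` in step 4.4, the decode / edit / re-apply round trip can be scripted. A minimal sketch, assuming `alertmanager.yaml` is edited between the two commands:
```bash
# Dump the current Alertmanager config from the secret to a local file.
kubectl -n monitoring get secret alertmanager-main \
  -o jsonpath='{.data.alertmanager\.yaml}' | base64 -d > alertmanager.yaml

# ... edit alertmanager.yaml as in step 4.3 ...

# Rebuild the secret from the edited file and apply it over the existing one.
kubectl -n monitoring create secret generic alertmanager-main \
  --from-file=alertmanager.yaml --dry-run=client -o yaml | kubectl apply -f -
```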

兜兜    2021-08-26 18:05:39    2021-08-26 18:09:23   

kubernetes k8s
#### 1. Find the pod's docker container ID
```bash
$ kubectl get pod -n kube-system nginx-ingress-controller-8489c5b8c4-fccs5 -o json
```
```json
...
        "containerStatuses": [
            {
                "containerID": "docker://eda6562ab8d504599016d4dba8ceea4b9b255dbf97446f044ce89ad4410ab49a",
                "image": "registry-vpc.cn-shenzhen.aliyuncs.com/acs/aliyun-ingress-controller:v0.44.0.3-8e83e7dc6-aliyun",
                "imageID": "docker-pullable://registry-vpc.cn-shenzhen.aliyuncs.com/acs/aliyun-ingress-controller@sha256:7238b6230b678b312113a891ad5f9f7bbedc7839a913eaaee0def8aa748c3313",
                "lastState": {
                    "terminated": {
                        "containerID": "docker://e9a6d774429c0385be4f3a3129860677a4380277bfe5647563ae4541335111a7",
                        "exitCode": 255,
                        "finishedAt": "2021-07-09T06:48:39Z",
                        "reason": "Error",
                        "startedAt": "2021-07-08T03:29:36Z"
                    }
                },
                "name": "nginx-ingress-controller",
                "ready": true,
                "restartCount": 1,
                "started": true,
                "state": {
                    "running": {
                        "startedAt": "2021-07-09T06:48:56Z"
                    }
                }
            }
        ],
...
```
#### 2. Find the interface index (iflink) of the pod's eth0
```bash
$ docker exec eda6562ab8d504599016d4dba8ceea4b9b255dbf97446f044ce89ad4410ab49a /bin/bash -c 'cat /sys/class/net/eth0/iflink'
```
```bash
7
```
#### 3. Find the host veth interface matching that index
```bash
$ IFLINK_INDEX=7    # IFLINK_INDEX is the value returned by the previous step
$ for i in /sys/class/net/veth*/ifindex; do grep -l ^$IFLINK_INDEX$ $i; done
```
```bash
/sys/class/net/vethfe247b7f/ifindex
```
#### 4. Capture the pod's traffic with tcpdump
```bash
$ tcpdump -i vethfe247b7f -nnn
```
```bash
listening on vethfe247b7f, link-type EN10MB (Ethernet), capture size 262144 bytes
18:04:47.362688 IP 100.127.5.192.25291 > 10.151.0.78.80: Flags [S], seq 3419414283, win 2920, options [mss 1460,sackOK,TS val 2630688633 ecr 0,nop,wscale 0], length 0
18:04:47.362704 IP 10.151.0.78.80 > 100.127.5.192.25291: Flags [S.], seq 3153672948, ack 3419414284, win 65535, options [mss 1460,sackOK,TS val 1408463572 ecr 2630688633,nop,wscale 9], length 0
18:04:47.362857 IP 100.127.5.192.25291 > 10.151.0.78.80: Flags [.], ack 1, win 2920, options [nop,nop,TS val 2630688633 ecr 1408463572], length 0
```
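The four steps above can be chained into a single snippet. A rough sketch, assuming a docker-based node and that the pod's first container is the one you want to capture:
```bash
POD=nginx-ingress-controller-8489c5b8c4-fccs5
NS=kube-system

# Container ID -> eth0 iflink inside the container -> host veth whose ifindex matches.
CID=$(kubectl -n "$NS" get pod "$POD" -o jsonpath='{.status.containerStatuses[0].containerID}' | sed 's#docker://##')
IDX=$(docker exec "$CID" cat /sys/class/net/eth0/iflink)
VETH=$(grep -l "^${IDX}$" /sys/class/net/veth*/ifindex | awk -F/ '{print $5}')

tcpdump -i "$VETH" -nnn
```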

兜兜    2021-08-26 14:51:18    2021-09-10 12:44:37   

k8s kubernetes
Taking a user request to https://gw.example.com (the gateway service) as an example, a network packet travels through the following hops:
`CLIENT -> Alibaba Cloud SLB -> K8S NODE (IPVS/kube-proxy) -> INGRESS POD (nginx controller) -> GATEWAY POD (gateway service)`
```bash
CLIENT IP:       CLIENT_IP
SLB IP:          47.107.x.x
K8S NODE IP:     172.18.238.85
INGRESS POD IP:  10.151.0.78
GATEWAY POD IP:  10.151.0.107
```
#### 1. CLIENT --> Alibaba Cloud SLB
```bash
gw.example.com resolves to 47.107.x.x (the SLB public IP);
the packet reaches the SLB (CLIENT_IP:RANDOM_PORT ----> 47.107.x.x:443)
```
#### 2. Alibaba Cloud SLB --> K8S NODE (IPVS/kube-proxy)
```bash
The SLB is configured with a backend virtual server: TCP:443 --> 172.18.238.85:30483
The packet reaches the K8S node (CLIENT_IP:RANDOM_PORT ----> 172.18.238.85:30483)
```
Capture on the K8S node:
```bash
$ tcpdump -i eth0 ip host CLIENT_IP -n
14:39:33.043508 IP CLIENT_IP.RANDOM_PORT > 172.18.238.85.30483: Flags [S], seq 1799504552, win 29200, options [mss 1460,sackOK,TS val 1092093183 ecr 0,nop,wscale 7], length 0
```
#### 3. K8S NODE (IPVS/kube-proxy) --> INGRESS POD (nginx controller)
IPVS real servers behind the NodePort:
```bash
$ ipvsadm -L -n
TCP  172.18.238.85:30483 rr
  -> 10.151.0.78:443              Masq    1      2          40
  -> 10.151.0.83:443              Masq    1      8          42
```
```bash
The packet reaches the nginx ingress pod (CLIENT_IP:RANDOM_PORT ----> 10.151.0.78:443)
```
Capture the nginx ingress pod on the K8S node ([how to capture a pod's traffic](https://ynotes.cn/blog/article_detail/260)):
```bash
$ tcpdump -i vethfe247b7f -nnn |grep "\.443"    # vethfe247b7f is the veth of the ingress controller pod
16:45:28.687578 IP CLIENT_IP.RANDOM_PORT > 10.151.0.78.443: Flags [S], seq 2547516746, win 29200, options [mss 1460,sackOK,TS val 1099648828 ecr 0,nop,wscale 7], length 0
```
#### 4. INGRESS POD (nginx controller) --> GATEWAY POD (gateway service)
```bash
$ kubectl get pods -o wide --all-namespaces|grep 10.151.0.78
kube-system   nginx-ingress-controller-8489c5b8c4-fccs5   1/1   Running   1   49d   10.151.0.78   cn-shenzhen.172.18.238.85   <none>   <none>
```
```bash
The packet reaches the gateway pod (10.151.0.78:57270 ----> 10.151.0.107:18880)
```
Capture the gateway pod on the K8S node:
```bash
$ tcpdump -i veth553c1000 -nnn port 18880
17:05:58.463497 IP 10.151.0.78.57270 > 10.151.0.107.18880: Flags [S], seq 3538162899, win 65535, options [mss 1460,sackOK,TS val 878505289 ecr 0,nop,wscale 9], length 0
```
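Besides tcpdump, the NAT decisions themselves can be read directly on the node. A small sketch, assuming `ipvsadm` and `conntrack-tools` are installed there:
```bash
# IPVS connection table: shows which ingress pod a client connection on NodePort 30483 was scheduled to.
ipvsadm -L -n -c | grep 30483

# The kernel conntrack table shows the same DNAT entry:
# CLIENT_IP:RANDOM_PORT -> 172.18.238.85:30483 rewritten to the ingress pod 10.151.0.78:443.
conntrack -L 2>/dev/null | grep 30483
```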

兜兜    2021-08-20 16:24:24    2021-08-21 15:23:08   

jenkins
#### _Introduction: the Jenkins Image Tag Parameter plugin can list a project's image tags from a Harbor registry, but Alibaba Cloud Container Registry does not support the Docker V2 API; it does, however, provide its own API._
#### _`Solution: wrap the Alibaba Cloud API with a Python Flask service (the Alibaba Cloud API authenticates with access_key/access_secret, which the REST List Parameter plugin does not currently support), and let Jenkins fetch the tag list through the REST List Parameter plugin.`_
#### 1. Wrap the Alibaba Cloud API
1.1 Install Flask and the Alibaba Cloud SDK
```bash
pip install flask
pip install aliyun-python-sdk-cr==4.1.2
```
1.2 Add tools.py (wraps the Alibaba Cloud SDK)
```python
#!/usr/bin/env python
#coding=utf-8

from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.acs_exception.exceptions import ClientException
from aliyunsdkcore.acs_exception.exceptions import ServerException
from aliyunsdkcore.auth.credentials import AccessKeyCredential
from aliyunsdkcore.auth.credentials import StsTokenCredential
from aliyunsdkcr.request.v20181201.ListRepoTagRequest import ListRepoTagRequest
from aliyunsdkcr.request.v20181201.GetRepositoryRequest import GetRepositoryRequest
import json


class ContainerImage:
    def __init__(self, access_key, access_secret, instance_id, region_id='cn-shenzhen',
                 accept_format='json', encoding='utf-8'):
        self.client = AcsClient(region_id=region_id, credential=AccessKeyCredential(access_key, access_secret))
        self.instance_id = instance_id
        self.accept_format = accept_format
        self.encoding = encoding

    def get_repo(self, space_name, repo_name):
        request = GetRepositoryRequest()
        request.set_accept_format(self.accept_format)
        request.set_InstanceId(self.instance_id)
        request.set_RepoNamespaceName(space_name)
        request.set_RepoName(repo_name)
        response = self.client.do_action_with_exception(request)
        return json.loads(str(response, encoding=self.encoding))

    def list_repo_tag(self, space_name, repo_name):
        repo_obj = self.get_repo(space_name, repo_name)
        repo_id = repo_obj['RepoId']
        request = ListRepoTagRequest()
        request.set_accept_format(self.accept_format)
        request.set_InstanceId(self.instance_id)
        request.set_RepoId(repo_id)
        response = self.client.do_action_with_exception(request)
        return json.loads(str(response, encoding=self.encoding))
```
1.3 Add the Flask app file app.py
```python
from flask import Flask
from tools import ContainerImage    # the ContainerImage class from tools.py

# configure access_key and access_secret
access_key = 'LTAI5tG3YCyHxxxxxxxxxx'
access_secret = 'oNBXXKfIxxxxxxxxxxxxxxxxx'
region_id = 'cn-shenzhen'
instance_id = 'cri-xxxxxxxxxx'

container_image = ContainerImage(access_key, access_secret, instance_id)

app = Flask(__name__)

# space_name and repo_name are taken from the URL path
@app.route('/repo/<space_name>/<repo_name>/tags')
def list_tags(space_name, repo_name):
    list_repo_tags = container_image.list_repo_tag(space_name, repo_name)
    return list_repo_tags

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
```
1.4 Start Flask
```bash
python app.py
```
1.5 Test the result
```bash
curl http://172.16.100.202:5000/repo/<space_name>/<repo_name>/tags
```
```json
{
    "Code": "success",
    "Images": [
        {
            "Digest": "16c579443109881cd3ba264913824cb074d8e977bfd89d5860aaafad0b10194f",
            "ImageCreate": 1629278747000,
            "ImageId": "f79086b9b1a4532e44b30efbf761fde76792cd61be26e9bf5f19469d1e8e358d",
            "ImageSize": 55157349,
            "ImageUpdate": 1629278747000,
            "Status": "NORMAL",
            "Tag": "master-7d9acb6-17"
        },
        {
            "Digest": "d577c281172233318ee4d9394882ae0bb6582bb01efc694654890ebf8118b0cf",
            "ImageCreate": 1629272078000,
            "ImageId": "8b52daeee868663c3d1fcd49447d17cf8bdd7f9b87ba07904e3a675e008ce90f",
            "ImageSize": 55157354,
            "ImageUpdate": 1629272078000,
            "Status": "NORMAL",
            "Tag": "master-7d9acb6-16"
        }
    ],
    "IsSuccess": true,
    "PageNo": 1,
    "PageSize": 30,
    "RequestId": "B81C478C-3607-590E-90EC-6C5120446D48",
    "TotalCount": 2
}
```
#### 2. Configure the REST List Parameter in the Jenkins pipeline
```groovy
parameters {
    RESTList(
        name: 'BUILD_IMAGE_TAG',
        description: '',
        restEndpoint: 'http://172.16.100.202:5000/repo/<space_name>/<repo_name>/tags',
        credentialId: '',
        mimeType: 'APPLICATION_JSON',
        valueExpression: '$.Images[*].Tag',
        cacheTime: 10,        // optional
        defaultValue: '',     // optional
        filter: '.*',         // optional
        valueOrder: 'ASC'     // optional
    )
}
```
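To sanity-check what the plugin's `valueExpression` will return before wiring it into a job, you can call the Flask endpoint from step 1.5 and extract the `Tag` fields yourself. A minimal sketch (the `<space_name>`/`<repo_name>` placeholders are the same ones used above):
```bash
# Mimic the plugin's '$.Images[*].Tag' extraction against the wrapper service.
curl -s "http://172.16.100.202:5000/repo/<space_name>/<repo_name>/tags" \
  | python3 -c 'import sys, json; print("\n".join(i["Tag"] for i in json.load(sys.stdin)["Images"]))'
```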

兜兜    2021-08-17 15:35:04    2022-01-25 09:31:53   

mysql

兜兜    2021-08-12 16:12:02    2021-08-12 17:59:00   

ssl https certbot
`Requirements for installing certbot via pip: python3 built with the ssl module (verify with: import ssl). If the current environment already meets this, skip straight to 3. Install certbot.`
#### 1. Install openssl
```bash
## Download the openssl source
wget https://www.openssl.org/source/openssl-1.1.1a.tar.gz
tar -xzvf openssl-1.1.1a.tar.gz    ## decompress

# Compile and install; the install path is /usr/local/openssl
cd openssl-1.1.1a
./config shared zlib --prefix=/usr/local/openssl && make && make install
./config -t
make depend

# Enter /usr/local and execute the following command
ln -s /usr/local/openssl /usr/local/ssl    ## create the link

# Append the following line to the end of /etc/ld.so.conf:
echo "/usr/local/openssl/lib" >> /etc/ld.so.conf
# Then execute:
ldconfig

# Set the environment variables for openssl by appending to the last lines of /etc/profile:
cat >> /etc/profile <<EOF
export OPENSSL=/usr/local/openssl/bin
export PATH=\$OPENSSL:\$PATH:\$HOME/bin
EOF
```
&nbsp;
#### 2. Install python3
```bash
wget https://www.python.org/ftp/python/3.9.2/Python-3.9.2.tgz    # download Python 3.9
tar zxvf Python-3.9.2.tgz                                        # decompress
cd Python-3.9.2
```
Edit the file Python-3.9.2/Modules/Setup
```
# Socket module helper for socket(2)
# (build the socket module; the source is socketmodule.c)
_socket socketmodule.c

# Socket module helper for SSL support; you must comment out the other
# socket line above, and possibly edit the SSL variable:
# (build the ssl module; the source is _ssl.c)
SSL=/usr/local/ssl
_ssl _ssl.c \
        -DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
        -L$(SSL)/lib -lssl -lcrypto
```
```bash
# --with-openssl points at the openssl installed above,
# --enable-optimizations builds an optimized interpreter,
# --with-ssl-default-suites=python uses python's default ssl cipher suites
# (the exact difference between --with-openssl and --with-ssl-default-suites is a little unclear, but both are passed here)
./configure --with-openssl=/usr/local/openssl --enable-optimizations --with-ssl-default-suites=python
make
make install
```
##### Test python's ssl module
```python
import ssl
```
&nbsp;
#### 3. Install certbot
##### Create a python virtual environment
```bash
python3 -m venv /opt/certbot/
/opt/certbot/bin/pip install --upgrade pip
```
##### Install the certbot packages
```bash
/opt/certbot/bin/pip install certbot certbot-nginx
ln -s /opt/certbot/bin/certbot /usr/bin/certbot
```
certbot can obtain a certificate in either of two ways:

`Option 1: nginx validation`
```bash
certbot certonly --nginx
```
`Option 2: webroot file validation`

Modify the nginx server block to add the validation location:
```bash
server {
    listen       443;
    server_name  ynotes.cn www.ynotes.cn;
    ...
    # webroot validation directory
    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/www/letsencrypt;
    }
}
```
Obtain the certificate via webroot:
```bash
certbot certonly --webroot --agree-tos --email sheyinsong@qq.com --webroot-path /var/www/letsencrypt --domains ynotes.cn
```
Configure the SSL certificate in nginx:
```bash
server {
    listen       443;
    server_name  ynotes.cn www.ynotes.cn;
    ssl on;
    ssl_certificate      /etc/letsencrypt/live/www.ynotes.cn/fullchain.pem;
    ssl_certificate_key  /etc/letsencrypt/live/www.ynotes.cn/privkey.pem;
    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;
    ssl_protocols        TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers          HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location ^~ / {
        root /var/www/html/v3/ynotes.cn;
    }

    error_page  500 502 503 504  /50x.html;
    error_page  404  https://www.ynotes.cn/;
    location = /50x.html {
        root  html;
    }
}
```
Add a cron job for automatic renewal:
```bash
echo "0 0,12 * * * root /opt/certbot/bin/python -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew -q" | sudo tee -a /etc/crontab > /dev/null
```
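Before relying on the cron entry, it may be worth checking that unattended renewal works and that nginx actually picks up a renewed certificate. A small sketch; the `--deploy-hook` reload is an optional addition that is not part of the original cron line:
```bash
# Simulate a renewal against the staging endpoint without touching the live certificates.
certbot renew --dry-run

# Optionally reload nginx whenever a certificate is actually renewed,
# so the new fullchain.pem/privkey.pem take effect without a manual restart.
certbot renew -q --deploy-hook "nginx -s reload"
```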

兜兜    2021-07-17 17:31:15    2021-07-17 17:31:15   

kubernetes

Page 4 of 11
 