兜兜    2021-09-03 11:23:08    2022-01-25 09:20:42   

k8s ELK
#### Workflow

```sh
1. Install the microservice via Helm with a filebeat sidecar container configured
2. The microservice pod runs one business container plus a filebeat sidecar container
3. filebeat reads the business container's logs and pushes them to the Elasticsearch cluster
```

##### 1. Install the microservice via Helm and configure the filebeat sidecar container

deployment.yaml

```yaml
...
      containers:
        - name: {{ .Values.image2.name }} # filebeat container
          image: "{{ .Values.image2.repository }}:{{ .Values.image2.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command:
            - "/bin/sh"
          args:
            - "-c"
            - "filebeat -c /etc/filebeat/filebeat.yml"
          volumeMounts:
            - name: app-logs
              mountPath: /log
            - name: filebeat-{{.Release.Name}}-config
              mountPath: /etc/filebeat/
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - name: app-logs
              mountPath: /serverlog
          ports:
            - name: http
              containerPort: {{ .Values.service.targetPort | default 80 }}
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: {{ .Values.service.targetPort | default 80 }}
            initialDelaySeconds: 20
            failureThreshold: 15
            timeoutSeconds: 10
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: {{ .Values.service.targetPort | default 80 }}
            initialDelaySeconds: 20
            failureThreshold: 15
            timeoutSeconds: 10
            periodSeconds: 5
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      volumes:
        - name: app-logs
          emptyDir: {}
        - name: filebeat-{{.Release.Name}}-config
          configMap:
            name: filebeat-{{.Release.Name}}-config # filebeat's config file
...
```

configmap.yaml

```yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-{{.Release.Name}}-config
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - "/log/*/log_info.log" # log paths
        - "/log/*/*/log_info.log"
        - "/log/*/*/*/log_info.log"
      tags: ["{{ .Release.Name }}"]
      multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
      multiline.negate: true
      multiline.match: after
      exclude_lines: ['.*com.alibaba.nacos.naming.client.listener.*']
    output.elasticsearch: # ship logs to Elasticsearch
      hosts: ["xxxxx.elasticsearch.com"]
      username: "elastic"
      password: "xxxxx"
      index: "{{ .Release.Name }}-%{+yyyy.MM.dd}"
    setup.ilm.enabled: false
    setup.template.name: "{{ .Release.Name }}"
    setup.template.pattern: "{{ .Release.Name }}-*"
```

values.yaml

```yaml
image2:
  name: filebeat
  repository: shudoon-k8s-registry-test-registry-vpc.cn-shenzhen.cr.aliyuncs.com/shudoon/filebeat
  pullPolicy: IfNotPresent
  imagePullPolicy: Always
  tag: "7.4.2"
```

##### 2. The microservice pod runs one business container plus a filebeat sidecar container

```sh
$ kubectl describe pod shudoon-data-service-springboot-demo-74d7d7d656-9fb7w
Name:         shudoon-data-service-springboot-demo-74d7d7d656-9fb7w
Namespace:    default
Priority:     0
Node:         k8s-master1/172.16.100.100
Start Time:   Tue, 28 Sep 2021 12:14:17 +0800
Labels:       app.kubernetes.io/instance=shudoon-data-service
              app.kubernetes.io/name=springboot-demo
              pod-template-hash=74d7d7d656
Annotations:  <none>
Status:       Running
IP:           10.244.0.136
IPs:
  IP:           10.244.0.136
Controlled By:  ReplicaSet/shudoon-data-service-springboot-demo-74d7d7d656
Init Containers:
  skywalking-agent-sidecar:
    Container ID:  docker://c4254f70dfe4d8f75b8f163ef4731c7a5a7f3e9299ccf509fb6eb4c334a762b1
    Image:         harbor.example.com/shudoon/skywalking-agent-sidecar:8.7.0-fixbug-1
    Image ID:      docker-pullable://harbor.example.com/shudoon/skywalking-agent-sidecar@sha256:b39e3d2174eac4a1e50a6d1c08c7f4e882601856c8741604e02740f95a57862d
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
    Args:
      -c
      mkdir -p /skywalking/agent && cp -r /usr/skywalking/agent/* /skywalking/agent
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 28 Sep 2021 12:14:18 +0800
      Finished:     Tue, 28 Sep 2021 12:14:18 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /skywalking/agent from sw-agent (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from shudoon-data-service-springboot-demo-token-8zfkb (ro)
Containers:
  filebeat:
    Container ID:  docker://17de1c185b0c943e883ec1225d1da78103a6456b2edbe3c2fee03661aa350b36
    Image:         harbor.example.com/shudoon/filebeat:7.4.2
    Image ID:      docker-pullable://harbor.example.com/shudoon/filebeat@sha256:d223bd603c1e2b6cfde0123d0f89a48bcd9feac29a788653f9873728d05a3b12
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
    Args:
      -c
      filebeat -c /etc/filebeat/filebeat.yml
    State:          Running
      Started:      Tue, 28 Sep 2021 12:14:19 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/filebeat/ from filebeat-shudoon-data-service-config (rw)
      /log from app-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from shudoon-data-service-springboot-demo-token-8zfkb (ro)
  springboot-demo:
    Container ID:   docker://29e9068d215e0257302f116b826c25bea69d4c0c8f2ea5bd94928d85ae4929b1
    Image:          harbor.example.com/shudoon/shudoon-data-service:master-bb01f0f-2
    Image ID:       docker-pullable://harbor.example.com/shudoon/shudoon-data-service@sha256:efb4b305cc40a3787984ef028678d6cb2ed6c15a1dae1dec410e63f9acf8213f
    Port:           18004/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 28 Sep 2021 12:16:27 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Tue, 28 Sep 2021 12:14:26 +0800
      Finished:     Tue, 28 Sep 2021 12:16:27 +0800
    Ready:          True
    Restart Count:  1
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:      100m
      memory:   256Mi
    Liveness:   http-get http://:18004/actuator/health/liveness delay=20s timeout=10s period=5s #success=1 #failure=15
    Readiness:  http-get http://:18004/actuator/health/readiness delay=20s timeout=10s period=5s #success=1 #failure=15
    Environment:
      JAVA_TOOL_OPTIONS:                    -javaagent:/usr/skywalking/agent/skywalking-agent.jar
      SW_AGENT_NAME:                        shudoon-data-service
      SW_AGENT_COLLECTOR_BACKEND_SERVICES:  skywalking-oap:11800
    Mounts:
      /serverlog from app-logs (rw)
      /usr/skywalking/agent from sw-agent (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from shudoon-data-service-springboot-demo-token-8zfkb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  app-logs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  sw-agent:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  filebeat-shudoon-data-service-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      filebeat-shudoon-data-service-config
    Optional:  false
  shudoon-data-service-springboot-demo-token-8zfkb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  shudoon-data-service-springboot-demo-token-8zfkb
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
```

##### 3. filebeat reads the business container's logs and pushes them to the Elasticsearch cluster

```yaml
filebeat.yml: |
  filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - "/log/*/log_info.log" # log paths
      - "/log/*/*/log_info.log"
      - "/log/*/*/*/log_info.log"
    tags: ["{{ .Release.Name }}"]
    multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    multiline.negate: true
    multiline.match: after
    exclude_lines: ['.*com.alibaba.nacos.naming.client.listener.*']
  output.elasticsearch: # ship logs to Elasticsearch
    hosts: ["xxxxx.elasticsearch.com"]
    username: "elastic"
    password: "xxxxx"
    index: "{{ .Release.Name }}-%{+yyyy.MM.dd}"
  setup.ilm.enabled: false
  setup.template.name: "{{ .Release.Name }}"
  setup.template.pattern: "{{ .Release.Name }}-*"
```
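The `multiline.pattern` in the filebeat config above decides where one log event ends and the next begins: lines that do not start with a `YYYY-MM-DD` timestamp are treated as continuations of the previous event. A quick local sanity check (plain `grep`, nothing cluster-specific; the sample log lines are made up) shows that only the timestamp-prefixed line matches, so a Java stack trace is folded into the event before it:

```shell
# Verify the multiline.pattern against sample log lines: only lines that
# match the pattern start a new event; the rest are continuations.
pattern='^[0-9]{4}-[0-9]{2}-[0-9]{2}'
printf '%s\n' \
  '2021-09-03 11:23:08 INFO  com.example.Demo - request ok' \
  'java.lang.NullPointerException: boom' \
  '    at com.example.Demo.run(Demo.java:42)' \
| grep -cE "$pattern"
# prints 1 — only the first line starts a new event
```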

兜兜    2021-09-03 11:00:47    2022-01-25 09:30:51   

k8s kubernetes
#### Environment

`Kubernetes 1.18.8`

#### 1. Download kube-prometheus

```bash
git clone https://github.com/coreos/kube-prometheus.git
cd kube-prometheus
git checkout -b release-0.6 remotes/origin/release-0.6 # release-0.6 supports 1.18+
```

#### 2. Install kube-prometheus

##### Install the kube-prometheus resources and services

```bash
# Create the namespace and CRDs, and then wait for them to be available before creating the remaining resources
kubectl create -f manifests/setup
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
kubectl create -f manifests/
```

##### Uninstall the kube-prometheus resources and services

```bash
kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
```

#### 3. Configure access

Prometheus

```bash
$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
```

http://localhost:9090

Grafana

```bash
$ kubectl --namespace monitoring port-forward svc/grafana 3000
```

http://localhost:3000 (credentials: admin:admin)

AlertManager

```bash
$ kubectl --namespace monitoring port-forward svc/alertmanager-main 9093
```

http://localhost:9093

#### 4. Add DingTalk alerting

4.1 Deploy webhook-dingtalk in k8s

```bash
$ cat dingtalk.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webhook-dingtalk
  name: webhook-dingtalk
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhook-dingtalk
  template:
    metadata:
      labels:
        app: webhook-dingtalk
    spec:
      containers:
      - image: billy98/webhook-dingtalk:latest
        imagePullPolicy: IfNotPresent
        name: webhook-dingtalk
        args:
        - "https://oapi.dingtalk.com/robot/send?access_token=xxxxxxxxxxx" # replace access_token with your own
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 500Mi
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
          tcpSocket:
            port: 8080
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
          httpGet:
            port: 8080
            path: /
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webhook-dingtalk
  name: webhook-dingtalk
  namespace: monitoring
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: webhook-dingtalk
  type: ClusterIP
```

```bash
$ kubectl create -f dingtalk.yaml
```

4.2 Get the alertmanager configuration

```bash
$ kubectl get secret alertmanager-main -n monitoring -o yaml
# the value of alertmanager.yaml below is the config; decode it with base64 -d
apiVersion: v1
data:
  alertmanager.yaml: Imdsb2JhbCI6CiAgInJlc29sdmVfdGltZW91dCI6ICI1bSIKImluaGliaXRfcnVsZXMiOgotICJlcXVhbCI6CiAgLSAibmFtZXNwYWNlIgogIC0gImFsZXJ0bmFtZSIKICAic291cmNlX21hdGNoIjoKICAgICJzZXZlcml0eSI6ICJjcml0aWNhbCIKICAidGFyZ2V0X21hdGNoX3JlIjoKICAgICJzZXZlcml0eSI6ICJ3YXJuaW5nfGluZm8iCi0gImVxdWFsIjoKICAtICJuYW1lc3BhY2UiCiAgLSAiYWxlcnRuYW1lIgogICJzb3VyY2VfbWF0Y2giOgogICAgInNldmVyaXR5IjogIndhcm5pbmciCiAgInRhcmdldF9tYXRjaF9yZSI6CiAgICAic2V2ZXJpdHkiOiAiaW5mbyIKInJlY2VpdmVycyI6Ci0gIm5hbWUiOiAiRGVmYXVsdCIKLSAibmFtZSI6ICJXYXRjaGRvZyIKLSAibmFtZSI6ICJDcml0aWNhbCIKLSAibmFtZSI6ICJXZWJob29rIgogICJ3ZWJob29rX2NvbmZpZ3MiOgogIC0gInVybCI6ICJodHRwOi8vd2ViaG9vay1kaW5ndGFsay9kaW5ndGFsay9zZW5kLyIKICAgICJzZW5kX3Jlc29sdmVkIjogdHJ1ZQoicm91dGUiOgogICJncm91cF9ieSI6CiAgLSAibmFtZXNwYWNlIgogICJncm91cF9pbnRlcnZhbCI6ICI1bSIKICAiZ3JvdXBfd2FpdCI6ICIzMHMiCiAgInJlY2VpdmVyIjogIldlYmhvb2siCiAgInJlcGVhdF9pbnRlcnZhbCI6ICIxMmgiCiAgInJvdXRlcyI6CiAgLSAibWF0Y2giOgogICAgICAiYWxlcnRuYW1lIjogIldhdGNoZG9nIgogICAgInJlY2VpdmVyIjogIldhdGNoZG9nIgogIC0gIm1hdGNoIjoKICAgICAgInNldmVyaXR5IjogImNyaXRpY2FsIgogICAgInJlY2VpdmVyIjogIkNyaXRpY2FsIgo=
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"alertmanager.yaml":"Imdsb2JhbCI6CiAgInJlc29sdmVfdGltZW91dCI6ICI1bSIKImluaGliaXRfcnVsZXMiOgotICJlcXVhbCI6CiAgLSAibmFtZXNwYWNlIgogIC0gImFsZXJ0bmFtZSIKICAic291cmNlX21hdGNoIjoKICAgICJzZXZlcml0eSI6ICJjcml0aWNhbCIKICAidGFyZ2V0X21hdGNoX3JlIjoKICAgICJzZXZlcml0eSI6ICJ3YXJuaW5nfGluZm8iCi0gImVxdWFsIjoKICAtICJuYW1lc3BhY2UiCiAgLSAiYWxlcnRuYW1lIgogICJzb3VyY2VfbWF0Y2giOgogICAgInNldmVyaXR5IjogIndhcm5pbmciCiAgInRhcmdldF9tYXRjaF9yZSI6CiAgICAic2V2ZXJpdHkiOiAiaW5mbyIKInJlY2VpdmVycyI6Ci0gIm5hbWUiOiAiRGVmYXVsdCIKLSAibmFtZSI6ICJXYXRjaGRvZyIKLSAibmFtZSI6ICJDcml0aWNhbCIKLSAibmFtZSI6ICJXZWJob29rIgogICJ3ZWJob29rX2NvbmZpZ3MiOgogIC0gInVybCI6ICJodHRwOi8vd2ViaG9vay1kaW5ndGFsay9kaW5ndGFsay9zZW5kLyIKICAgICJzZW5kX3Jlc29sdmVkIjogdHJ1ZQoicm91dGUiOgogICJncm91cF9ieSI6CiAgLSAibmFtZXNwYWNlIgogICJncm91cF9pbnRlcnZhbCI6ICI1bSIKICAiZ3JvdXBfd2FpdCI6ICIzMHMiCiAgInJlY2VpdmVyIjogIldlYmhvb2siCiAgInJlcGVhdF9pbnRlcnZhbCI6ICIxMmgiCiAgInJvdXRlcyI6CiAgLSAibWF0Y2giOgogICAgICAiYWxlcnRuYW1lIjogIldhdGNoZG9nIgogICAgInJlY2VpdmVyIjogIldhdGNoZG9nIgogIC0gIm1hdGNoIjoKICAgICAgInNldmVyaXR5IjogImNyaXRpY2FsIgogICAgInJlY2VpdmVyIjogIkNyaXRpY2FsIgo="},"kind":"Secret","metadata":{"annotations":{},"creationTimestamp":"2021-09-02T15:58:11Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:alertmanager.yaml":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:type":{}},"manager":"kubectl","operation":"Update","time":"2021-09-02T18:21:15Z"}],"name":"alertmanager-main","namespace":"monitoring","resourceVersion":"455008700","selfLink":"/api/v1/namespaces/monitoring/secrets/alertmanager-main","uid":"a7339c20-1b65-49f7-8b2e-db0459f14155"},"type":"Opaque"}
  creationTimestamp: "2021-09-03T02:06:04Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:alertmanager.yaml: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:type: {}
    manager: kubectl
    operation: Update
    time: "2021-09-03T02:06:04Z"
  name: alertmanager-main
  namespace: monitoring
  resourceVersion: "457308386"
  selfLink: /api/v1/namespaces/monitoring/secrets/alertmanager-main
  uid: 6be76878-ac5d-4b6c-8f5a-e6928c1b4d67
type: Opaque
```

4.3 Edit the decoded configuration

```yaml
"global":
  "resolve_timeout": "5m"
"inhibit_rules":
- "equal":
  - "namespace"
  - "alertname"
  "source_match":
    "severity": "critical"
  "target_match_re":
    "severity": "warning|info"
- "equal":
  - "namespace"
  - "alertname"
  "source_match":
    "severity": "warning"
  "target_match_re":
    "severity": "info"
"receivers":
- "name": "Default"
- "name": "Watchdog"
- "name": "Critical"
- "name": "Webhook" # newly added webhook receiver
  "webhook_configs":
  - "url": "http://webhook-dingtalk/dingtalk/send/" # the webhook-dingtalk service deployed above
    "send_resolved": true
"route":
  "group_by":
  - "namespace"
  "group_interval": "5m"
  "group_wait": "30s"
  "receiver": "Webhook" # default receiver is now Webhook
  "repeat_interval": "12h"
  "routes":
  - "match":
      "alertname": "Watchdog"
    "receiver": "Watchdog"
  - "match":
      "severity": "critical"
    "receiver": "Critical"
```

4.4 Update the configuration

```bash
$ kubectl edit secret alertmanager-main -n monitoring
# base64-encode the updated config above and replace the old value with it
```

4.5 Redeploy Alertmanager

#### 5. Issues

`a. node-exporter pod stuck in Pending?`

_Fix: the logs report spec.template.spec.containers[1].ports[1].name: Duplicate value: "https" — a conflict on port 9100; change it to 19100_

```bash
$ sed -i s/9100/19100/g manifests/node-exporter-daemonset.yaml
$ kubectl delete -f manifests/node-exporter-daemonset.yaml
$ kubectl create -f manifests/node-exporter-daemonset.yaml
```
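Steps 4.2–4.4 hinge on the fact that the Secret stores `alertmanager.yaml` base64-encoded, so the edit is really a decode/modify/re-encode round trip. A minimal local sketch of that cycle (file names and the tiny config fragment are illustrative stand-ins, not the full config):

```shell
# Throwaway round trip of the decode/edit/encode cycle in steps 4.2-4.4.
tmp=$(mktemp -d)
printf '"global":\n  "resolve_timeout": "5m"\n' > "$tmp/alertmanager.yaml"  # stand-in for the edited config
encoded=$(base64 < "$tmp/alertmanager.yaml" | tr -d '\n')  # value to paste back via `kubectl edit`
echo "$encoded" | base64 -d > "$tmp/decoded.yaml"          # what Alertmanager will actually load
diff "$tmp/alertmanager.yaml" "$tmp/decoded.yaml" && echo "round trip OK"
```

Stripping the newlines from the `base64` output matters: a Secret's `data` value must be a single unbroken base64 string.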

兜兜    2021-08-26 18:05:39    2021-08-26 18:09:23   

kubernetes k8s
#### 1. Find the pod's docker container ID

```bash
$ kubectl get pod -n kube-system nginx-ingress-controller-8489c5b8c4-fccs5 -o json
```

```json
...
"containerStatuses": [
    {
        "containerID": "docker://eda6562ab8d504599016d4dba8ceea4b9b255dbf97446f044ce89ad4410ab49a",
        "image": "registry-vpc.cn-shenzhen.aliyuncs.com/acs/aliyun-ingress-controller:v0.44.0.3-8e83e7dc6-aliyun",
        "imageID": "docker-pullable://registry-vpc.cn-shenzhen.aliyuncs.com/acs/aliyun-ingress-controller@sha256:7238b6230b678b312113a891ad5f9f7bbedc7839a913eaaee0def8aa748c3313",
        "lastState": {
            "terminated": {
                "containerID": "docker://e9a6d774429c0385be4f3a3129860677a4380277bfe5647563ae4541335111a7",
                "exitCode": 255,
                "finishedAt": "2021-07-09T06:48:39Z",
                "reason": "Error",
                "startedAt": "2021-07-08T03:29:36Z"
            }
        },
        "name": "nginx-ingress-controller",
        "ready": true,
        "restartCount": 1,
        "started": true,
        "state": {
            "running": {
                "startedAt": "2021-07-09T06:48:56Z"
            }
        }
    }
],
...
```

#### 2. Find the container's network interface index

```bash
$ docker exec eda6562ab8d504599016d4dba8ceea4b9b255dbf97446f044ce89ad4410ab49a /bin/bash -c 'cat /sys/class/net/eth0/iflink'
```

```bash
7
```

#### 3. Find the veth interface matching that index

```bash
$ IFLINK_INDEX=7 # the value returned by the previous step
$ for i in /sys/class/net/veth*/ifindex; do grep -l ^$IFLINK_INDEX$ $i; done
```

```bash
/sys/class/net/vethfe247b7f/ifindex
```

#### 4. tcpdump the pod's traffic

```bash
$ tcpdump -i vethfe247b7f -nnn
```

```bash
listening on vethfe247b7f, link-type EN10MB (Ethernet), capture size 262144 bytes
18:04:47.362688 IP 100.127.5.192.25291 > 10.151.0.78.80: Flags [S], seq 3419414283, win 2920, options [mss 1460,sackOK,TS val 2630688633 ecr 0,nop,wscale 0], length 0
18:04:47.362704 IP 10.151.0.78.80 > 100.127.5.192.25291: Flags [S.], seq 3153672948, ack 3419414284, win 65535, options [mss 1460,sackOK,TS val 1408463572 ecr 2630688633,nop,wscale 9], length 0
18:04:47.362857 IP 100.127.5.192.25291 > 10.151.0.78.80: Flags [.], ack 1, win 2920, options [nop,nop,TS val 2630688633 ecr 1408463572], length 0
```
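Steps 2 and 3 work because a veth pair shares index numbers: inside the container, `eth0`'s `iflink` equals the `ifindex` of the host-side veth. The lookup logic can be exercised locally against a mock `/sys/class/net` tree (directory names and index values below are invented for illustration; no containers needed):

```shell
# Simulate /sys/class/net in a throwaway directory to show the matching logic.
sys=$(mktemp -d)
mkdir -p "$sys/vethfe247b7f" "$sys/veth553c1000"
echo 7  > "$sys/vethfe247b7f/ifindex"   # host side of "our" pod's veth pair
echo 12 > "$sys/veth553c1000/ifindex"   # some other pod's veth
IFLINK_INDEX=7  # would come from `cat /sys/class/net/eth0/iflink` inside the container
for i in "$sys"/veth*/ifindex; do grep -l "^$IFLINK_INDEX$" "$i"; done
# prints .../vethfe247b7f/ifindex — the interface to tcpdump
```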

兜兜    2021-08-26 14:51:18    2021-09-10 12:44:37   

k8s kubernetes
Taking a user request to the gateway service at https://gw.example.com as an example, the packet travels through:

`CLIENT -> Alibaba Cloud SLB -> K8S NODE (IPVS/kube-proxy) -> INGRESS POD (nginx controller) -> GATEWAY POD (gateway service)`

```bash
CLIENT IP:      CLIENT_IP
SLB IP:         47.107.x.x
K8S NODE IP:    172.18.238.85
INGRESS POD IP: 10.151.0.78
GATEWAY POD IP: 10.151.0.107
```

#### 1. CLIENT --> Alibaba Cloud SLB

```bash
gw.example.com resolves to 47.107.x.x (the SLB public IP);
the packet reaches the SLB (CLIENT_IP:RANDOM_PORT ----> 47.107.x.x:443)
```

#### 2. Alibaba Cloud SLB --> K8S NODE (IPVS/kube-proxy)

```bash
The SLB's backend virtual server is TCP:443 --> 172.18.238.85:30483;
the packet reaches the K8S node (CLIENT_IP:RANDOM_PORT ----> 172.18.238.85:30483)
```

Capture on the K8S node:

```bash
$ tcpdump -i eth0 ip host CLIENT_IP -n
14:39:33.043508 IP CLIENT_IP.RANDOM_PORT > 172.18.238.85.30483: Flags [S], seq 1799504552, win 29200, options [mss 1460,sackOK,TS val 1092093183 ecr 0,nop,wscale 7], length 0
```

#### 3. K8S NODE (IPVS/kube-proxy) --> INGRESS POD (nginx controller)

IPVS backend servers:

```bash
$ ipvsadm -L -n
TCP  172.18.238.85:30483 rr
  -> 10.151.0.78:443    Masq    1   2   40
  -> 10.151.0.83:443    Masq    1   8   42
```

```bash
the packet reaches nginx ingress (CLIENT_IP:RANDOM_PORT ----> 10.151.0.78:443)
```

Capture the nginx ingress pod on the K8S node ([how to capture a pod's traffic](https://ynotes.cn/blog/article_detail/260)):

```bash
$ tcpdump -i vethfe247b7f -nnn |grep "\.443" # vethfe247b7f is the ingress controller pod's veth
16:45:28.687578 IP CLIENT_IP.RANDOM_PORT > 10.151.0.78.443: Flags [S], seq 2547516746, win 29200, options [mss 1460,sackOK,TS val 1099648828 ecr 0,nop,wscale 7], length 0
```

#### 4. INGRESS POD (nginx controller) --> GATEWAY POD (gateway service)

```bash
$ kubectl get pods -o wide --all-namespaces|grep 10.151.0.78
kube-system  nginx-ingress-controller-8489c5b8c4-fccs5  1/1  Running  1  49d  10.151.0.78  cn-shenzhen.172.18.238.85  <none>  <none>
```

```bash
the packet reaches the gateway service (10.151.0.78:57270 ----> 10.151.0.107:18880)
```

Capture the gateway pod on the K8S node:

```bash
$ tcpdump -i veth553c1000 -nnn port 18880
17:05:58.463497 IP 10.151.0.78.57270 > 10.151.0.107.18880: Flags [S], seq 3538162899, win 65535, options [mss 1460,sackOK,TS val 878505289 ecr 0,nop,wscale 9], length 0
```
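Because each hop rewrites the destination (SLB NAT, IPVS `Masq`, nginx proxying), correlating the captures means comparing the `src > dst` tuple across tcpdump outputs. A small helper for that, fed with sample lines from this walkthrough (the `awk` field positions assume the default one-line tcpdump format shown above):

```shell
# Pull "src > dst" out of tcpdump one-liners to eyeball the NAT/proxy hops.
extract() { awk '{ sub(":", "", $5); print $3, ">", $5 }'; }
echo '14:39:33.043508 IP CLIENT_IP.RANDOM_PORT > 172.18.238.85.30483: Flags [S], seq 1799504552' | extract
echo '17:05:58.463497 IP 10.151.0.78.57270 > 10.151.0.107.18880: Flags [S], seq 3538162899' | extract
# first hop:  CLIENT_IP.RANDOM_PORT > 172.18.238.85.30483
# last hop:   10.151.0.78.57270 > 10.151.0.107.18880
```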
