兜兜    2021-11-23 23:13:52    2021-11-23 23:20:51   

frp NAT traversal
Reads 15 · Comments 0 · Favorites 0

兜兜    2021-11-11 11:35:05    2021-11-12 00:29:27   

kafka
### Versions: `kafka_2.11-0.10.1.0` and `kafka_2.12-2.2.0`

### Initial setup

`kafka_2.11-0.10.1.0/kafka_2.12-2.2.0`
```sh
$ export KAFKA_HOME=/usr/local/kafka/
$ cd $KAFKA_HOME
```

### Start ZooKeeper

`kafka_2.11-0.10.1.0/kafka_2.12-2.2.0`
```sh
$ bin/zookeeper-server-start.sh config/zookeeper.properties
```

### Start Kafka

`kafka_2.11-0.10.1.0/kafka_2.12-2.2.0`
```sh
$ bin/kafka-server-start.sh config/server.properties
```

### Create a topic

`kafka_2.11-0.10.1.0`
```sh
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
```

`kafka_2.12-2.2.0`
```sh
$ bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
```

### List topics

`kafka_2.11-0.10.1.0`
```sh
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
```

`kafka_2.12-2.2.0`
```sh
$ bin/kafka-topics.sh --list --bootstrap-server localhost:9092
```

### Describe a topic

`kafka_2.11-0.10.1.0`
```sh
$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
```

`kafka_2.12-2.2.0`
```sh
$ bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic test
```

### Produce messages

`kafka_2.11-0.10.1.0/kafka_2.12-2.2.0`
```sh
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
```
Input:
```sh
This is a message
This is another message
```

### Consume messages

`kafka_2.11-0.10.1.0/kafka_2.12-2.2.0`
```sh
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
```
Output:
```sh
This is a message
This is another message
```

#### Manage consumer groups

`kafka_2.11-0.10.1.0/kafka_2.12-2.2.0`
```sh
# List consumer groups
$ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list

# Describe a consumer group's offsets
$ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group group_name
```

`kafka_2.11-0.10.1.0/kafka_2.12-2.2.0`
If you are running the old consumer, whose group metadata is stored in ZooKeeper (`offsets.storage=zookeeper`), use the following commands instead:
```sh
# List consumer groups
$ bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list

# Describe a consumer group's offsets
$ bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --describe --group group_name
```

#### Get topic offset statistics

`kafka_2.11-0.10.1.0`
```sh
$ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic test
```
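The commands above only describe consumer-group offsets; the same tool can also rewind a group that has fallen behind. Below is a minimal sketch for `kafka_2.12-2.2.0`, assuming a consumer group named `test-group` (a hypothetical name) reading the `test` topic; the group must be inactive (no running consumers) for the reset to apply.

```sh
# Latest (-1) and earliest (-2) offsets per partition of topic "test"
$ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic test --time -1
$ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic test --time -2

# Preview an offset reset for the group, then apply it with --execute
$ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --group test-group --topic test --reset-offsets --to-earliest --dry-run
$ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --group test-group --topic test --reset-offsets --to-earliest --execute
```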
Reads 30 · Comments 0 · Favorites 0

兜兜    2021-09-29 18:10:11    2021-10-27 15:15:56   

kubernetes rancher
Reads 118 · Comments 0 · Favorites 0

兜兜    2021-09-25 11:59:33    2021-10-19 14:31:03   

kubernetes ceph rook
#### Environment
```sh
kubernetes 1.18.20
rook-ceph  1.6.10
ceph version 15.2.13
```
`Note: the Kubernetes nodes must have unused disks available for the Ceph cluster, and the Kubernetes version must match the corresponding rook-ceph version.`

#### Install Rook

##### Download Rook
```sh
$ git clone --single-branch --branch v1.6.10 https://github.com/rook/rook.git   # kubernetes 1.18.20 does not support rook 1.7.x
```

##### Install the Rook operator
```sh
$ cd cluster/examples/kubernetes/ceph
$ kubectl create -f crds.yaml -f common.yaml -f operator.yaml

# verify the rook-ceph-operator is in the `Running` state before proceeding
$ kubectl -n rook-ceph get pod
```

#### Install the Ceph cluster
```sh
$ kubectl create -f cluster.yaml
```
Check the cluster status:
```sh
$ kubectl -n rook-ceph get pod
NAME                                                    READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-6tfqr                                  3/3     Running     0          6m45s
csi-cephfsplugin-nldks                                  3/3     Running     0          6m45s
csi-cephfsplugin-provisioner-6c59b5b7d9-lnddw           6/6     Running     0          6m44s
csi-cephfsplugin-provisioner-6c59b5b7d9-lvdtc           6/6     Running     0          6m44s
csi-cephfsplugin-zkt2x                                  3/3     Running     0          6m45s
csi-rbdplugin-2kwhg                                     3/3     Running     0          6m46s
csi-rbdplugin-7j9hw                                     3/3     Running     0          6m46s
csi-rbdplugin-provisioner-6d455d5677-6qcn5              6/6     Running     0          6m46s
csi-rbdplugin-provisioner-6d455d5677-8xp6r              6/6     Running     0          6m46s
csi-rbdplugin-qxv4m                                     3/3     Running     0          6m46s
rook-ceph-crashcollector-k8s-master1-7bf874dc98-v9bj9   1/1     Running     0          2m8s
rook-ceph-crashcollector-k8s-master2-6698989df9-7hsfr   1/1     Running     0          2m16s
rook-ceph-crashcollector-k8s-node1-8578585676-9tf5w     1/1     Running     0          2m16s
rook-ceph-mgr-a-5c4759947f-47kbk                        1/1     Running     0          2m19s
rook-ceph-mon-a-647877db88-629bt                        1/1     Running     0          6m56s
rook-ceph-mon-b-c44f7978f-p2fq6                         1/1     Running     0          5m11s
rook-ceph-mon-c-588b48b74c-bbkn8                        1/1     Running     0          3m42s
rook-ceph-operator-64845bd768-55dkc                     1/1     Running     0          13m
rook-ceph-osd-0-64f9fc6c65-m7877                        1/1     Running     0          2m8s
rook-ceph-osd-1-584bf986c7-xj75p                        1/1     Running     0          2m8s
rook-ceph-osd-2-d59cdbd7f-xcwgf                         1/1     Running     0          2m8s
rook-ceph-osd-prepare-k8s-master1-vstkq                 0/1     Completed   0          2m16s
rook-ceph-osd-prepare-k8s-master2-zdjk2                 0/1     Completed   0          2m16s
rook-ceph-osd-prepare-k8s-node1-wmzj9                   0/1     Completed   0          2m16s
```

#### Tear down the Ceph cluster

https://github.com/rook/rook/blob/master/Documentation/ceph-teardown.md
```sh
$ kubectl delete -f cluster.yaml
```
Clean up the rook data directory:
```sh
$ rm /var/lib/rook/* -rf
```

#### Install the Rook toolbox
```sh
$ kubectl create -f toolbox.yaml
$ kubectl -n rook-ceph rollout status deploy/rook-ceph-tools
$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
```

#### Install the dashboard
```sh
$ kubectl -n rook-ceph get service
```
Get the login password:
```sh
$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
```

#### Configure external access
```sh
$ kubectl create -f dashboard-external-https.yaml
```
```sh
$ kubectl -n rook-ceph get service
NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
rook-ceph-mgr                            ClusterIP   10.101.238.49    <none>        9283/TCP         47m
rook-ceph-mgr-dashboard                  ClusterIP   10.107.124.177   <none>        8443/TCP         47m
rook-ceph-mgr-dashboard-external-https   NodePort    10.102.110.16    <none>        8443:30870/TCP   14s
```
Visit: https://172.16.100.100:30870/

#### RBD service through Rook

Create the pool and StorageClass:
```sh
$ kubectl create -f cluster/examples/kubernetes/ceph/csi/rbd/storageclass.yaml
```
```sh
$ kubectl apply -f rook/cluster/examples/kubernetes/mysql.yaml
$ kubectl apply -f rook/cluster/examples/kubernetes/wordpress.yaml
```
```sh
$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
cephfs-test-pvc-1   Bound    pvc-571ae252-b080-4a67-8f3d-40bda1304fe3   500Mi      RWX            cephfs                2d23h
mysql-pv-claim      Bound    pvc-6f6d70ba-6961-42b5-aa9d-db13c1aeec7c   20Gi       RWO            rook-ceph-block       15m
test-claim          Bound    pvc-c51eef14-1454-4ddf-99e1-db483dec42c6   1Mi        RWX            managed-nfs-storage   3d5h
wp-pv-claim         Bound    pvc-d5eb4142-532f-4704-9d15-94ffc7f35dd1   20Gi       RWO            rook-ceph-block       13m
```

Check the CSI provisioner logs:
```sh
$ kubectl logs csi-rbdplugin-provisioner-85c58fcfb4-nfnvd -n rook-ceph -c csi-provisioner
I0925 07:13:27.761137 1 csi-provisioner.go:138] Version: v2.2.2
I0925 07:13:27.761189 1 csi-provisioner.go:161] Building kube configs for running in cluster...
W0925 07:13:37.769467 1 connection.go:172] Still connecting to unix:///csi/csi-provisioner.sock
W0925 07:13:47.769515 1 connection.go:172] Still connecting to unix:///csi/csi-provisioner.sock
W0925 07:13:57.769149 1 connection.go:172] Still connecting to unix:///csi/csi-provisioner.sock
W0925 07:14:07.769026 1 connection.go:172] Still connecting to unix:///csi/csi-provisioner.sock
W0925 07:14:17.768966 1 connection.go:172] Still connecting to unix:///csi/csi-provisioner.sock
W0925 07:14:27.768706 1 connection.go:172] Still connecting to unix:///csi/csi-provisioner.sock
W0925 07:14:37.769526 1 connection.go:172] Still connecting to unix:///csi/csi-provisioner.sock
W0925 07:14:47.769001 1 connection.go:172] Still connecting to unix:///csi/csi-provisioner.sock
W0925 07:14:57.769711 1 connection.go:172] Still connecting to unix:///csi/csi-provisioner.sock
I0925 07:15:01.978682 1 common.go:111] Probing CSI driver for readiness
I0925 07:15:01.980980 1 csi-provisioner.go:284] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
I0925 07:15:01.982474 1 leaderelection.go:243] attempting to acquire leader lease rook-ceph/rook-ceph-rbd-csi-ceph-com...
I0925 07:15:02.001351 1 leaderelection.go:253] successfully acquired lease rook-ceph/rook-ceph-rbd-csi-ceph-com
I0925 07:15:02.102184 1 controller.go:835] Starting provisioner controller rook-ceph.rbd.csi.ceph.com_csi-rbdplugin-provisioner-85c58fcfb4-nfnvd_c5346baf-02b2-4983-9405-45bbc31c216a!
I0925 07:15:02.102243 1 volume_store.go:97] Starting save volume queue
I0925 07:15:02.102244 1 clone_controller.go:66] Starting CloningProtection controller
I0925 07:15:02.102299 1 clone_controller.go:84] Started CloningProtection controller
I0925 07:15:02.202672 1 controller.go:884] Started provisioner controller rook-ceph.rbd.csi.ceph.com_csi-rbdplugin-provisioner-85c58fcfb4-nfnvd_c5346baf-02b2-4983-9405-45bbc31c216a!
I0925 07:46:47.261147 1 controller.go:1332] provision "default/mysql-pv-claim" class "rook-ceph-block": started
I0925 07:46:47.261389 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mysql-pv-claim", UID:"6f6d70ba-6961-42b5-aa9d-db13c1aeec7c", APIVersion:"v1", ResourceVersion:"2798560", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/mysql-pv-claim"
I0925 07:46:49.310382 1 controller.go:1439] provision "default/mysql-pv-claim" class "rook-ceph-block": volume "pvc-6f6d70ba-6961-42b5-aa9d-db13c1aeec7c" provisioned
I0925 07:46:49.310419 1 controller.go:1456] provision "default/mysql-pv-claim" class "rook-ceph-block": succeeded
I0925 07:46:49.319422 1 controller.go:1332] provision "default/mysql-pv-claim" class "rook-ceph-block": started
I0925 07:46:49.319446 1 controller.go:1341] provision "default/mysql-pv-claim" class "rook-ceph-block": persistentvolume "pvc-6f6d70ba-6961-42b5-aa9d-db13c1aeec7c" already exists, skipping
I0925 07:46:49.319615 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mysql-pv-claim", UID:"6f6d70ba-6961-42b5-aa9d-db13c1aeec7c", APIVersion:"v1", ResourceVersion:"2798560", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-6f6d70ba-6961-42b5-aa9d-db13c1aeec7c
I0925 07:48:52.230183 1 controller.go:1332] provision "default/wp-pv-claim" class "rook-ceph-block": started
I0925 07:48:52.230656 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"wp-pv-claim", UID:"d5eb4142-532f-4704-9d15-94ffc7f35dd1", APIVersion:"v1", ResourceVersion:"2799324", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/wp-pv-claim"
I0925 07:48:52.289747 1 controller.go:1439] provision "default/wp-pv-claim" class "rook-ceph-block": volume "pvc-d5eb4142-532f-4704-9d15-94ffc7f35dd1" provisioned
I0925 07:48:52.289790 1 controller.go:1456] provision "default/wp-pv-claim" class "rook-ceph-block": succeeded
I0925 07:48:52.295778 1 controller.go:1332] provision "default/wp-pv-claim" class "rook-ceph-block": started
I0925 07:48:52.295798 1 controller.go:1341] provision "default/wp-pv-claim" class "rook-ceph-block": persistentvolume "pvc-d5eb4142-532f-4704-9d15-94ffc7f35dd1" already exists, skipping
I0925 07:48:52.295844 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"wp-pv-claim", UID:"d5eb4142-532f-4704-9d15-94ffc7f35dd1", APIVersion:"v1", ResourceVersion:"2799324", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-d5eb4142-532f-4704-9d15-94ffc7f35dd1
```

#### CephFS service through Rook
```sh
$ kubectl create -f filesystem.yaml
```
```sh
$ kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                    READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-a-6dd59747f5-2qjtk   1/1     Running   0          38s
rook-ceph-mds-myfs-b-56488764f9-n6fzf   1/1     Running   0          37s
```
```tex
$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
[root@rook-ceph-tools-5b85cb4766-8wj5c /]# ceph status
  cluster:
    id:     1c545199-7a68-46df-9b21-2a801c1ad1af
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 50m)
    mgr: a(active, since 70m)
    mds: myfs:1 {0=myfs-a=up:active} 1 up:standby-replay   # MDS metadata service
    osd: 3 osds: 3 up (since 71m), 3 in (since 71m)

  data:
    pools:   4 pools, 97 pgs
    objects: 132 objects, 340 MiB
    usage:   4.0 GiB used, 296 GiB / 300 GiB avail
    pgs:     97 active+clean

  io:
    client:   1.2 KiB/s rd, 2 op/s rd, 0 op/s wr
```

Create the StorageClass:
```sh
$ cat >storageclass-cephfs.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  # clusterID is the namespace where operator is deployed.
  clusterID: rook-ceph

  # CephFS filesystem name into which the volume shall be created
  fsName: myfs

  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: myfs-data0

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph

reclaimPolicy: Delete
EOF
```
```sh
$ kubectl create -f storageclass-cephfs.yaml
```

Test CephFS:
```sh
$ cat >test-cephfs-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
EOF
```
```sh
$ kubectl apply -f test-cephfs-pvc.yaml
```
```sh
$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-4e200767-9ed9-4b47-8505-3081b8de6893   1Gi        RWX            rook-cephfs    8s
```

#### Issues

`1. [WRN] MON_CLOCK_SKEW: clock skew detected on mon.b?`
```tex
$ kubectl -n rook-ceph edit ConfigMap rook-config-override -o yaml
  config: |
    [global]
    mon clock drift allowed = 0.5

$ kubectl -n rook-ceph delete pod $(kubectl -n rook-ceph get pods -o custom-columns=NAME:.metadata.name --no-headers | grep mon)

$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
[root@rook-ceph-tools-75cf595688-hrtmv /]# ceph -s
  cluster:
    id:     023baa6d-8ec1-4ada-bd17-219136f656b4
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 2m)
    mgr: a(active, since 24m)
    osd: 3 osds: 3 up (since 25m), 3 in (since 25m)

  data:
    pools:   1 pools, 128 pgs
    objects: 0 objects, 0 B
    usage:   19 MiB used, 300 GiB / 300 GiB avail
    pgs:     128 active+clean
```

`2. type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-ceph-block": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-34167041-eb16-4f03-b01e-6b915089488d already exists`

Delete the pods found by the following:
```sh
$ kubectl get po -n rook-ceph | grep csi-cephfsplugin-provisioner-
$ kubectl get po -n rook-ceph | grep csi-rbdplugin-provisioner-
$ kubectl get po -n rook-ceph | grep csi-rbdplugin-
$ kubectl get po -n rook-ceph | grep csi-cephfsplugin-
```

`3. A PVC stays in Pending, and the csi-rbdplugin-provisioner log shows: provision volume with StorageClass "rook-ceph-block": rpc error: code = DeadlineExceeded desc = context deadline exceeded`
```sh
This is a Kubernetes / rook-ceph version mismatch. On kubernetes 1.18.20, installing rook-ceph 1.7.x triggers this error; switching to v1.6.10 resolves it.
```
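To confirm the `rook-ceph-block` StorageClass provisions volumes on its own, independent of the mysql/wordpress examples, a throwaway PVC can be created and inspected. A minimal sketch in the same style as the manifests above; the PVC name `rbd-test-pvc` is just an illustrative choice.

```sh
$ cat > rbd-test-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
EOF
$ kubectl apply -f rbd-test-pvc.yaml
$ kubectl get pvc rbd-test-pvc        # should become Bound within a few seconds
$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph df   # pool usage seen from the toolbox
$ kubectl delete -f rbd-test-pvc.yaml # clean up
```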
Reads 62 · Comments 0 · Favorites 0

兜兜    2021-09-22 10:20:24    2021-10-27 15:14:23   

kubernetes k8s nginx ingress
#### Environment
```sh
k8s version:   1.18.20
ingress-nginx: 3.4.0
```

#### Install ingress-nginx

##### Download the Helm chart
```sh
$ wget https://github.com/kubernetes/ingress-nginx/releases/download/ingress-nginx-3.4.0/ingress-nginx-3.4.0.tgz
$ tar xvf ingress-nginx-3.4.0.tgz
$ cd ingress-nginx
```

#### Configure the chart values
```sh
$ cat > values.yaml <<EOF
controller:
  image:
    repository: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller  # switch to the Aliyun mirror
    tag: "v0.40.1"
    digest: sha256:abffcf2d25e3e7c7b67a315a7c664ec79a1588c9c945d3c7a75637c2f55caec6
    pullPolicy: IfNotPresent
    runAsUser: 101
    allowPrivilegeEscalation: true
  containerPort:
    http: 80
    https: 443
  config: {}
  configAnnotations: {}
  proxySetHeaders: {}
  addHeaders: {}
  dnsConfig: {}
  dnsPolicy: ClusterFirst
  reportNodeInternalIp: false
  hostNetwork: true        # use the host network
  hostPort:
    enabled: true          # expose host ports
    ports:
      http: 80
      https: 443
  electionID: ingress-controller-leader
  ingressClass: nginx
  podLabels: {}
  podSecurityContext: {}
  sysctls: {}
  publishService:
    enabled: true
    pathOverride: ""
  scope:
    enabled: false
  tcp:
    annotations: {}
  udp:
    annotations: {}
  extraArgs: {}
  extraEnvs: []
  kind: DaemonSet          # run as a DaemonSet
  annotations: {}
  labels: {}
  updateStrategy: {}
  minReadySeconds: 0
  tolerations: []
  affinity: {}
  topologySpreadConstraints: []
  terminationGracePeriodSeconds: 300
  nodeSelector:
    ingress: nginx         # schedule only onto nodes labeled ingress=nginx
  livenessProbe:
    failureThreshold: 5
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
    port: 10254
  readinessProbe:
    failureThreshold: 3
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
    port: 10254
  healthCheckPath: "/healthz"
  podAnnotations: {}
  #replicaCount: 1         # disabled (running as a DaemonSet)
  minAvailable: 1
  resources:
    requests:
      cpu: 100m
      memory: 90Mi
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 11
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  autoscalingTemplate: []
  enableMimalloc: true
  customTemplate:
    configMapName: ""
    configMapKey: ""
  service:
    enabled: true
    annotations: {}
    labels: {}
    externalIPs: []
    loadBalancerSourceRanges: []
    enableHttp: true
    enableHttps: true
    ports:
      http: 80
      https: 443
    targetPorts:
      http: http
      https: https
    type: LoadBalancer
    nodePorts:
      http: ""
      https: ""
      tcp: {}
      udp: {}
    internal:
      enabled: false
      annotations: {}
  extraContainers: []
  extraVolumeMounts: []
  extraVolumes: []
  extraInitContainers: []
  admissionWebhooks:
    enabled: false         # disabled
    failurePolicy: Fail
    port: 8443
    service:
      annotations: {}
      externalIPs: []
      loadBalancerSourceRanges: []
      servicePort: 443
      type: ClusterIP
    patch:
      enabled: true
      image:
        repository: docker.io/jettech/kube-webhook-certgen
        tag: v1.3.0
        pullPolicy: IfNotPresent
      priorityClassName: ""
      podAnnotations: {}
      nodeSelector: {}
      tolerations: []
      runAsUser: 2000
  metrics:
    port: 10254
    enabled: false
    service:
      annotations: {}
      externalIPs: []
      loadBalancerSourceRanges: []
      servicePort: 9913
      type: ClusterIP
    serviceMonitor:
      enabled: false
      additionalLabels: {}
      namespace: ""
      namespaceSelector: {}
      scrapeInterval: 30s
      targetLabels: []
      metricRelabelings: []
    prometheusRule:
      enabled: false
      additionalLabels: {}
      rules: []
  lifecycle:
    preStop:
      exec:
        command:
          - /wait-shutdown
  priorityClassName: ""
revisionHistoryLimit: 10
maxmindLicenseKey: ""
defaultBackend:
  enabled: false
  image:
    repository: k8s.gcr.io/defaultbackend-amd64
    tag: "1.5"
    pullPolicy: IfNotPresent
    runAsUser: 65534
  extraArgs: {}
  serviceAccount:
    create: true
    name:
  extraEnvs: []
  port: 8080
  livenessProbe:
    failureThreshold: 3
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  readinessProbe:
    failureThreshold: 6
    initialDelaySeconds: 0
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5
  tolerations: []
  affinity: {}
  podSecurityContext: {}
  podLabels: {}
  nodeSelector: {}
  podAnnotations: {}
  replicaCount: 1
  minAvailable: 1
  resources: {}
  service:
    annotations: {}
    externalIPs: []
    loadBalancerSourceRanges: []
    servicePort: 80
    type: ClusterIP
  priorityClassName: ""
rbac:
  create: true
  scope: false
podSecurityPolicy:
  enabled: false
serviceAccount:
  create: true
  name:
imagePullSecrets: []
tcp: {}
udp: {}
EOF
```

#### Create the namespace
```sh
$ kubectl create namespace ingress-nginx
```

#### Label the nodes
```sh
$ kubectl label nodes k8s-master1 ingress=nginx
$ kubectl label nodes k8s-master2 ingress=nginx
$ kubectl label nodes k8s-node1 ingress=nginx
```

#### Install nginx-ingress
```sh
$ helm -n ingress-nginx upgrade -i ingress-nginx .
```

#### Uninstall ingress-nginx
```sh
$ helm -n ingress-nginx uninstall ingress-nginx
```

#### Test nginx-ingress

#### Deploy a test nginx service
```sh
$ cat > nginx-deployment.yml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF
```

Configure the Ingress object. Create the TLS secret:
```sh
$ kubectl create secret tls shudoon-com-tls --cert=5024509__example.com.pem --key=5024509__example.com.key
```

#### Create the Ingress rule
```sh
$ cat >tnginx-ingress.yaml <<EOF
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: tnginx-ingress
spec:
  rules:
  - host: tnginx.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - hosts:
    - tnginx.example.com
    secretName: shudoon-com-tls
EOF
```

Test: https://tnginx.example.com/

#### Issues

`Problem: creating a custom Ingress fails with: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io"`

List the validating webhook configurations:
```sh
$ kubectl get validatingwebhookconfigurations
```
Delete the offending one:
```sh
$ kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
```
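Because the controller runs with hostNetwork on the nodes labeled `ingress=nginx`, the Ingress can be smoke-tested before public DNS exists by pinning the hostname to a node IP. A minimal sketch: the node IP `172.16.100.100` is an assumption based on addresses used elsewhere in these notes, and the DaemonSet name `ingress-nginx-controller` follows the chart's default naming for a release called `ingress-nginx`.

```sh
# Hit an ingress node directly, sending the expected Host header (plain HTTP and TLS)
$ curl -I -H "Host: tnginx.example.com" http://172.16.100.100/
$ curl -kI --resolve tnginx.example.com:443:172.16.100.100 https://tnginx.example.com/

# Confirm the controller picked up the Ingress and the TLS secret
$ kubectl describe ingress tnginx-ingress
$ kubectl -n ingress-nginx logs ds/ingress-nginx-controller --tail=20
```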
Reads 46 · Comments 0 · Favorites 0

兜兜    2021-09-22 10:18:21    2021-09-24 09:41:49   

k8s cephfs
#### Environment
```sh
ceph cluster nodes: 172.16.100.1:6789,172.16.100.2:6789,172.16.100.11:6789
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)
ceph client: 15.2.14 (pod image: "elementalnet/cephfs-provisioner:0.8.0")
```
`Note: the official quay.io/external_storage/cephfs-provisioner:latest image currently ships a 13.x Ceph client, which does not match the cluster's Ceph 15 and leaves PVs stuck in Pending. The third-party image elementalnet/cephfs-provisioner:0.8.0 is used instead.`

#### Install the Ceph client

Install the client on the k8s nodes:
```sh
$ yum install ceph-common -y
```

Copy the key from a Ceph node to the k8s nodes:
```sh
$ scp /etc/ceph/ceph.client.admin.keyring 172.16.100.100:/etc/ceph
```

Create the secret used to access Ceph:
```sh
$ ceph auth get-key client.admin | base64
QVFEa0RFTmhYQ1UzQUJBQXFmSWptMFJkSVpGaC9VR0V4M0RNc3c9PQ==
```
```sh
$ cat >cephfs-secret.yaml<<EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: cephfs
type: "kubernetes.io/rbd"
data:
  key: QVFEa0RFTmhYQ1UzQUJBQXFmSWptMFJkSVpGaC9VR0V4M0RNc3c9PQ==   # replace with the output above
EOF
```
```sh
$ kubectl create -f cephfs-secret.yaml
```

#### Install the cephfs provisioner

Reference: https://github.com/kubernetes-retired/external-storage/tree/master/ceph/cephfs/deploy

Create the RBAC objects:
```sh
$ cat >cephfs-rbac.yaml <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns", "coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
```
```sh
$ kubectl create -f cephfs-rbac.yaml
```

Create the cephfs-provisioner Deployment:
```sh
$ cat > cephfs-provisioner.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        #image: "quay.io/external_storage/cephfs-provisioner:latest"  # Ceph client too old for a 15.2 cluster; use the image below instead
        image: "elementalnet/cephfs-provisioner:0.8.0"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        - name: PROVISIONER_SECRET_NAMESPACE
          value: cephfs
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner
EOF
```
```sh
$ kubectl create -f cephfs-provisioner.yaml
```

#### Create the StorageClass
```sh
$ cat > cephfs-storageclass.yaml <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
  namespace: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.16.100.1:6789,172.16.100.2:6789,172.16.100.11:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
  claimRoot: /pvc-volumes
EOF
```
```sh
$ kubectl create -f cephfs-storageclass.yaml
```

#### Create a test PVC
```sh
$ cat > cephfs-test-pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-test-pvc-1
  annotations:
    volume.beta.kubernetes.io/storage-class: "cephfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
EOF
```
```sh
$ kubectl create -f cephfs-test-pvc.yaml
```

Get the PVC:
```sh
$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-test-pvc-1   Bound    pvc-571ae252-b080-4a67-8f3d-40bda1304fe3   500Mi      RWX            cephfs         2m1s
```

Get the PV:
```sh
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
pvc-571ae252-b080-4a67-8f3d-40bda1304fe3   500Mi      RWX            Delete           Bound    default/cephfs-test-pvc-1   cephfs                  2m3s
```

#### Create a Deployment that mounts the PVC
```sh
$ cat > cephfs-test-busybox-deployment.yml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-test-deploy-busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cephfs-test-busybox
  template:
    metadata:
      labels:
        app: cephfs-test-busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "60000"]
        volumeMounts:
        - mountPath: "/mnt/cephfs"
          name: cephfs-test-pvc
      volumes:
      - name: cephfs-test-pvc
        persistentVolumeClaim:
          claimName: cephfs-test-pvc-1
EOF
```
```sh
$ kubectl create -f cephfs-test-busybox-deployment.yml
```
```sh
$ kubectl get pods
NAME                                          READY   STATUS    RESTARTS   AGE
cephfs-test-deploy-busybox-56556d86ff-4dmzn   1/1     Running   0          4m28s
cephfs-test-deploy-busybox-56556d86ff-6dr6v   1/1     Running   0          4m28s
cephfs-test-deploy-busybox-56556d86ff-b75mw   1/1     Running   0          4m28s
```

Check whether writes on the mounted CephFS volume are visible across pods:
```sh
$ kubectl exec -ti cephfs-test-deploy-busybox-56556d86ff-4dmzn sh
/ # cd /mnt/cephfs/
/mnt/cephfs # ls
/mnt/cephfs # touch cephfs-test-deploy-busybox-56556d86ff-4dmzn
/mnt/cephfs # exit
```
```sh
$ kubectl exec -ti cephfs-test-deploy-busybox-56556d86ff-6dr6v sh
/ # cd /mnt/cephfs/
/mnt/cephfs # ls
cephfs-test-deploy-busybox-56556d86ff-4dmzn    # the file written in the other pod is visible here
```
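If the test PVC stays in Pending, the first places to look are the PVC events and the provisioner's own logs; the version-mismatch note at the top of this post typically shows up there. A quick sketch, using the Deployment, namespace, and secret names from the manifests above (the `ceph auth` command runs on a node that holds the admin keyring):

```sh
# Events usually show why provisioning is stuck (auth failure, unreachable monitors, version mismatch, ...)
$ kubectl describe pvc cephfs-test-pvc-1

# Logs of the external provisioner created above
$ kubectl -n cephfs logs deploy/cephfs-provisioner --tail=50

# Verify the admin key stored in the secret matches what the Ceph cluster reports
$ kubectl -n cephfs get secret ceph-secret-admin -o jsonpath='{.data.key}' | base64 -d; echo
$ ceph auth get-key client.admin
```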
Reads 94 · Comments 0 · Favorites 0

兜兜    2021-09-17 14:20:53    2021-10-19 14:31:14   

cilium
Introduction
```sh
These notes use cilium (eBPF) + kube-router (BGP routing). etcd is an external cluster and the K8S ApiServer is highly available; adjust to your own environment.
Pros and cons of this mode: best performance, but pods cannot reach non-BGP nodes. It is still in Beta and not recommended for production.
```

#### Install Cilium

Create the secret Cilium uses to access etcd:
```sh
$ cd /opt/etcd/ssl                     # enter the etcd certificate directory
$ kubectl create secret generic -n kube-system cilium-etcd-secrets \
    --from-file=etcd-client-ca.crt=ca.pem \
    --from-file=etcd-client.key=server-key.pem \
    --from-file=etcd-client.crt=server.pem
```

Install Cilium with Helm:
```sh
$ helm repo add cilium https://helm.cilium.io/
$ export REPLACE_WITH_API_SERVER_IP=172.16.100.111
$ export REPLACE_WITH_API_SERVER_PORT=16443
$ helm install cilium cilium/cilium \
  --version 1.9.10 \                                  # cilium 1.9.10
  --namespace kube-system \
  --set ipam.mode=kubernetes \                        # let kubernetes manage PodCIDR IPAM
  --set tunnel=disabled \                             # disable tunneling, use native routing
  --set nativeRoutingCIDR=172.16.100.0/24 \           # local network
  --set kubeProxyReplacement=strict \                 # strict mode: fully replace kube-proxy
  --set k8sServiceHost=$REPLACE_WITH_API_SERVER_IP \  # k8s ApiServer IP
  --set k8sServicePort=$REPLACE_WITH_API_SERVER_PORT \  # k8s ApiServer port
  --set etcd.enabled=true \                           # use an external etcd
  --set etcd.ssl=true \                               # etcd SSL certificates
  --set "etcd.endpoints[0]=https://172.16.100.100:2379" \   # etcd cluster nodes
  --set "etcd.endpoints[1]=https://172.16.100.101:2379" \
  --set "etcd.endpoints[2]=https://172.16.100.102:2379"
```

Test the Cilium network:
```sh
$ wget https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml
$ sed -i 's/google.com/baidu.com/g' connectivity-check.yaml   # use baidu.com for the external connectivity test
$ kubectl apply -f connectivity-check.yaml
```

Check the pod status:
```sh
$ kubectl get pods
NAME                                                    READY   STATUS    RESTARTS   AGE
echo-a-dc9bcfd8f-hgc64                                  1/1     Running   0          9m59s
echo-b-5884b7dc69-bl5px                                 1/1     Running   0          9m59s
echo-b-host-cfdd57978-dg6gw                             1/1     Running   0          9m59s
host-to-b-multi-node-clusterip-c4ff7ff64-m9zwz          1/1     Running   0          9m58s
host-to-b-multi-node-headless-84d8f6f4c4-8b797          1/1     Running   1          9m57s
pod-to-a-5cdfd4754d-jgmnt                               1/1     Running   0          9m59s
pod-to-a-allowed-cnp-7d7c8f9f9b-f9lpc                   1/1     Running   0          9m58s
pod-to-a-denied-cnp-75cb89dfd-jsjd4                     1/1     Running   0          9m59s
pod-to-b-intra-node-nodeport-c6d79965d-w98jx            1/1     Running   1          9m57s
pod-to-b-multi-node-clusterip-cd4d764b6-gjvc5           1/1     Running   0          9m58s
pod-to-b-multi-node-headless-6696c5f8cd-fvcsl           1/1     Running   1          9m58s
pod-to-b-multi-node-nodeport-6cc4974fc4-6lmns           1/1     Running   0          9m57s
pod-to-external-1111-d5c7bb4c4-sflfc                    1/1     Running   0          9m59s
pod-to-external-fqdn-allow-google-cnp-dcb4d867d-dxqzx   1/1     Running   0          7m16s
```

Remove kube-proxy:
```sh
$ kubectl -n kube-system delete ds kube-proxy      # delete the kube-proxy DaemonSet
$ kubectl -n kube-system delete cm kube-proxy
$ iptables-restore <(iptables-save | grep -v KUBE)
```

#### Install kube-router (`BGP routing only`)
```sh
$ curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/v0.4.0/daemonset/generic-kuberouter-only-advertise-routes.yaml
$ vim generic-kuberouter-only-advertise-routes.yaml   # edit the manifest, enabling only the router function
...
        - "--run-router=true"
        - "--run-firewall=false"
        - "--run-service-proxy=false"
        - "--enable-cni=false"
        - "--enable-pod-egress=false"
        - "--enable-ibgp=true"
        - "--enable-overlay=true"
        # - "--peer-router-ips=<CHANGE ME>"
        # - "--peer-router-asns=<CHANGE ME>"
        # - "--cluster-asn=<CHANGE ME>"
        - "--advertise-cluster-ip=true"        # advertise cluster IPs
        - "--advertise-external-ip=true"       # advertise service external IPs (takes effect when the svc has an external-ip)
        - "--advertise-loadbalancer-ip=true"
...
```

Deploy kube-router:
```sh
$ kubectl apply -f generic-kuberouter-only-advertise-routes.yaml
```

Check the deployed pods:
```sh
$ kubectl get pods -n kube-system | grep kube-router
kube-router-dz58s   1/1   Running   0   2d18h
kube-router-vdwqg   1/1   Running   0   2d18h
kube-router-wrc4v   1/1   Running   0   2d18h
```

Check the routes:
```sh
$ ip route
default via 172.16.100.254 dev ens192 proto static metric 100
10.0.1.0/24 via 10.0.1.104 dev cilium_host src 10.0.1.104
10.0.1.104 dev cilium_host scope link
10.244.0.0/24 via 10.244.0.81 dev cilium_host src 10.244.0.81
10.244.0.81 dev cilium_host scope link
10.244.1.0/24 via 172.16.100.101 dev ens192 proto 17   # BGP route
10.244.2.0/24 via 172.16.100.102 dev ens192 proto 17   # BGP route
172.16.100.0/24 dev ens192 proto kernel scope link src 172.16.100.100 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
```
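Beyond the connectivity-check pods, the `cilium` CLI inside the agent pods gives a quick view of agent health and of the eBPF service translations that replace kube-proxy. A minimal sketch, assuming the DaemonSet keeps its default name `cilium` in kube-system:

```sh
# Agent health, KubeProxyReplacement mode and datapath status
$ kubectl -n kube-system exec ds/cilium -- cilium status

# Service translations handled by the eBPF load balancer (what kube-proxy used to do)
$ kubectl -n kube-system exec ds/cilium -- cilium service list

# Endpoints (pods) managed by this agent
$ kubectl -n kube-system exec ds/cilium -- cilium endpoint list
```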
Reads 151 · Comments 0 · Favorites 0

兜兜    2021-09-16 00:45:44    2021-10-27 15:15:39   

kubernetes nfs
```sh
Kubernetes provides a mechanism for creating PVs automatically: Dynamic Provisioning. At its core is the StorageClass API object.
A StorageClass defines two things:
1. The attributes of the PV, e.g. storage type and volume size.
2. The storage plugin needed to create that kind of PV.
With these two pieces of information, Kubernetes can match a user-submitted PVC to a StorageClass, call the storage plugin that StorageClass declares, and create the required PV.
In practice it is straightforward: write the YAML for your needs and apply it with kubectl create.
```

Setting up StorageClass + NFS takes roughly the following steps:
```sh
1. Create a working NFS server
2. Create the ServiceAccount that governs the permissions the NFS provisioner runs with in the cluster
3. Create the StorageClass, which serves PVCs and calls the NFS provisioner to do the agreed work, binding PVs to PVCs
4. Create the NFS provisioner, which creates mount points (volumes) under the NFS export and creates PVs bound to those mount points
```

#### Create the StorageClass

1. Create the NFS share

NFS server and export path used in this environment:
```sh
IP:   172.16.100.100
PATH: /data/k8s
```

2. Create the ServiceAccount and RBAC
```sh
$ cat > rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
```
```sh
$ kubectl apply -f rbac.yaml
```

3. Create the NFS provisioner
```sh
$ cat > nfs-provisioner.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default            # keep consistent with the namespace in the RBAC manifests
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage-provisioner
            - name: NFS_SERVER
              value: 172.16.100.100
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.100.100
            path: /data/k8s
EOF
```
```sh
$ kubectl apply -f nfs-provisioner.yaml
```

4. Create the StorageClass
```sh
$ cat >nfs-storageclass.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage-provisioner
EOF
```
```sh
$ kubectl apply -f nfs-storageclass.yaml
```

5. Test by creating a PVC
```sh
$ cat > test-pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"   # use the StorageClass created above
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
```
```sh
$ kubectl apply -f test-pvc.yaml
```

Check the PV:
```sh
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            27m
```

Check the PVC:
```sh
$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            managed-nfs-storage   27m
```

Create nginx-statefulset.yaml:
```sh
$ cat >nginx-statefulset.yaml <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: ikubernetes/myapp:v1
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 20Mi
EOF
```
```sh
$ kubectl apply -f nginx-statefulset.yaml
```

Check the pods:
```sh
$ kubectl get pods -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          21m
web-1   1/1     Running   0          21m
```

Check the PVs:
```sh
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-2c23ca4d-7c98-4021-8099-c208b418aa5b   20Mi       RWO            Delete           Bound    default/www-web-0    managed-nfs-storage            22m
pvc-4d8da9d3-cb51-49e6-bd0e-c5bad7878551   20Mi       RWO            Delete           Bound    default/www-web-1    managed-nfs-storage            21m
pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            32m
```

Check the PVCs:
```sh
$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-66967ce3-5adf-46cf-9fa3-8379f499d254   1Mi        RWX            managed-nfs-storage   32m
www-web-0    Bound    pvc-2c23ca4d-7c98-4021-8099-c208b418aa5b   20Mi       RWO            managed-nfs-storage   22m
www-web-1    Bound    pvc-4d8da9d3-cb51-49e6-bd0e-c5bad7878551   20Mi       RWO            managed-nfs-storage   21m
```

Reference: https://www.cnblogs.com/panwenbin-logs/p/12196286.html
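One way to see that the provisioner really maps each PV to a directory on the NFS export is to write a file through one of the StatefulSet pods and then look under `/data/k8s` on the NFS server. A minimal sketch; the `<namespace>-<pvcName>-<pvName>` directory naming is the nfs-client-provisioner's default layout, so the exact names on your server may differ.

```sh
# Write a test page through web-0's mounted volume
$ kubectl exec web-0 -- sh -c 'echo "hello from web-0" > /usr/share/nginx/html/index.html'

# On the NFS server (172.16.100.100): one subdirectory per provisioned PV under the export
$ ls /data/k8s/
$ cat /data/k8s/default-www-web-0-pvc-*/index.html
```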
Reads 117 · Comments 0 · Favorites 0

Page 1 of 10