maya-apiserver hitting panic / runtime error when POSTing volume #3064

Closed
stuartpb opened this issue Jun 15, 2020 · 16 comments
Assignees: kmova
Labels: Community (Community Reported Issue)

@stuartpb

Description

I just installed OpenEBS, and I've been tracking down a bug in my cluster where pods aren't getting scheduled because their volumes aren't getting provisioned. The trail goes cold at the logs for openebs-apiserver:

+ MAYA_API_SERVER_NETWORK=eth0
+ awk '{print $2}'
+ cut -d / -f 1
+ grep inet
+ ip -4 addr show scope global dev eth0
+ CONTAINER_IP_ADDR=10.32.0.176
+ exec /usr/local/bin/maya-apiserver start '--bind=10.32.0.176'
I0615 22:25:04.789032       1 start.go:148] Initializing maya-apiserver...
I0615 22:25:05.322026       1 start.go:279] Starting maya api server ...
I0615 22:25:25.902220       1 start.go:288] resources applied successfully by installer
I0615 22:25:26.105930       1 start.go:193] Maya api server configuration:
I0615 22:25:26.105988       1 start.go:195]          Log Level: INFO
I0615 22:25:26.106010       1 start.go:195]             Region: global (DC: dc1)
I0615 22:25:26.106059       1 start.go:195]            Version: 1.11.0-released
I0615 22:25:26.106078       1 start.go:201] 
I0615 22:25:26.106095       1 start.go:204] Maya api server started! Log data will stream in below:
I0615 22:25:26.380333       1 runner.go:37] Starting SPC controller
I0615 22:25:26.380376       1 runner.go:40] Waiting for informer caches to sync
I0615 22:25:26.480978       1 runner.go:45] Checking for preupgrade tasks
I0615 22:25:26.801570       1 runner.go:51] Starting SPC workers
I0615 22:25:26.801612       1 runner.go:58] Started SPC workers
I0615 22:25:45.855069       1 volume_endpoint_v1alpha1.go:78] received cas volume request: http method {GET}
I0615 22:25:45.855268       1 volume_endpoint_v1alpha1.go:168] received volume read request: pvc-46cf6742-d3ee-481c-ae06-1a1cf3890666
W0615 22:25:46.750212       1 task.go:433] notfound error at runtask {readlistsvc}: error {target service not found}
W0615 22:25:46.751233       1 runner.go:166] nothing to rollback: no rollback tasks were found
2020/06/15 22:25:46.751335 [ERR] http: Request GET /latest/volumes/pvc-46cf6742-d3ee-481c-ae06-1a1cf3890666
failed to read volume: volume {pvc-46cf6742-d3ee-481c-ae06-1a1cf3890666} not found in namespace {prometheus}
I0615 22:25:46.755719       1 volume_endpoint_v1alpha1.go:78] received cas volume request: http method {POST}
I0615 22:25:46.755743       1 volume_endpoint_v1alpha1.go:132] received volume create request
W0615 22:25:53.263339       1 runner.go:166] nothing to rollback: no rollback tasks were found
I0615 22:25:53.264231       1 logs.go:45] http: panic serving 10.32.0.174:48246: runtime error: invalid memory address or nil pointer dereference
goroutine 494 [running]:
net/http.(*conn).serve.func1(0xc000516320)
	/home/travis/.gimme/versions/go1.14.4.linux.amd64/src/net/http/server.go:1772 +0x139
panic(0x176b560, 0x292bb30)
	/home/travis/.gimme/versions/go1.14.4.linux.amd64/src/runtime/panic.go:975 +0x3e3
github.com/openebs/maya/cmd/maya-apiserver/app/server.(*HTTPServer).volumeV1alpha1SpecificRequest(0xc0002ba8c0, 0x1ca9f00, 0xc00042f420, 0xc0005b2300, 0x10, 0x16e78a0, 0x1, 0xc000564a30)
	/home/travis/gopath/src/github.com/openebs/maya/cmd/maya-apiserver/app/server/volume_endpoint_v1alpha1.go:88 +0x296
github.com/openebs/maya/cmd/maya-apiserver/app/server.(*HTTPServer).wrap.func1(0x1ca9f00, 0xc00042f420, 0xc0005b2300)
	/home/travis/gopath/src/github.com/openebs/maya/cmd/maya-apiserver/app/server/http.go:426 +0x286
net/http.HandlerFunc.ServeHTTP(0xc000464cc0, 0x1ca9f00, 0xc00042f420, 0xc0005b2300)
	/home/travis/.gimme/versions/go1.14.4.linux.amd64/src/net/http/server.go:2012 +0x44
net/http.(*ServeMux).ServeHTTP(0xc0002ba880, 0x1ca9f00, 0xc00042f420, 0xc0005b2300)
	/home/travis/.gimme/versions/go1.14.4.linux.amd64/src/net/http/server.go:2387 +0x1a5
net/http.serverHandler.ServeHTTP(0xc0003dc1c0, 0x1ca9f00, 0xc00042f420, 0xc0005b2300)
	/home/travis/.gimme/versions/go1.14.4.linux.amd64/src/net/http/server.go:2807 +0xa3
net/http.(*conn).serve(0xc000516320, 0x1cae180, 0xc000081480)
	/home/travis/.gimme/versions/go1.14.4.linux.amd64/src/net/http/server.go:1895 +0x86c
created by net/http.(*Server).Serve
	/home/travis/.gimme/versions/go1.14.4.linux.amd64/src/net/http/server.go:2933 +0x35c
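
For reference, the failing read request can be replayed by hand; this is only a sketch assuming in-cluster access, with the openebs-apiservice ClusterIP and port taken from the service listing below:

# replay the GET that returned "volume not found" in the log above
curl -v http://10.100.119.41:5656/latest/volumes/pvc-46cf6742-d3ee-481c-ae06-1a1cf3890666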

Your Environment

  • kubectl get nodes:
    NAME      STATUS   ROLES    AGE     VERSION
    studtop   Ready    master   3d19h   v1.18.3
    
  • kubectl get pods --all-namespaces:
    household-system       household-dns-announcer-external-dns-68bf958cff-z9cf9         1/1     Running     0          12h
    household-system       household-dns-coredns-74c644766f-m9vqj                        1/1     Running     0          8h
    household-system       household-dns-etcd-0                                          1/1     Running     0          12h
    kube-system            coredns-5d9c4cdbb-cn79q                                       1/1     Running     0          3d12h
    kube-system            coredns-5d9c4cdbb-sw2hv                                       1/1     Running     0          3d12h
    kube-system            etcd-studtop                                                  1/1     Running     0          3d19h
    kube-system            kube-apiserver-studtop                                        1/1     Running     2          3d19h
    kube-system            kube-controller-manager-studtop                               1/1     Running     19         3d19h
    kube-system            kube-proxy-sbfnj                                              1/1     Running     0          3d19h
    kube-system            kube-scheduler-studtop                                        1/1     Running     18         3d19h
    kube-system            weave-net-r84s2                                               2/2     Running     1          3d18h
    kubeapps               apprepo-kubeapps-sync-bitnami-1592241000-p6p9b                0/1     Completed   0          5h29m
    kubeapps               apprepo-kubeapps-sync-bitnami-1592242200-bw5v8                0/1     Completed   0          5h10m
    kubeapps               apprepo-kubeapps-sync-bitnami-1592253000-4rxcc                0/1     Completed   0          131m
    kubeapps               apprepo-kubeapps-sync-bitnami-4zpqz-k58lh                     0/1     Completed   2          2d19h
    kubeapps               apprepo-kubeapps-sync-bitnami-bjbg7-qmfsb                     0/1     Completed   0          6h20m
    kubeapps               apprepo-kubeapps-sync-bitnami-dxn8p-g2s8w                     0/1     Completed   0          5h45m
    kubeapps               apprepo-kubeapps-sync-bitnami-jm6dh-9z45r                     0/1     Completed   0          7h31m
    kubeapps               apprepo-kubeapps-sync-bitnami-kxjqm-dpm25                     0/1     Completed   0          3d12h
    kubeapps               apprepo-kubeapps-sync-bitnami-nhwf7-glgxj                     0/1     Completed   0          11h
    kubeapps               apprepo-kubeapps-sync-bitnami-wt8bm-pcqvn                     0/1     Completed   0          3d17h
    kubeapps               apprepo-kubeapps-sync-bitnami-xl9bm-5cjls                     0/1     Completed   1          19h
    kubeapps               apprepo-kubeapps-sync-incubator-1592238600-2dv2p              0/1     Completed   0          6h11m
    kubeapps               apprepo-kubeapps-sync-incubator-1592239200-kcz5z              0/1     Completed   0          6h1m
    kubeapps               apprepo-kubeapps-sync-incubator-1592241000-nk6vb              0/1     Completed   0          5h29m
    kubeapps               apprepo-kubeapps-sync-incubator-4bkvp-fwvbn                   0/1     Completed   0          5h45m
    kubeapps               apprepo-kubeapps-sync-incubator-58gqn-5fzcf                   0/1     Completed   1          19h
    kubeapps               apprepo-kubeapps-sync-incubator-5bfv7-88txs                   0/1     Completed   2          2d19h
    kubeapps               apprepo-kubeapps-sync-incubator-mqbq2-czst5                   0/1     Completed   0          11h
    kubeapps               apprepo-kubeapps-sync-incubator-nxlld-7gxgd                   0/1     Completed   0          3d17h
    kubeapps               apprepo-kubeapps-sync-incubator-r6sn5-nfrcb                   0/1     Completed   0          7h53m
    kubeapps               apprepo-kubeapps-sync-incubator-whcmh-4vkf7                   0/1     Completed   0          3d12h
    kubeapps               apprepo-kubeapps-sync-incubator-zm2rf-9lnjs                   0/1     Completed   0          6h22m
    kubeapps               apprepo-kubeapps-sync-kubernetes-dashboard-1592240400-jftpg   0/1     Completed   0          5h37m
    kubeapps               apprepo-kubeapps-sync-kubernetes-dashboard-1592241000-5lrrt   0/1     Completed   0          5h29m
    kubeapps               apprepo-kubeapps-sync-kubernetes-dashboard-1592246580-z8jls   0/1     Completed   0          3h58m
    kubeapps               apprepo-kubeapps-sync-kubernetes-dashboard-2rvl8-kqtk5        0/1     Completed   3          2d19h
    kubeapps               apprepo-kubeapps-sync-kubernetes-dashboard-6n46k-bljsr        0/1     Completed   0          6h20m
    kubeapps               apprepo-kubeapps-sync-kubernetes-dashboard-7697g-th5sq        0/1     Completed   0          11h
    kubeapps               apprepo-kubeapps-sync-kubernetes-dashboard-797fw-z6cpz        0/1     Completed   0          5h45m
    kubeapps               apprepo-kubeapps-sync-kubernetes-dashboard-c4rrg-jd5nf        0/1     Completed   0          3d12h
    kubeapps               apprepo-kubeapps-sync-kubernetes-dashboard-hbpx5-wlr9p        0/1     Completed   0          7h53m
    kubeapps               apprepo-kubeapps-sync-kubernetes-dashboard-kf4jp-cbs8l        0/1     Completed   0          3d15h
    kubeapps               apprepo-kubeapps-sync-kubernetes-dashboard-qdqn5-v2n4k        0/1     Completed   0          19h
    kubeapps               apprepo-kubeapps-sync-stable-1592241000-p54lt                 0/1     Completed   0          5h17m
    kubeapps               apprepo-kubeapps-sync-stable-1592244000-ks65d                 0/1     Completed   0          4h41m
    kubeapps               apprepo-kubeapps-sync-stable-1592254800-qf7qs                 0/1     Completed   0          101m
    kubeapps               apprepo-kubeapps-sync-stable-6t6sn-fsxv9                      0/1     Completed   0          7h53m
    kubeapps               apprepo-kubeapps-sync-stable-c5jd6-9lxpj                      0/1     Completed   2          2d19h
    kubeapps               apprepo-kubeapps-sync-stable-cq4zz-lzkmz                      0/1     Completed   0          6h17m
    kubeapps               apprepo-kubeapps-sync-stable-ft6pg-lr7kf                      0/1     Completed   0          3d17h
    kubeapps               apprepo-kubeapps-sync-stable-jfsdr-7g778                      0/1     Completed   1          19h
    kubeapps               apprepo-kubeapps-sync-stable-lvrmt-tn92k                      0/1     Completed   0          11h
    kubeapps               apprepo-kubeapps-sync-stable-pnxdk-gsf7w                      0/1     Completed   0          5h45m
    kubeapps               apprepo-kubeapps-sync-stable-wr78x-7cvxl                      0/1     Completed   0          3d12h
    kubeapps               apprepo-kubeapps-sync-svc-cat-1592240400-89mlg                0/1     Completed   0          5h37m
    kubeapps               apprepo-kubeapps-sync-svc-cat-1592241000-vxpr8                0/1     Completed   0          5h17m
    kubeapps               apprepo-kubeapps-sync-svc-cat-1592243700-mswp4                0/1     Completed   0          4h46m
    kubeapps               apprepo-kubeapps-sync-svc-cat-8fzw8-xjmqb                     0/1     Completed   1          11h
    kubeapps               apprepo-kubeapps-sync-svc-cat-8n4rq-57gbx                     0/1     Completed   0          7h53m
    kubeapps               apprepo-kubeapps-sync-svc-cat-8rqcs-gf9lc                     0/1     Completed   2          2d19h
    kubeapps               apprepo-kubeapps-sync-svc-cat-8s2tv-www45                     0/1     Completed   1          19h
    kubeapps               apprepo-kubeapps-sync-svc-cat-hqlzl-h5lvt                     0/1     Completed   0          3d17h
    kubeapps               apprepo-kubeapps-sync-svc-cat-jk9qf-jk8vk                     0/1     Completed   0          3d12h
    kubeapps               apprepo-kubeapps-sync-svc-cat-sbksw-jcmtw                     0/1     Completed   0          5h45m
    kubeapps               apprepo-kubeapps-sync-svc-cat-tclpt-w9sjz                     0/1     Completed   0          6h22m
    kubeapps               apprepo-openebs-sync-openebs-1592240400-sc8w5                 0/1     Completed   0          5h37m
    kubeapps               apprepo-openebs-sync-openebs-1592241000-fdwjl                 0/1     Completed   0          5h17m
    kubeapps               apprepo-openebs-sync-openebs-1592249820-w4bmr                 0/1     Completed   0          3h4m
    kubeapps               apprepo-openebs-sync-openebs-b4htk-tq52b                      0/1     Completed   0          6h22m
    kubeapps               apprepo-openebs-sync-openebs-dj7g6-k9552                      0/1     Completed   0          5h45m
    kubeapps               apprepo-openebs-sync-openebs-dvkdl-ck4cx                      0/1     Completed   0          11h
    kubeapps               apprepo-openebs-sync-openebs-qfwrf-hkmvp                      0/1     Completed   0          7h53m
    kubeapps               kubeapps-fd5d64c59-9wpm9                                      1/1     Running     0          3d17h
    kubeapps               kubeapps-fd5d64c59-bbklr                                      1/1     Running     0          3d17h
    kubeapps               kubeapps-internal-apprepository-controller-7df45dbc68-29vdf   1/1     Running     0          3d17h
    kubeapps               kubeapps-internal-assetsvc-684f9c8574-6zlzj                   1/1     Running     0          3d17h
    kubeapps               kubeapps-internal-assetsvc-684f9c8574-ndlz4                   1/1     Running     0          3d17h
    kubeapps               kubeapps-internal-dashboard-5b77dcfd5f-b6kd6                  1/1     Running     0          3d17h
    kubeapps               kubeapps-internal-dashboard-5b77dcfd5f-c7jwx                  1/1     Running     0          3d17h
    kubeapps               kubeapps-internal-kubeops-d956f46bf-26rhm                     1/1     Running     0          3d17h
    kubeapps               kubeapps-internal-kubeops-d956f46bf-4djdq                     1/1     Running     0          3d17h
    kubeapps               kubeapps-postgresql-master-0                                  1/1     Running     0          8h
    kubeapps               kubeapps-postgresql-slave-0                                   1/1     Running     8          8h
    kubernetes-dashboard   kubernetes-dashboard-67985f44b9-7hfn8                         1/1     Running     3          3d10h
    metallb-system         metallb-controller-65f446bf5-m829v                            1/1     Running     1          3d13h
    metallb-system         metallb-speaker-6m5v4                                         1/1     Running     1          3d13h
    openebs                cstor-bulk-pool-18z9-788849df97-rwtdm                         3/3     Running     0          10h
    openebs                cstor-work-pool-tda0-5596cd6498-7zrk8                         3/3     Running     0          10h
    openebs                openebs-admission-server-65579ddcfd-n2f7t                     1/1     Running     1          11h
    openebs                openebs-apiserver-f7cb6bc49-wlcb7                             1/1     Running     8          7h15m
    openebs                openebs-localpv-provisioner-5847dfb954-vv5hd                  1/1     Running     11         11h
    openebs                openebs-ndm-operator-7fdcb6b7f4-tpcjh                         1/1     Running     0          7h10m
    openebs                openebs-ndm-zpxlw                                             1/1     Running     1          11h
    openebs                openebs-provisioner-7bc5bbdc79-m2h5t                          1/1     Running     12         7h16m
    openebs                openebs-snapshot-operator-84bb75f6b6-zvwcg                    2/2     Running     11         11h
    prometheus             alertmanager-prometheus-operator-alertmanager-0               2/2     Running     0          28m
    prometheus             prometheus-operator-grafana-c496bf9cc-drddf                   2/2     Running     0          36m
    prometheus             prometheus-operator-kube-state-metrics-689db7b754-9x4pv       1/1     Running     0          36m
    prometheus             prometheus-operator-operator-67ff5bbbf5-h4zb5                 2/2     Running     0          36m
    prometheus             prometheus-operator-prometheus-node-exporter-rrt6p            1/1     Running     0          36m
    prometheus             prometheus-prometheus-operator-prometheus-0                   0/3     Pending     0          28m
    
  • kubectl get services -A:
    NAMESPACE              NAME                                           TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                        AGE
    default                kubernetes                                     ClusterIP      10.96.0.1        <none>          443/TCP                        3d19h
    household-system       household-dns-announcer-external-dns           ClusterIP      10.99.168.231    <none>          7979/TCP                       12h
    household-system       household-dns-coredns-metrics                  ClusterIP      10.106.193.141   <none>          9153/TCP                       8h
    household-system       household-dns-coredns-tcp                      LoadBalancer   10.105.45.19     192.168.42.53   53:32219/TCP                   9h
    household-system       household-dns-coredns-udp                      LoadBalancer   10.102.229.218   192.168.42.53   53:30548/UDP                   9h
    household-system       household-dns-etcd                             ClusterIP      10.100.230.245   <none>          2379/TCP,2380/TCP              13h
    household-system       household-dns-etcd-headless                    ClusterIP      None             <none>          2379/TCP,2380/TCP              13h
    household-system       household-dns-metrics                          ClusterIP      None             <none>          9153/TCP                       10h
    kube-system            kube-dns                                       ClusterIP      10.96.0.10       <none>          53/UDP,53/TCP,9153/TCP         3d19h
    kube-system            prometheus-operator-coredns                    ClusterIP      None             <none>          9153/TCP                       46m
    kube-system            prometheus-operator-kube-controller-manager    ClusterIP      None             <none>          10252/TCP                      46m
    kube-system            prometheus-operator-kube-etcd                  ClusterIP      None             <none>          2379/TCP                       46m
    kube-system            prometheus-operator-kube-proxy                 ClusterIP      None             <none>          10249/TCP                      46m
    kube-system            prometheus-operator-kube-scheduler             ClusterIP      None             <none>          10251/TCP                      46m
    kube-system            prometheus-operator-kubelet                    ClusterIP      None             <none>          10250/TCP,10255/TCP,4194/TCP   6h57m
    kube-system            promop-prometheus-operator-kubelet             ClusterIP      None             <none>          10250/TCP,10255/TCP,4194/TCP   9h
    kube-system            yellow-earth-prometheus-op-kubelet             ClusterIP      None             <none>          10250/TCP,10255/TCP,4194/TCP   5h26m
    kubeapps               kubeapps                                       LoadBalancer   10.107.118.138   192.168.42.75   80:32014/TCP                   3d17h
    kubeapps               kubeapps-internal-assetsvc                     ClusterIP      10.103.173.187   <none>          8080/TCP                       3d17h
    kubeapps               kubeapps-internal-dashboard                    ClusterIP      10.105.201.21    <none>          8080/TCP                       3d17h
    kubeapps               kubeapps-internal-kubeops                      ClusterIP      10.111.48.201    <none>          8080/TCP                       3d17h
    kubeapps               kubeapps-postgresql                            ClusterIP      10.96.22.198     <none>          5432/TCP                       3d17h
    kubeapps               kubeapps-postgresql-headless                   ClusterIP      None             <none>          5432/TCP                       3d17h
    kubeapps               kubeapps-postgresql-read                       ClusterIP      10.105.183.221   <none>          5432/TCP                       3d17h
    kubernetes-dashboard   kubernetes-dashboard                           LoadBalancer   10.107.158.84    192.168.42.64   443:31665/TCP                  3d12h
    openebs                admission-server-svc                           ClusterIP      10.109.165.23    <none>          443/TCP                        11h
    openebs                openebs-apiservice                             ClusterIP      10.100.119.41    <none>          5656/TCP                       11h
    prometheus             alertmanager-operated                          ClusterIP      None             <none>          9093/TCP,9094/TCP,9094/UDP     45m
    prometheus             prometheus-operated                            ClusterIP      None             <none>          9090/TCP                       45m
    prometheus             prometheus-operator-alertmanager               ClusterIP      10.99.213.79     <none>          9093/TCP                       46m
    prometheus             prometheus-operator-grafana                    LoadBalancer   10.102.194.27    192.168.32.1    80:32566/TCP                   46m
    prometheus             prometheus-operator-kube-state-metrics         ClusterIP      10.108.5.155     <none>          8080/TCP                       46m
    prometheus             prometheus-operator-operator                   ClusterIP      10.107.13.192    <none>          8080/TCP,443/TCP               46m
    prometheus             prometheus-operator-prometheus                 LoadBalancer   10.107.21.54     192.168.32.0    9090:31293/TCP                 46m
    prometheus             prometheus-operator-prometheus-node-exporter   ClusterIP      10.110.28.146    <none>          9100/TCP                       46m
    
  • kubectl get sc:
     NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
     openebs-bulk                openebs.io/provisioner-iscsi                               Delete          Immediate           false                  10h
     openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate           false                  9h
     openebs-work                openebs.io/provisioner-iscsi                               Delete          Immediate             false                  10h
    
  • kubectl get pv -A: No resources found
  • kubectl get pvc -A:
  NAMESPACE    NAME                                                                                       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
  prometheus   prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0   Pending                                      openebs-bulk   36m
  • OS (from /etc/os-release): openSUSE MicroOS (Kubic)
  • Kernel (from uname -a): Linux studtop 5.7.1-1-default #1 SMP Wed Jun 10 11:53:46 UTC 2020 (6a549f6) x86_64 x86_64 x86_64 GNU/Linux

@stuartpb (Author)

It might be worth noting that I'd been running OpenEBS before this without having enabled iscsid on the host (it is enabled now, while this error happens); this could have something to do with artifacts created under that condition.
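
Enabling the initiator on a systemd host is roughly the following (a sketch, in case it helps anyone reproducing this):

# enable and start the open-iscsi initiator daemon on the node
systemctl enable --now iscsid
# confirm it is running
systemctl is-active iscsid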

@kmova (Member) commented Jun 16, 2020

Checking on this.

kmova self-assigned this on Jun 16, 2020

@stuartpb (Author)

Here are the YAML resources for the StorageClasses / StoragePoolClaims:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-bulk
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-bulk-pool"
      - name: ReplicaCount
        value: "1"
provisioner: openebs.io/provisioner-iscsi
---
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-bulk-pool
  namespace: openebs
  annotations:
    cas.openebs.io/config: |
      - name: PoolResourceRequests
        value: |-
            memory: 100Mi
      - name: PoolResourceLimits
        value: |-
            memory: 4Gi
spec:
  name: cstor-bulk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
    - blockdevice-9d50e42f15fe83269356445eea81b1e2
---
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-work-pool
  namespace: openebs
  annotations:
    cas.openebs.io/config: |
      - name: PoolResourceRequests
        value: |-
            memory: 100Mi
      - name: PoolResourceLimits
        value: |-
            memory: 4Gi
spec:
  name: cstor-work-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
    - blockdevice-50bfc93f4a03ff39b6219fed5965fe7b
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-work
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-work-pool"
      - name: ReplicaCount
        value: "1"
provisioner: openebs.io/provisioner-iscsi

And here's the blocking PVC in question:

apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      volume.beta.kubernetes.io/storage-provisioner: openebs.io/provisioner-iscsi
    creationTimestamp: "2020-06-15T22:13:22Z"
    finalizers:
    - kubernetes.io/pvc-protection
    labels:
      app: prometheus
      prometheus: prometheus-operator-prometheus
    managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:volume.beta.kubernetes.io/storage-provisioner: {}
          f:labels:
            .: {}
            f:app: {}
            f:prometheus: {}
        f:spec:
          f:accessModes: {}
          f:resources:
            f:requests:
              .: {}
              f:storage: {}
          f:storageClassName: {}
          f:volumeMode: {}
        f:status:
          f:phase: {}
      manager: kube-controller-manager
      operation: Update
      time: "2020-06-15T22:13:25Z"
    name: prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0
    namespace: prometheus
    resourceVersion: "787930"
    selfLink: /api/v1/namespaces/prometheus/persistentvolumeclaims/prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0
    uid: 46cf6742-d3ee-481c-ae06-1a1cf3890666
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 50Gi
    storageClassName: openebs-bulk
    volumeMode: Filesystem
  status:
    phase: Pending
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

@kmova (Member) commented Jun 16, 2020

There are two issues:

  • OpenEBS can't provision a volume when the PVC name is longer than 63 characters (see PVC with name longer than 63 chars fails to provision #2630); a quick length check follows this list.
  • That error then triggers a panic, due to a change introduced in 1.11.0 that tries to extract information from the created volume object even when volume creation has failed.
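
As a quick illustration of the first point (not from the original report), the PVC name in this issue is well over the limit:

# prints 88, i.e. 25 characters over the 63-character limit
echo -n "prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0" | wc -c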

@stuartpb (Author)

I tried deleting and re-creating the storage pools / classes, and now I'm seeing errors like this from the apiserver:

I0616 02:53:56.112121       1 handler.go:314] Provisioning pool 1/1 for storagepoolclaim cstor-work-pool
E0616 02:53:56.215663       1 handler.go:317] Pool provisioning failed for 1/1 for storagepoolclaim cstor-work-pool: failed to build cas pool for spc cstor-work-pool: aborting storagepool create operation as no node qualified: type {disk} devices are not available to provision pools in manual mode
I0616 02:53:56.312904       1 handler.go:314] Provisioning pool 1/1 for storagepoolclaim cstor-bulk-pool
E0616 02:53:56.474890       1 handler.go:317] Pool provisioning failed for 1/1 for storagepoolclaim cstor-bulk-pool: failed to build cas pool for spc cstor-bulk-pool: aborting storagepool create operation as no node qualified: type {disk} devices are not available to provision pools in manual mode
I0616 02:53:57.267762       1 spc_lease.go:124] Lease removed successfully on storagepoolclaim
I0616 02:53:57.286742       1 spc_lease.go:124] Lease removed successfully on storagepoolclaim

@kmova (Member) commented Jun 16, 2020

For the Prometheus chart, can you try setting the metadata.name so that the generated PVC name stays under 63 characters? Related issue: helm/charts#13170

@kmova (Member) commented Jun 16, 2020

Regarding the deletion and re-creation: are the block devices in the Unclaimed state (kubectl get bd -n openebs), or are there any CSPs that were not properly deleted (kubectl get csp -n openebs)?
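
Spelled out as commands, the two checks are:

# block devices and their claim state (they should be back to Unclaimed before re-creating pools)
kubectl get bd -n openebs
# any cStor pools left behind by the deleted SPCs
kubectl get csp -n openebs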

@stuartpb (Author) commented Jun 16, 2020

The block devices are in the "Released" state; how would I transition them to Unclaimed? Delete them and restart the ndm Pod?

@kmova (Member) commented Jun 16, 2020

They should transition from Released to Unclaimed automatically. A job is launched to clear any traces of the older cStor pool before marking them as Unclaimed. The ndm-operator logs should help show why they are still in the Released state.
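
A sketch of those checks (the deployment name is assumed from the pod listing earlier in this issue):

# see why the block devices are still in the Released state
kubectl logs -n openebs deploy/openebs-ndm-operator
# watch for the transition back to Unclaimed
kubectl get bd -n openebs -w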

@ranjithwingrider (Member)

In my case, I assigned the value new to fullnameOverride in values.yaml. After this change, my PVC looks like the following:

NAME                                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
prometheus-new-prometheus-db-prometheus-new-prometheus-0   Bound    pvc-8cfbb59d-683b-4d65-b539-33f427ebdbec   30Gi       RWO            openebs-sc-rep1   35m

My pods:

NAME                                            READY   STATUS    RESTARTS   AGE
alertmanager-new-alertmanager-0                 2/2     Running   0          23m
new-operator-85c7df4498-nzr4l                   2/2     Running   0          24m
prometheus-grafana-64dc457ccb-r58h2             2/2     Running   0          24m
prometheus-kube-state-metrics-5496457bd-kvjkr   1/1     Running   0          24m
prometheus-new-prometheus-0                     3/3     Running   1          23m
prometheus-prometheus-node-exporter-2zhjg       1/1     Running   0          24m
prometheus-prometheus-node-exporter-gmhn9       1/1     Running   0          24m
prometheus-prometheus-node-exporter-wbx68       1/1     Running   0          24m

@stuartpb (Author) commented Jun 16, 2020

I used a fullnameOverride of pop, and though the PVC is now able to bind, my pod still isn't working:

Name:           prometheus-pop-prometheus-0
Namespace:      prometheus
Priority:       0
Node:           studtop/192.168.0.23
Start Time:     Tue, 16 Jun 2020 01:22:40 -0700
Labels:         app=prometheus
                controller-revision-hash=prometheus-pop-prometheus-5dc49cfbf
                prometheus=pop-prometheus
                statefulset.kubernetes.io/pod-name=prometheus-pop-prometheus-0
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/prometheus-pop-prometheus
Containers:
  prometheus:
    Container ID:  
    Image:         quay.io/prometheus/prometheus:v2.18.1
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --web.console.templates=/etc/prometheus/consoles
      --web.console.libraries=/etc/prometheus/console_libraries
      --config.file=/etc/prometheus/config_out/prometheus.env.yaml
      --storage.tsdb.path=/prometheus
      --storage.tsdb.retention.time=10d
      --web.enable-lifecycle
      --storage.tsdb.no-lockfile
      --web.external-url=http://pop-prometheus.prometheus:9090
      --web.route-prefix=/
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:web/-/healthy delay=0s timeout=3s period=5s #success=1 #failure=6
    Readiness:      http-get http://:web/-/ready delay=0s timeout=3s period=5s #success=1 #failure=120
    Environment:    <none>
    Mounts:
      /etc/prometheus/certs from tls-assets (ro)
      /etc/prometheus/config_out from config-out (ro)
      /etc/prometheus/rules/prometheus-pop-prometheus-rulefiles-0 from prometheus-pop-prometheus-rulefiles-0 (rw)
      /prometheus from prometheus-pop-prometheus-db (rw,path="prometheus-db")
      /var/run/secrets/kubernetes.io/serviceaccount from pop-prometheus-token-vmwvc (ro)
  prometheus-config-reloader:
    Container ID:  
    Image:         quay.io/coreos/prometheus-config-reloader:v0.39.0
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/prometheus-config-reloader
    Args:
      --log-format=logfmt
      --reload-url=http://127.0.0.1:9090/-/reload
      --config-file=/etc/prometheus/config/prometheus.yaml.gz
      --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  25Mi
    Requests:
      cpu:     100m
      memory:  25Mi
    Environment:
      POD_NAME:  prometheus-pop-prometheus-0 (v1:metadata.name)
    Mounts:
      /etc/prometheus/config from config (rw)
      /etc/prometheus/config_out from config-out (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from pop-prometheus-token-vmwvc (ro)
  rules-configmap-reloader:
    Container ID:  
    Image:         quay.io/coreos/configmap-reload:v0.0.1
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Args:
      --webhook-url=http://127.0.0.1:9090/-/reload
      --volume-dir=/etc/prometheus/rules/prometheus-pop-prometheus-rulefiles-0
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  25Mi
    Requests:
      cpu:        100m
      memory:     25Mi
    Environment:  <none>
    Mounts:
      /etc/prometheus/rules/prometheus-pop-prometheus-rulefiles-0 from prometheus-pop-prometheus-rulefiles-0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from pop-prometheus-token-vmwvc (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  prometheus-pop-prometheus-db:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  prometheus-pop-prometheus-db-prometheus-pop-prometheus-0
    ReadOnly:   false
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-pop-prometheus
    Optional:    false
  tls-assets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-pop-prometheus-tls-assets
    Optional:    false
  config-out:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  prometheus-pop-prometheus-rulefiles-0:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-pop-prometheus-rulefiles-0
    Optional:  false
  pop-prometheus-token-vmwvc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  pop-prometheus-token-vmwvc
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Warning  FailedScheduling        <unknown>          default-scheduler        running "VolumeBinding" filter plugin for pod "prometheus-pop-prometheus-0": pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled               <unknown>          default-scheduler        Successfully assigned prometheus/prometheus-pop-prometheus-0 to studtop
  Normal   SuccessfulAttachVolume  13m                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-e3b0c221-14bf-43ab-af4d-ec06b67b9a5f"
  Warning  FailedMount             12m (x2 over 13m)  kubelet, studtop         MountVolume.WaitForAttach failed for volume "pvc-e3b0c221-14bf-43ab-af4d-ec06b67b9a5f" : failed to get any path for iscsi disk, last err seen:
iscsi: failed to sendtargets to portal 10.101.153.159:3260 output: iscsiadm: cannot make connection to 10.101.153.159: Connection refused
iscsiadm: cannot make connection to 10.101.153.159: Connection refused
iscsiadm: cannot make connection to 10.101.153.159: Connection refused
iscsiadm: cannot make connection to 10.101.153.159: Connection refused
iscsiadm: cannot make connection to 10.101.153.159: Connection refused
iscsiadm: cannot make connection to 10.101.153.159: Connection refused
iscsiadm: connection login retries (reopen_max) 5 exceeded
iscsiadm: Could not perform SendTargets discovery: iSCSI PDU timed out
, err exit status 11
  Warning  FailedMount  11m                 kubelet, studtop  Unable to attach or mount volumes: unmounted volumes=[prometheus-pop-prometheus-db], unattached volumes=[prometheus-pop-prometheus-db prometheus-pop-prometheus-rulefiles-0 pop-prometheus-token-vmwvc config config-out tls-assets]: timed out waiting for the condition
  Warning  FailedMount  6m49s               kubelet, studtop  Unable to attach or mount volumes: unmounted volumes=[prometheus-pop-prometheus-db], unattached volumes=[prometheus-pop-prometheus-rulefiles-0 pop-prometheus-token-vmwvc config config-out tls-assets prometheus-pop-prometheus-db]: timed out waiting for the condition
  Warning  FailedMount  4m32s               kubelet, studtop  Unable to attach or mount volumes: unmounted volumes=[prometheus-pop-prometheus-db], unattached volumes=[config config-out tls-assets prometheus-pop-prometheus-db prometheus-pop-prometheus-rulefiles-0 pop-prometheus-token-vmwvc]: timed out waiting for the condition
  Warning  FailedMount  2m18s               kubelet, studtop  Unable to attach or mount volumes: unmounted volumes=[prometheus-pop-prometheus-db], unattached volumes=[tls-assets prometheus-pop-prometheus-db prometheus-pop-prometheus-rulefiles-0 pop-prometheus-token-vmwvc config config-out]: timed out waiting for the condition
  Warning  FailedMount  11s (x12 over 12m)  kubelet, studtop  MountVolume.MountDevice failed for volume "pvc-e3b0c221-14bf-43ab-af4d-ec06b67b9a5f" : format of disk "/dev/disk/by-path/ip-10.101.153.159:3260-iscsi-iqn.2016-09.com.openebs.cstor:pvc-e3b0c221-14bf-43ab-af4d-ec06b67b9a5f-lun-0" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/iscsi/iface-default/10.101.153.159:3260-iqn.2016-09.com.openebs.cstor:pvc-e3b0c221-14bf-43ab-af4d-ec06b67b9a5f-lun-0") options:("defaults") errcode:(executable file not found in $PATH) output:()
  Warning  FailedMount  2s (x2 over 9m8s)   kubelet, studtop  Unable to attach or mount volumes: unmounted volumes=[prometheus-pop-prometheus-db], unattached volumes=[pop-prometheus-token-vmwvc config config-out tls-assets prometheus-pop-prometheus-db prometheus-pop-prometheus-rulefiles-0]: timed out waiting for the condition
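
Two host-side checks consistent with these errors (a sketch; the follow-up comment below confirms the missing formatter):

# "executable file not found in $PATH" while formatting usually means mkfs.ext4 (from e2fsprogs) is missing on the node
command -v mkfs.ext4 || echo "mkfs.ext4 not found on this node"
# the connection-refused errors suggest the cStor target for this PV was not serving yet; list the target pods
kubectl get pods -n openebs | grep -i target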

@stuartpb (Author) commented Jun 16, 2020

This looks a lot like #1688: I'll check whether Kubic is missing e2fsprogs.

UPDATE: yep, it is:

studtop:~ # zypper info e2fsprogs
Loading repository data...
Reading installed packages...


Information for package e2fsprogs:
----------------------------------
Repository     : openSUSE-Tumbleweed-Oss
Name           : e2fsprogs
Version        : 1.45.6-1.17
Arch           : x86_64
Vendor         : openSUSE
Installed Size : 4.1 MiB
Installed      : No
Status         : not installed
Source package : e2fsprogs-1.45.6-1.17.src
Summary        : Utilities for the Second Extended File System
Description    : 
    Utilities needed to create and maintain ext2 and ext3 file systems
    under Linux. Included in this package are: chattr, lsattr, mke2fs,
    mklost+found, tune2fs, e2fsck, resize2fs, and badblocks.
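
Installing the missing package on MicroOS/Kubic goes through transactional-update rather than plain zypper (a sketch; the new snapshot only takes effect after a reboot):

# install e2fsprogs into a new snapshot, then reboot into it
transactional-update pkg install e2fsprogs
reboot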

@stuartpb (Author)

Okay, I installed e2fsprogs and rebooted, and for a brief, beautiful moment, I think I actually did see the pod get scheduled.

Then at 09:30Z, almost on the dot, the kube API server died. Here are the kubelet logs from when that happened: I can't tell if the time was a coincidence, or if there's something in here that suggests what might have gone wrong.

After rebooting, the API server started falling into a crash loop again. Here are the logs for one recent run, containing a few panic invocations that might explain why it's getting overwhelmed (top shows the API server processes taking up a whole core).

stuartpb changed the title from "maya-apiserver hitting panic / runtime error when POSTing volume" to "Failures with persistent volume" on Jun 16, 2020
kmova added the Community (Community Reported Issue) label on Jun 16, 2020

@stuartpb (Author)

Well, I've decided I'm going to try bringing the cluster back up again from scratch and see if that somehow fixes it. If not, I'll do more digging and see if I can trace where the problem's coming from.

For now, I'll go ahead and consider the original issue addressed - thanks!

stuartpb changed the title from "Failures with persistent volume" back to "maya-apiserver hitting panic / runtime error when POSTing volume" on Jun 17, 2020

@ranjithwingrider (Member) commented May 7, 2021

The following workaround is working for me. My changes are below.

## Provide a name to substitute for the full names of resources
##
fullnameOverride: "app"

Added fullnameOverride set to "app".

In the Alertmanager PVC spec:

   ## Storage is the definition of how storage will be used by the Alertmanager instances.
    ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/storage.md
    ##
    storage:
      volumeClaimTemplate:
        metadata:
          name: alert
        spec:
          storageClassName: cstor-csi
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 40Gi
             #   selector: {}

In the above snippet, I added the following entries:

metadata:
     name: alert

In the Prometheus PVC spec:

    ## Prometheus StorageSpec for persistent data
    ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/storage.md
    ##
    storageSpec:
    ## Using PersistentVolumeClaim
      volumeClaimTemplate:
        metadata:
          name: prom
        spec:
          storageClassName: cstor-csi
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 40Gi

In the above snippet, I added the following entries:

metadata:
     name: prom

After deploying with the above modifications, both pods were provisioned successfully.

Output
Pods:

$ kubectl get pod -n monitoring

NAME                                             READY   STATUS    RESTARTS   AGE
alertmanager-app-alertmanager-0                  2/2     Running   0          5m30s
app-operator-7cf8fc6dc-k6wb6                     1/1     Running   0          5m40s
prometheus-app-prometheus-0                      2/2     Running   1          5m30s
prometheus-grafana-6549f869b5-7dvp4              2/2     Running   0          5m40s
prometheus-kube-state-metrics-685b975bb7-f9qt6   1/1     Running   0          5m40s
prometheus-prometheus-node-exporter-6ngps        1/1     Running   0          5m40s
prometheus-prometheus-node-exporter-fnfbt        1/1     Running   0          5m40s
prometheus-prometheus-node-exporter-mlvt6        1/1     Running   0          5m40s

PVC:

$ kubectl get pvc -n monitoring

NAME                                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
alert-alertmanager-app-alertmanager-0   Bound    pvc-d734b059-e80a-488f-b398-c66e0b3c208c   40Gi       RWO            cstor-csi      5m53s
prom-prometheus-app-prometheus-0        Bound    pvc-24bbb044-c080-4f66-a0f6-e51cded91286   40Gi       RWO            cstor-csi      5m53s

SVC:

$ kubectl get svc -n monitoring

NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                 ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP   15m
app-alertmanager                      ClusterIP   10.100.21.20    <none>        9093/TCP                     16m
app-operator                          ClusterIP   10.100.95.182   <none>        8080/TCP                     16m
app-prometheus                        ClusterIP   10.100.67.27    <none>        9090/TCP                     16m
prometheus-grafana                    ClusterIP   10.100.39.64    <none>        80/TCP                       16m
prometheus-kube-state-metrics         ClusterIP   10.100.72.188   <none>        8080/TCP                     16m
prometheus-operated                   ClusterIP   None            <none>        9090/TCP                     15m
prometheus-prometheus-node-exporter   ClusterIP   10.100.250.3    <none>        9100/TCP                     16m

The above entries help keep the generated Pod and PVC names within the character limit.

@willzhang

(Quotes the workaround from the previous comment.)

resolved, thanks.
