
[stable/prometheus-operator] PVC name must be no more than 63 characters #13170

Closed
sdelrio opened this issue Apr 20, 2019 · 11 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@sdelrio

sdelrio commented Apr 20, 2019

Is this a request for help?: Yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes: Helm v2.13.1, Kubernetes v1.13.5

Which chart: stable/prometheus-operator

What happened: Attempted to run helm install stable/prometheus-operator with prometheus.storageSpec set (the same deployment works without storage). The PVC stays Pending because the generated name is too long: Invalid value: "prometheus-monitoring-prometheus-oper-prometheus-db-prometheus-monitoring-prometheus-oper-prometheus-0": must be no more than 63 characters

What you expected to happen:

The PVC should be created and reach status Bound, for example:

$ kubectl -n monitoring get pvc
NAME                                                                                                     STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
demo-pvc-claim1                                                                                          Bound     pvc-ac8d835a-63be-11e9-8c17-f44d306aa2e4   20Gi       RWO            openebs-jiva-r2   11s

How to reproduce it (as minimally and precisely as possible):
Define storageSpec for the helm chart:

$ helm install stable/prometheus-operator --name=monitoring --namespace=monitoring --values=values.yaml
$ cat values.yaml |grep ^prometheus -A 11
prometheus:
  prometheusSpec:
    replicas: 1
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: openebs-jiva-r2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 20Gi
$ kubectl -n monitoring get pvc
NAME                                                                                                     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS      AGE
prometheus-monitoring-prometheus-oper-prometheus-db-prometheus-monitoring-prometheus-oper-prometheus-0   Pending                                      openebs-jiva-r2   25m
$ kubectl -n monitoring describe pvc prometheus-monitoring-prometheus-oper-prometheus-db-prometheus-monitoring-prometheus-oper-prometheus-0
Name:          prometheus-monitoring-prometheus-oper-prometheus-db-prometheus-monitoring-prometheus-oper-prometheus-0
Namespace:     monitoring
StorageClass:  openebs-jiva-r2
Status:        Pending
Volume:
Labels:        app=prometheus
               prometheus=monitoring-prometheus-oper-prometheus
Annotations:   volume.beta.kubernetes.io/storage-provisioner: openebs.io/provisioner-iscsi
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Events:
  Type       Reason                Age                  From                                                                                                    Message
  ----       ------                ----                 ----                                                                                                    -------
  Normal     Provisioning          9m59s (x7 over 25m)  openebs.io/provisioner-iscsi_openebs-provisioner-5c665fdbdd-tqlb5_056dc49d-6360-11e9-9d88-daec5dead96a  External provisioner is provisioning volume for claim "monitoring/prometheus-monitoring-prometheus-oper-prometheus-db-prometheus-monitoring-prometheus-oper-prometheus-0"
  Warning    ProvisioningFailed    9m59s (x7 over 25m)  openebs.io/provisioner-iscsi_openebs-provisioner-5c665fdbdd-tqlb5_056dc49d-6360-11e9-9d88-daec5dead96a  failed to provision volume with StorageClass "openebs-jiva-r2": Internal Server Error: failed to create volume 'pvc-ac5a9848-63ba-11e9-8c17-f44d306aa2e4': response: Service "pvc-ac5a9848-63ba-11e9-8c17-f44d306aa2e4-ctrl-svc" is invalid: metadata.labels: Invalid value: "prometheus-monitoring-prometheus-oper-prometheus-db-prometheus-monitoring-prometheus-oper-prometheus-0": must be no more than 63 characters
  Normal     ExternalProvisioning  2s (x104 over 25m)   persistentvolume-controller                                                                             waiting for a volume to be created, either by external provisioner "openebs.io/provisioner-iscsi" or manually created by system administrator
Mounted By:  prometheus-monitoring-prometheus-oper-prometheus-0

Anything else we need to know:

@sdelrio
Author

sdelrio commented Apr 21, 2019

Adding metadata.name fixes the long generated name. But at the moment, even if your Helm release name is only 2 letters, it will hit the 63-character limit if you don't set metadata.name.

Sample content in values.yaml:
prometheus:
  prometheusSpec:
    replicas: 1
    storageSpec:
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          storageClassName: openebs-jiva-r2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 20Gi

Will generate a shorter name:

$ kubectl get pvc -n monitoring
NAME                                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
data-prometheus-monitoring-prometheus-oper-prometheus-0   Bound    pvc-af038bf7-6432-11e9-9c39-f44d306aa2e4   20Gi       RWO            openebs-jiva-r2   2m50s
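
For reference, the long name appears to come from how StatefulSet PVC names are composed: <volumeClaimTemplate name>-<pod name>. With no metadata.name set, the operator seems to default the claim template name to prometheus-<prometheus name>-db, and the pod is prometheus-<prometheus name>-<ordinal>, so the release-derived name is embedded twice. The 63-character cap in the error is the Kubernetes label value limit, and the generated name is well over it:

$ echo -n "prometheus-monitoring-prometheus-oper-prometheus-db-prometheus-monitoring-prometheus-oper-prometheus-0" | wc -c
102

Setting metadata.name: data replaces the whole prometheus-<prometheus name>-db prefix with data, which is why the result above fits.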

@Starefossen
Contributor

This is ridiculous! I ended up with a persistent volume claim name like this: prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0. That is 6x prometheus!

@stale

stale bot commented May 23, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale label on May 23, 2019
@stale

stale bot commented Jun 6, 2019

This issue is being automatically closed due to inactivity.

@stale stale bot closed this as completed Jun 6, 2019
@rdxmaor

rdxmaor commented Oct 29, 2019

Does anyone have an idea how to solve this issue? I'm running helm 5.10.0 and prometheus-operator 0.29.0. When adding the metadata.name key, the operator fails.

@sdelrio
Author

sdelrio commented Oct 29, 2019

My workaround was to set a shorter name in metadata.name in values.yaml, as in my previous sample. For example, I named it "data":

prometheus:
  enabled: true
  prometheusSpec:
(...)
    storageSpec:
      volumeClaimTemplate:
        metadata:
          name: data
(..)

@rdxmaor

rdxmaor commented Oct 31, 2019

@sdelrio when I try your workaround, the operator doesn't deploy the Prometheus pod.
That's why I included the version I'm running. What version are you using?

@sdelrio
Author

sdelrio commented Oct 31, 2019

Hope this helps you @rdxmaor

$ helm ls monitoring
NAME            REVISION        UPDATED                         STATUS          CHART                           APP VERSION                                                                                NAMESPACE
monitoring      41              Fri Sep  6 19:01:15 2019        DEPLOYED        prometheus-operator-6.8.3       0.32.0                                                                                     monitoring

$ kubectl -n monitoring get deploy monitoring-prometheus-oper-operator -o yaml |grep  image
        - --config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1
        image: quay.io/coreos/prometheus-operator:v0.32.0
        imagePullPolicy: IfNotPresent
        image: squareup/ghostunnel:v1.4.1
        imagePullPolicy: IfNotPresent

@vsliouniaev
Collaborator

vsliouniaev commented Nov 18, 2019

The names for these objects arise from Helm naming practices for chart components combined with prometheus-operator naming conventions. Almost all charts in this repository provide a fullnameOverride for a chart release, which lets you control most of the name.
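
A minimal sketch of that in values.yaml, assuming the chart layout from the examples above (the override value "po" and the claim name "data" are just illustrations):

fullnameOverride: "po"

prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        metadata:
          name: data

Combined with the shorter volumeClaimTemplate metadata.name shown earlier, this keeps both halves of the generated PVC name short, so it should stay under the 63-character limit.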

@wolfsoft

wolfsoft commented Feb 2, 2020

Tried setting the metadata in values.yaml, still no luck: the volume claims are created by prometheus-operator with the same long names and the deployment fails (version 0.35.0):

      volumeClaimTemplate:
        metadata:
          name: data
        spec:

@ranjithwingrider

ranjithwingrider commented May 7, 2021

The above workaround is working fine for me. My changes are below.

## Provide a name to substitute for the full names of resources
##
fullnameOverride: "app"

Added fullnameOverride as app.

In the Alertmanager PVC spec:

    ## Storage is the definition of how storage will be used by the Alertmanager instances.
    ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/storage.md
    ##
    storage:
      volumeClaimTemplate:
        metadata:
          name: alert
        spec:
          storageClassName: cstor-csi
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 40Gi
             #   selector: {}

In the above snippet, I added these entries:

metadata:
  name: alert

In the Prometheus PVC spec:

    ## Prometheus StorageSpec for persistent data
    ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/storage.md
    ##
    storageSpec:
    ## Using PersistentVolumeClaim
      volumeClaimTemplate:
        metadata:
          name: prom
        spec:
          storageClassName: cstor-csi
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 40Gi

In the above snippet, I added these entries:

metadata:
  name: prom

After applying the above modifications, both pods were provisioned successfully.

Output
Pods:

$ kubectl get pod -n monitoring

NAME                                             READY   STATUS    RESTARTS   AGE
alertmanager-app-alertmanager-0                  2/2     Running   0          5m30s
app-operator-7cf8fc6dc-k6wb6                     1/1     Running   0          5m40s
prometheus-app-prometheus-0                      2/2     Running   1          5m30s
prometheus-grafana-6549f869b5-7dvp4              2/2     Running   0          5m40s
prometheus-kube-state-metrics-685b975bb7-f9qt6   1/1     Running   0          5m40s
prometheus-prometheus-node-exporter-6ngps        1/1     Running   0          5m40s
prometheus-prometheus-node-exporter-fnfbt        1/1     Running   0          5m40s
prometheus-prometheus-node-exporter-mlvt6        1/1     Running   0          5m40s

PVC:

$ kubectl get pvc -n monitoring

NAME                                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
alert-alertmanager-app-alertmanager-0   Bound    pvc-d734b059-e80a-488f-b398-c66e0b3c208c   40Gi       RWO            cstor-csi      5m53s
prom-prometheus-app-prometheus-0        Bound    pvc-24bbb044-c080-4f66-a0f6-e51cded91286   40Gi       RWO            cstor-csi      5m53s

SVC:

$ kubectl get svc -n monitoring

NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                 ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP   15m
app-alertmanager                      ClusterIP   10.100.21.20    <none>        9093/TCP                     16m
app-operator                          ClusterIP   10.100.95.182   <none>        8080/TCP                     16m
app-prometheus                        ClusterIP   10.100.67.27    <none>        9090/TCP                     16m
prometheus-grafana                    ClusterIP   10.100.39.64    <none>        80/TCP                       16m
prometheus-kube-state-metrics         ClusterIP   10.100.72.188   <none>        8080/TCP                     16m
prometheus-operated                   ClusterIP   None            <none>        9090/TCP                     15m
prometheus-prometheus-node-exporter   ClusterIP   10.100.250.3    <none>        9100/TCP                     16m

The above entries help keep Pod and PVC names under the character limit.
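
One way to double-check after redeploying (assuming kubectl and awk are available) is to print each PVC name together with its length:

$ kubectl get pvc -n monitoring -o name | awk -F/ '{ print length($2), $2 }'
37 alert-alertmanager-app-alertmanager-0
32 prom-prometheus-app-prometheus-0

Anything at or above 63 would hit the same label-value limit as in the original report.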
