
[kube-prometheus-stack] how to use persistent volumes instead of emptyDir #2816

Open
sacoco opened this issue Dec 15, 2022 · 24 comments


sacoco commented Dec 15, 2022

Hey fellows, I would like to use persistent volumes instead of the default emptyDir config. Does anybody know how to do that?
I would really appreciate an example; I'm getting confused with the PV creation and also the PVC.


Dragane commented Dec 18, 2022

Yeah I would like to know that as well please


Dragane commented Dec 18, 2022

So I found the solution for the Prometheus StatefulSet. You can either enable it in the values file under prometheus.prometheusSpec.storageSpec or provide an external values file. For instance, my config file looks like this:

prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: gp2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi

You can then reference this config file when installing or upgrading your Helm chart like this:
helm install -f prometheus-custom-values.yaml kube-prometheus-stack kube-prometheus-stack -n monitoring
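To confirm it actually took effect, something like this should show a bound claim for the Prometheus StatefulSet (assuming the release lives in the monitoring namespace):

kubectl get pvc -n monitoring
kubectl get statefulset -n monitoring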

Now I still have to figure out how to enable volume for the AlertManager.


klucsik commented Dec 20, 2022

Here is the config for the Alertmanager volume:

alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: longhorn-2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi


Dragane commented Dec 21, 2022

Thank you! I think this ticket can be marked as solved.


MerzMax commented Jan 20, 2023

Did this create a PVC for you? I can't find any in my cluster after applying the prometheusSpec.

@Mihai-CMM

Hi, I also don't see any PVC created:

    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: ceph-block
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 2Ti

and there is just one storageSpec: under the proper prometheusSpec:

thank you

@myrondev

Hi, when trying it this way I get an error: failed to provision volume with StorageClass: could not create volume in EC2: UnauthorizedOperation: You are not authorized to perform this operation. When I create a PVC directly, I don't get this error. Why?
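The denied call comes from the volume provisioner, so this usually points at the IAM role used by the EBS CSI driver (or the in-tree provisioner) missing EC2 volume permissions, rather than at the chart values. The exact failure shows up in the claim's events (namespace and claim name are placeholders):

kubectl describe pvc <pvc-name> -n monitoring
kubectl get events -n monitoring --sort-by=.lastTimestamp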


stale bot commented Apr 2, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

stale bot added the lifecycle/stale label Apr 2, 2023

davyrod commented Apr 6, 2023

Same here, no PVC created after adding the volumeClaimTemplate spec.

stale bot removed the lifecycle/stale label Apr 6, 2023

lverba commented May 5, 2023

Same from my side. I specify the following:

prometheusSpec:
  storageSpec:
    volumeClaimTemplate:
      spec:
        storageClassName: default
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

and there is no PVC created after that.

@kimsehwan96

Try this

prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: foo
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 30Gi

In my case, it works.

If you miss prometheus, the layer above prometheusSpec, the Helm chart's template will not create the PVC. (The full path is prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.)
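A quick way to check whether the values actually reach the chart is to render it locally and look for the claim template in the generated Prometheus resource (assuming the chart comes from the prometheus-community repo and the overrides live in my-values.yaml):

helm template kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring -f my-values.yaml | grep -A 10 volumeClaimTemplate

If nothing comes back, the storageSpec is being dropped (usually because of the nesting) and the operator never sees it.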

@Mihai-CMM

Hmm strange

I have it like this and it does not work. Do you think there could be conflicting statements in the other configurations?

prometheus:
  enabled: true
  ................................ [Other configurations from values.yaml]
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: ceph-block
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 2Ti
        selector:
          matchLabels:
            app: prometheus

@adamatti

Facing the same issue here (trying to use persistent volumes for prometheus / alertmanager)

@kirbymark

I had the same issue. Found these two issues #563 and #655 and am now good.

I'm using the kube-prometheus-stack-45.29.0 Helm chart, and below is the relevant part of my values.

alertmanager:
  alertmanagerSpec:
    storage: 
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          storageClassName: cstor-csi-disk
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi

Adding a metadata name under volumeClaimTemplate: was needed for me because of the name-too-long issue/bug:

      volumeClaimTemplate:
        metadata:
          name: data


stale bot commented Aug 10, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.


flokain commented Aug 21, 2023

I have a suspicion:

I saw that this solution worked only when installing the chart; when upgrading, it was ignored. I guess the Prometheus Operator cannot handle migrating from one storage (emptyDir is the default, I guess) to another and therefore ignores it, because otherwise the data would just be lost.

I do not know if there is a flag or similar to force this change, but that could be the solution.
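One workaround that gets suggested for operator-managed StatefulSets (not verified here; the object name and namespace are just the usual defaults) is to delete only the StatefulSet and let the operator recreate it from the updated spec:

kubectl delete statefulset prometheus-kube-prometheus-stack-prometheus -n monitoring --cascade=orphan
helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitoring -f my-values.yaml

With --cascade=orphan the pods keep running during the swap, but whatever was on the old emptyDir is still lost once they restart onto the new volume.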

stale bot removed the lifecycle/stale label Aug 21, 2023
@Rahulsharma0810

Facing the same issue.

@throrin19

Why not just add a parameter persistent: true for alertmanager/pushgateway and the Prometheus server to simplify all this?

@davemaul

Why not just add a parameter persistent: true for alertmanager/pushgateway and the Prometheus server to simplify all this?

Because storage is a complex topic and there is no one-size-fits-all solution (for example, storage classes and disk sizes differ).


mschaefer-gresham commented Dec 15, 2023

This definitely looks like a bug. I tried installing 55.4.1 and the prometheus PVC would not get created no matter what I tried. I started successively taking lower releases (jumping several at a time), and it finally worked when I tried 48.5.0. So the bug was introduced somewhere between the two versions.
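For anyone else trying to narrow down where it broke, the chart can be rendered at a pinned version without installing anything and checked for the claim template (repo and values file names are assumptions):

helm template kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --version 55.4.1 -f my-values.yaml | grep -c volumeClaimTemplate

If the count is non-zero, the values are making it into the rendered Prometheus resource, and the regression is more likely on the operator side.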


bbkz commented Jan 13, 2024

Thank you for the hint. I tried with version 55.7.1, but no PVC was created, whereas with version 48.5.0 it worked.


Rose94t commented Jan 29, 2024

Hello, I am also getting an error.

prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: gp2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi

alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: longhorn-2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi

The pods and the StatefulSet are in a pending state; no PV exists and the PVC is pending.
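When the claim just sits in Pending with no PV, it is also worth confirming that the storage classes named in the values (gp2 and longhorn-2 here) actually exist in the cluster and have a provisioner:

kubectl get storageclass
kubectl describe storageclass gp2 longhorn-2

The claim's events will say whether the class was not found or whether provisioning itself failed.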


Rose94t commented Jan 29, 2024

Try this

prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: foo
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 30Gi

In my case, it works.

If you miss prometheus, the layer above prometheusSpec, the Helm chart's template will not create the PVC. (prometheus.prometheusSpec.storageSpec.volumeClaimTemplate)

I tried your YAML, but the PVC stays pending; it didn't work.

@mschaefer-gresham

This is still an issue in the latest version, 56.16.0.
