prometheus-kube-stack unknown fields probenamespaceselector and probeSelector #250
I hit the same issue when I tried to upgrade my kube-prometheus-stack production deployment from …
@matofeder I also tried to create the CRDs manually according to the instructions, but I got the same error. Did you create them in a different way?
As I understood it, the problem is with the …
Can you try to delete the CRDs before retrying to install the chart?
As described in your shared link, importing the probes CRD is required and fixes the issue:
Just a small note: I migrated from the old prometheus-operator chart to the new prometheus-kube-stack chart and hit an issue where my new rules weren't loaded by the Prometheus Operator. To fix this, I had to recreate all Prometheus CRDs.
That fixes the observed issue both after migrating from the old chart to the new one and after upgrading to the new chart version.
@maxutlvl Thanks! It's working for me.
Closing as it's solved... |
I think this problem still exists. I had a similar issue when upgrading to chart version …
You don't need to delete the CRDs; you can upgrade them in place without data loss:
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.42.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.42.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.42.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.42.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.42.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.42.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.42.0/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml
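For reference, the seven commands above can also be generated with a short loop. This is only a sketch: it prints the commands for review rather than running them (drop the leading `echo` to actually apply), and it assumes the same v0.42.0 CRD manifests listed above.

```shell
# Print (not run) the kubectl apply commands for all prometheus-operator CRDs.
# Drop the leading `echo` to apply for real; v0.42.0 matches the commands above.
VERSION="v0.42.0"
BASE="https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/${VERSION}/example/prometheus-operator-crd"
for crd in alertmanagers podmonitors probes prometheuses prometheusrules servicemonitors thanosrulers; do
  echo kubectl apply -f "${BASE}/monitoring.coreos.com_${crd}.yaml"
done
```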
Thanks @Viperoo |
I see this issue is closed as fixed, but it really isn't. I moved to prometheus-kube-stack from the helm stable repo (prometheus-operator) and had the CRD issues described here. While upgrading a cluster from the old to the new version, the suggested solution (deleting the old CRDs) works fine. Unfortunately, the problem also exists on brand-new deployments. I have code which spins up an EKS cluster along with all the basic resources it needs. This code deploys this helm chart, and each time I create a brand-new EKS cluster, my deployment code fails to install the Prometheus Operator.
If I then delete the CRDs in this brand-new cluster that the helm chart has just failed to deploy to (a fresh deployment, not an upgrade) and rerun the code, it works like a charm and Prometheus runs. This is weird: the same helm chart is used each time, yet on the first run, where there are no CRDs and helm installs them, the installation fails. Delete the CRDs the code just deployed, run the exact same helm chart installation again, and it works.
I just faced the same issue as @szpuni. I tried to run the helm chart with:
And this error occurred:
After deleting all the CRDs:
Then, running …
I solved this problem by deleting the old CRDs, and then used …
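The delete-then-reinstall approach mentioned in this thread can be sketched as a loop. As a precaution, this prints the commands instead of running them (drop the leading `echo` to execute), and be aware that deleting a CRD also deletes every custom resource of that kind in the cluster, so export any rules or monitors you need first.

```shell
# Print the kubectl delete commands for the old prometheus-operator CRDs.
# CAUTION: deleting a CRD deletes all existing resources of that kind.
for crd in alertmanagers podmonitors probes prometheuses prometheusrules servicemonitors thanosrulers; do
  echo kubectl delete crd "${crd}.monitoring.coreos.com"
done
```

After the CRDs are gone, a plain `helm install` of the chart recreates them at the version the chart expects.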
10132: Upgrade GKE prometheus set up to prometheus-community/kube-prometheus-stack r=npepinpe a=npepinpe

## Description

This PR updates the `prometheus-values.yaml` we use to set up our monitoring stack on our GKE clusters. These are the latest values used, adapted for the new chart. At the same time, I've already migrated us from the old deprecated chart to the new chart (prometheus-community/kube-prometheus-stack), and upgraded from 9.x to 16.0.0. In order to migrate, I did the following (based on [this issue from our SREs](https://github.com/camunda-cloud/monitoring/issues/524)):

- [x] Modify the PV reclaim policy to `Retain` instead of `Delete`; this allows us to delete the old PVC but keep the persistent volume, retaining our data.
- [x] Pre-create the PVC that the new chart expects; it will then pick it up on creation and won't create a new one, and we keep the old PV/data intact.
- [x] Follow these unofficial [upgrade instructions](prometheus-community/helm-charts#250 (comment)); essentially we need to re-create the CRDs, as `helm upgrade` doesn't install CRDs, so we need to pick up the CRDs from the updated operator version.
- [x] Migrate from the old chart to the new chart using `helm upgrade metrics --debug --namespace default --dependency-update -f prometheus-operator-values.yml --version 10.0.0 prometheus-community/kube-prometheus-stack` (first run with a `--dry-run` to ensure the PVC and so on will be kept).
- [x] Once done, [follow the upgrade instructions](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#upgrading-chart) for each major version upgrade as you go along, using the command above but updating the version. This was done until version 16.0.0, which removes the last component using deprecated APIs (kube-state-metrics).

With that done, we could then upgrade the Kubernetes clusters to 1.23 without any issues. The next time we need to do all of this will be when upgrading to k8s 1.25, which removes further APIs. While it's possible to upgrade k8s first and then fix the Helm release, it's easier to first upgrade the charts to make sure nothing is using the deprecated APIs, and then upgrade k8s.

One last thing: we could upgrade to 17.x and remove our pinned version of Grafana to upgrade Grafana to 8.x (like we have in SaaS). To do that, just edit the values file, remove the pinned tag for Grafana, update the necessary CRDs as described in the chart readme (link above), and then run `helm upgrade metrics --debug --namespace default --dependency-update -f prometheus-operator-values.yml --version 17.0.0 prometheus-community/kube-prometheus-stack`.

## Related issues

closes #9074

Co-authored-by: Nicolas Pepin-Perreault <nicolas.pepin-perreault@camunda.com>
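The PVC pre-creation step from the checklist above can be sketched as a manifest. All names and sizes here (`metrics-prometheus`, `pv-0001`, `50Gi`) are hypothetical placeholders: the claim name must match what the new chart's StatefulSet expects, and `volumeName` must point at your retained PersistentVolume.

```yaml
# Hypothetical pre-created PVC bound to a retained PV; names and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: metrics-prometheus   # placeholder: must match the claim name the new chart generates
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""       # set to the PV's storageClassName; empty only if the PV has none
  volumeName: pv-0001        # placeholder: the retained PersistentVolume's name
  resources:
    requests:
      storage: 50Gi          # placeholder: should match the PV's capacity
```

A StatefulSet reuses a pre-existing PVC with the expected name instead of creating a new one, which is what keeps the old PV and its data intact.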
Describe the bug
Fresh installs of prometheus-kube-stack fail with:

Version of Helm and Kubernetes:
Helm Version:
Which chart: prometheus-kube-stack
Which version of the chart: 10.1.2
What happened:
Attempted to do a fresh helm install of the chart; however, it failed with this error:

What you expected to happen:
Expected the chart to be deployed with default values.
How to reproduce it (as minimally and precisely as possible):
helm install prom-kube -n <YOUR-NAMESPACE> prometheus-community/kube-prometheus-stack
Anything else we need to know:
The install works perfectly fine if you specify version 10.1.1, so this definitely seems to be specific to 10.1.2