Bump kube-prometheus-stack helm chart #3639
Conversation
Hello teddyandrieux, my role is to assist you with the merge of this pull request.
Status report is not available.
Integration data created
I have created the integration data for the additional destination branches.
The following branches will NOT be impacted:
You can set option
Waiting for approval
The following approvals are needed before I can proceed with the merge:
Peer approvals must include at least 1 approval from the following list:
Bump the kube-prometheus-stack helm chart to 23.2.0; various Prometheus stack images also get bumped.

```
rm -rf charts/kube-prometheus-stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm fetch -d charts --untar prometheus-community/kube-prometheus-stack
```

NOTE: This new helm chart deploys the Grafana datasource sidecar as a container, so that datasource ConfigMaps get automatically deployed and reloaded in the Grafana configuration (see the example ConfigMap after this description).

Some fields of the helm values file moved, and we no longer pin the Grafana image version in the helm values file. With this new kube-prometheus-stack helm chart version, some dashboard names are also updated.

Re-render the chart with the following command:

```
./charts/render.py prometheus-operator \
    charts/kube-prometheus-stack.yaml \
    charts/kube-prometheus-stack/ \
    --namespace metalk8s-monitoring \
    --service-config grafana \
        metalk8s-grafana-config \
        metalk8s/addons/prometheus-operator/config/grafana.yaml \
        metalk8s-monitoring \
    --service-config prometheus \
        metalk8s-prometheus-config \
        metalk8s/addons/prometheus-operator/config/prometheus.yaml \
        metalk8s-monitoring \
    --service-config alertmanager \
        metalk8s-alertmanager-config \
        metalk8s/addons/prometheus-operator/config/alertmanager.yaml \
        metalk8s-monitoring \
    --service-config dex \
        metalk8s-dex-config \
        metalk8s/addons/dex/config/dex.yaml.j2 metalk8s-auth \
    --drop-prometheus-rules charts/drop-prometheus-rules.yaml \
    --patch 'PrometheusRule,metalk8s-monitoring,prometheus-operator-kubernetes-system-kubelet,spec:groups:0:rules:1:for,"5m"' \
    > salt/metalk8s/addons/prometheus-operator/deployed/chart.sls
```

Extract all the Prometheus rules again using:

```
./tools/rule_extractor/rule_extractor.py \
    -i <control-plane-ip> -p 8443 -t rules
```

NOTE: We also no longer care about rule order in the test.
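To illustrate the sidecar behavior described above, here is a minimal sketch of a datasource ConfigMap the sidecar would pick up, assuming the grafana chart's default watch label `grafana_datasource`; the ConfigMap name and the datasource entry are hypothetical, and the actual MetalK8s datasources are rendered by the chart itself:

```
apiVersion: v1
kind: ConfigMap
metadata:
  # Hypothetical name, for illustration only
  name: example-grafana-datasource
  namespace: metalk8s-monitoring
  labels:
    # Default label watched by the grafana chart's datasource sidecar
    # (assumption: the rendered values keep the upstream chart default)
    grafana_datasource: "1"
data:
  example-datasource.yaml: |-
    apiVersion: 1
    datasources:
      # Hypothetical datasource entry; the URL is illustrative
      - name: Example Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus-operator-prometheus:9090
```

Any ConfigMap carrying this label in the watched namespace is loaded into Grafana by the sidecar without a pod restart, which is what the new test exercises.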
Force-pushed from 01563e4 to a0ff639
History mismatch
Merge commit fbe05bcc1b82f1c865b3922114819a8bebe66d80 on the integration branch does not match the history of this pull request. It is likely due to a rebase of the branch. Please use the `/reset` command to have the integration branches rebuilt.
/reset
Reset complete
I have successfully deleted this pull request's integration branches.
Integration data created
I have created the integration data for the additional destination branches.
The following branches will NOT be impacted:
You can set option
Waiting for approval
The following approvals are needed before I can proceed with the merge:
Peer approvals must include at least 1 approval from the following list:
/approve
Waiting for approval
The following approvals are needed before I can proceed with the merge:
Peer approvals must include at least 1 approval from the following list:
The following options are set: approve
Code changes look OK, CI is green, so let's merge and I'll run a manual test before we release 2.11.0-alpha.2
(cc @JBWatenbergScality).
In the queue
The changeset has received all authorizations and has been added to the queue. The changeset will be merged in:
The following branches will NOT be impacted:
There is no action required on your side. You will be notified here once the changeset has been merged.
IMPORTANT: Please do not attempt to modify this pull request.
If you need this pull request to be removed from the queue, please contact an administrator.
The following options are set: approve
I have successfully merged the changeset of this pull request.
The following branches have NOT changed:
Please check the status of the associated issue: None.
Goodbye teddyandrieux.
Component: 'monitoring'
Context:
In order to have proper Grafana datasource reload, we need to bump the Grafana helm chart.
See: grafana/helm-charts#887
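For reference, a minimal sketch of the values involved, assuming the upstream grafana chart's sidecar keys as exposed under the kube-prometheus-stack `grafana` section (the actual rendered values live in charts/kube-prometheus-stack.yaml):

```
# Assumption: upstream grafana chart sidecar keys, shown for illustration
grafana:
  sidecar:
    datasources:
      enabled: true              # run the datasource sidecar container
      label: grafana_datasource  # watch ConfigMaps carrying this label
```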
Summary:
Bump the kube-prometheus-stack chart version to 23.2.0.
The following images have also been bumped accordingly:
Also add a test to check that Grafana datasources get properly added.