Remove prometheus CRDs #7499
Conversation
I removed my fork by accident; this replaces #7480.
Signed-off-by: Jan-Otto Kröpke <jok@cloudeteer.de>
Signed-off-by: Jan-Otto Kröpke <mail@jkroepke.de>
Force-pushed from 3c13d59 to 7fa2f88
./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+ ingester 0%
+ distributor 0%
+ querier 0%
+ querier/queryrange 0%
+ iter 0%
+ storage 0%
+ chunkenc 0%
+ logql 0%
+ loki 0%
Thanks for the input. I'll reject it for now so we can discuss our options. Ideally we'd have an out-of-the-box solution. The CRDs are required by the Agent Operator. Would it help if you could disable the CRDs?
There is no way to disable single CRDs. I could run … On top of that, there are issues with having the loki chart as part of an umbrella chart, for example together with kube-prometheus-stack. In the end, I don't get the point here. What is the point of creating ServiceMonitors without having …
I for one am OK with this change. Prometheus Operator is a prerequisite, since it is currently the only option for PrometheusRule resources, and it should install the ServiceMonitor CRD. Furthermore, the Grafana Agent will also install the ServiceMonitor CRD. I am opinionated about having self-monitoring enabled by default, but this PR achieves that with the API capabilities check, so I'm good with it. I'm of course open to discussion, though.
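The API capabilities check mentioned here can be sketched roughly like this (the template path, names, and port are illustrative, not necessarily the chart's actual ones): the ServiceMonitor is only rendered when the cluster actually serves the monitoring.coreos.com/v1 API, i.e. when the Prometheus Operator CRDs are already installed.

```yaml
# templates/servicemonitor.yaml (illustrative sketch)
# Skip rendering entirely if the Prometheus Operator CRDs are absent,
# so the chart installs cleanly on clusters without the operator.
{{- if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "loki.fullname" . }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "loki.name" . }}
  endpoints:
    - port: http-metrics
{{- end }}
```

Note that `helm template` without a live cluster only sees core API versions, so such a guard evaluates to false there unless `--api-versions monitoring.coreos.com/v1` is passed.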
Sorry, I misinterpreted this change. I thought you had removed the kind: ServiceMonitor manifests when removing the kind: CustomResourceDefinition resources.
@jeschkies ahh, yes, we definitely still need our …
No, I only want to remove the CRDs, since they should be owned by the kube-prometheus-stack helm chart. The kind: ServiceMonitor manifests are totally fine here. The grafana-agent-operator helm chart has the same problem: it also contains CRDs from kube-prometheus-stack. Only kube-prometheus-stack should hold and deploy the CRDs, because its deployed controller (prometheus-operator) is what watches such CRs.
@jkroepke I think the Grafana Agent chart is a slightly different problem, because the Grafana Agent can act on ServiceMonitor resources independently of a Prometheus Operator. So, in a deployment where you are only using the Grafana Agent, that chart needs to include those CRDs.
In such scenarios, consider an additional subchart that holds the Prometheus Operator CRDs. That way, the extra CRDs can be toggled via the subchart's condition. Unlike Helm, ArgoCD and other GitOps tools fully manage the CRD manifests; if kube-prometheus-stack and grafana-operator are installed in the same cluster using ArgoCD, the CRDs are always out of sync on one application's side. With the additional subchart and condition, the ServiceMonitor CRD can be toggled.
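The subchart-with-condition idea above could be sketched like this (subchart name, version, and values key are hypothetical):

```yaml
# Chart.yaml of the parent chart (sketch): CRDs live in a dedicated
# subchart that can be switched off when kube-prometheus-stack (or a
# GitOps tool such as ArgoCD) already manages those CRDs.
apiVersion: v2
name: grafana-agent-operator   # parent chart
version: 0.1.0
dependencies:
  - name: crds                 # hypothetical subchart containing only the CRDs
    version: 0.1.0
    condition: crds.install    # illustrative values key
```

A consumer would then disable the bundled CRDs with something like `--set crds.install=false`.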
What this PR does / why we need it:
Remove Prometheus CRDs from loki chart.
Why?
CRDs are not fully managed by Helm. If the loki chart is installed before the kube-prometheus-stack chart, the ServiceMonitor and PrometheusRule CRDs are not kept in sync and remain outdated.
In umbrella chart scenarios, where kube-prometheus-stack and loki are bundled into one Helm chart, the versions of the installed CRDs may be unpredictable (no one knows whether the CRDs are picked from loki or kube-prometheus-stack first).
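The umbrella-chart scenario described above can be sketched as follows (chart name and versions are placeholders): both dependencies ship copies of the same CRDs, and Helm installs whichever copy it happens to process first.

```yaml
# Chart.yaml of a hypothetical umbrella chart bundling both charts.
# Both loki and kube-prometheus-stack carry Prometheus Operator CRDs,
# so which CRD version lands in the cluster is not deterministic.
apiVersion: v2
name: observability-stack      # hypothetical umbrella chart
version: 0.1.0
dependencies:
  - name: loki
    version: x.y.z             # placeholder
    repository: https://grafana.github.io/helm-charts
  - name: kube-prometheus-stack
    version: x.y.z             # placeholder
    repository: https://prometheus-community.github.io/helm-charts
```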
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Checklist
- Reviewed the CONTRIBUTING.md guide
- CHANGELOG.md updated
- docs/sources/upgrading/_index.md updated