unmanage labels for exporter ServiceMonitor #1099

Conversation

umangachapagain (Contributor)

Create ServiceMonitor with default labels but do not reconcile
on label updates. This allows users to add custom labels to
ServiceMonitor which can be used by custom Prometheus instances
for monitoring.

Signed-off-by: Umanga Chapagain chapagainumanga@gmail.com
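
For illustration, a minimal Go sketch of the kind of ServiceMonitor this creates; the resource name, label keys, and port below are assumptions, not the exact object ocs-operator produces (and the prometheus-operator import path varies by version):

```go
package storagecluster // package name assumed

import (
	monitoringv1 "github.com/coreos/prometheus-operator/pkg/apis/monitoring/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// defaultExporterServiceMonitor returns the ServiceMonitor as the operator would first
// create it: default labels are set here, but labels added later by users are
// intentionally not reconciled away.
func defaultExporterServiceMonitor(namespace string) *monitoringv1.ServiceMonitor {
	return &monitoringv1.ServiceMonitor{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "ocs-metrics-exporter", // illustrative name
			Namespace: namespace,
			Labels: map[string]string{
				"app": "ocs-metrics-exporter", // default label; users may add their own
			},
		},
		Spec: monitoringv1.ServiceMonitorSpec{
			Selector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "ocs-metrics-exporter"},
			},
			Endpoints: []monitoringv1.Endpoint{
				{Port: "metrics"}, // illustrative port name
			},
		},
	}
}
```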

@umangachapagain umangachapagain requested review from jarrpa and nb-ohad and removed request for egafford and obnoxxx March 1, 2021 08:33
@umangachapagain (Contributor, Author)

/retest

@jarrpa (Member) commented Mar 2, 2021

Ignore the previous comment I deleted. Given that the monitoring resources are fairly static, couldn't we just achieve this with a reconcile strategy of init?
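
(For context, an "init" strategy roughly means create-if-missing and otherwise hands-off. A minimal sketch, assuming a plain controller-runtime client; this is not the ocs-operator implementation:)

```go
package storagecluster // package name assumed

import (
	"context"

	monitoringv1 "github.com/coreos/prometheus-operator/pkg/apis/monitoring/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// reconcileInit sketches an "init" strategy: the object is (re)created when absent and
// otherwise never touched, so user edits (including labels) survive, but so does any
// drift in the spec.
func reconcileInit(ctx context.Context, c client.Client, desired *monitoringv1.ServiceMonitor) error {
	existing := &monitoringv1.ServiceMonitor{}
	key := types.NamespacedName{Name: desired.Name, Namespace: desired.Namespace}
	if err := c.Get(ctx, key, existing); err != nil {
		if apierrors.IsNotFound(err) {
			return c.Create(ctx, desired) // recreate if the resource was deleted
		}
		return err
	}
	return nil // already exists: leave it as-is
}
```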

@nb-ohad (Contributor) commented Mar 2, 2021

@jarrpa
It seems we can use init. But will it protect us against unintentional deletion of the resource?

In any case, this PR is just a quick workaround for the current limitation.
The proper way, for a future PR, would be to instruct ocs-operator to add a specific label to all resource monitors; that way, the agent would not have to know about all the specific resources that need to be labeled.
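
A rough sketch of that label-based direction, with an entirely hypothetical label key (this is future work, not part of this PR):

```go
package storagecluster // package name assumed

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// monitoringLabelKey is a hypothetical well-known label that the operator would stamp
// on every monitoring resource it generates (ServiceMonitors, PrometheusRules, ...).
const monitoringLabelKey = "ocs.openshift.io/monitoring"

// addMonitoringLabel adds the shared label to any generated object, so a custom
// Prometheus can select all operator-generated monitors with a single label selector
// instead of knowing each resource by name.
func addMonitoringLabel(obj metav1.Object) {
	labels := obj.GetLabels()
	if labels == nil {
		labels = map[string]string{}
	}
	labels[monitoringLabelKey] = "true"
	obj.SetLabels(labels)
}
```

A custom Prometheus would then pick these up via a serviceMonitorSelector matching that same label.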

@umangachapagain (Contributor, Author)

Ignore the previous comment I deleted. Given that the monitoring resources are fairly static, couldn't we just achieve this with a reconcile strategy of init?

We could. But we don't want to init and forget; we want to reconcile on spec changes only.
Also, this is in line with how we update PrometheusRule elsewhere in the code, so it keeps the implementation consistent.
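
A minimal sketch of that behavior, assuming controller-runtime's controllerutil.CreateOrUpdate (the actual ocs-operator helper may differ): labels are set only when the object is first created, while the spec is enforced on every reconcile.

```go
package storagecluster // package name assumed

import (
	"context"

	monitoringv1 "github.com/coreos/prometheus-operator/pkg/apis/monitoring/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// ensureServiceMonitor creates the ServiceMonitor with its default labels, but on later
// reconciles only the Spec is rewritten, so labels added by users are preserved.
func ensureServiceMonitor(ctx context.Context, c client.Client, desired *monitoringv1.ServiceMonitor) error {
	current := &monitoringv1.ServiceMonitor{
		ObjectMeta: metav1.ObjectMeta{Name: desired.Name, Namespace: desired.Namespace},
	}
	_, err := controllerutil.CreateOrUpdate(ctx, c, current, func() error {
		if current.CreationTimestamp.IsZero() {
			current.Labels = desired.Labels // default labels only at creation time
		}
		current.Spec = desired.Spec // spec is always reconciled
		return nil
	})
	return err
}
```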

@jarrpa (Member) commented Mar 2, 2021

@jarrpa
It seems we can use init. But will it protect us against unintentional deletion of the resource?

Yes, the init reconcile strategy will recreate the resource. Of course, the label changes would be lost, but that is true for this PR's current solution as well.

Ignore the previous comment I deleted. Given that the monitoring resources are fairly static, couldn't we just achieve this with a reconcile strategy of init?

We could. But we don't want to init and forget; we want to reconcile on spec changes only.
Also, this is in line with how we update PrometheusRule elsewhere in the code, so it keeps the implementation consistent.

Sure, and that should also be changed. We should be adding fields to the StorageCluster API to properly set and maintain any customization to the resources.

Since this is a workaround that should be fixed properly at a future date, please insert a TODO comment and open a Jira issue to track that future change.
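
Purely as an illustration of that StorageCluster API direction (none of these fields exist; the names are invented):

```go
package v1 // API package name assumed

// MonitoringSpec is a hypothetical addition to the StorageCluster spec that would let
// users declare labels for operator-generated monitoring resources, so the operator
// could reconcile them like any other spec field instead of ignoring label drift.
type MonitoringSpec struct {
	// Labels to apply to generated ServiceMonitors, PrometheusRules, and similar resources.
	Labels map[string]string `json:"labels,omitempty"`
}
```

The StorageCluster reconciler would then merge these labels into each monitoring resource it manages.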

@nb-ohad (Contributor) commented Mar 2, 2021

@jarrpa There is already a Jira ticket for that, as a story under the "ocs to ocs" epic.

@jarrpa (Member) commented Mar 2, 2021

Cool, so we just need the comment in the code.

@umangachapagain (Contributor, Author)

Cool, so we just need the comment in the code.

Do we need to add a TODO for the RFE?
This PR is not a workaround; it is how we already handle the rest of the monitoring resources.

@jarrpa (Member) left a comment

Okay, after some offline discussion, it seems there is precedent for only reconciling a managed resource's spec. This seems bad, especially if the labels and/or annotations control how the operator interacts with the resource, or even the behavior of the resource itself. But given the precedent, I won't be too stubborn about this; we can change it later if anyone actually complains.

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Mar 5, 2021
@jarrpa (Member) commented Mar 5, 2021

/cherrypick release-4.7

@openshift-cherrypick-robot

@jarrpa: once the present PR merges, I will cherry-pick it on top of release-4.7 in a new PR and assign it to you.

In response to this:

/cherrypick release-4.7


@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jarrpa, nb-ohad

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 5, 2021
@openshift-merge-robot openshift-merge-robot merged commit 2588ac8 into red-hat-storage:master Mar 5, 2021
@openshift-cherrypick-robot

@jarrpa: new pull request created: #1106

In response to this:

/cherrypick release-4.7


@umangachapagain (Contributor, Author)

/cherrypick release-4.6

@openshift-cherrypick-robot

@umangachapagain: #1099 failed to apply on top of branch "release-4.6":

Applying: unmanage labels for exporter ServiceMonitor
Using index info to reconstruct a base tree...
A	controllers/storagecluster/exporter.go
Falling back to patching base and 3-way merge...
Auto-merging pkg/controller/storagecluster/exporter.go
CONFLICT (content): Merge conflict in pkg/controller/storagecluster/exporter.go
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 unmanage labels for exporter ServiceMonitor
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

In response to this:

/cherrypick release-4.6

