
Bug 2005843: Label ODF multicluster operator pod based on deployment name #43

Closed

Conversation

GowthamShanmugam
Contributor

Keep the labels of one operator's pods unique from those of other operators' pods, so that label-based fetching of pods is more deterministic.

BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2005843

Signed-off-by: Gowtham Shanmugasundaram <gshanmug@redhat.com>

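For context, the ambiguity this change removes can be seen with a label-selector query (a sketch only; control-plane=controller-manager is the generic label that, as noted later in this thread, the ODF operator also uses):

$ # The generic label may match controller-manager pods from more than one operator in the namespace
$ oc get pods -n openshift-operators -l control-plane=controller-manager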
@openshift-ci
Contributor

openshift-ci bot commented Sep 21, 2021

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: GowthamShanmugam
To complete the pull request process, please assign agarwal-mudit after the PR has been reviewed.
You can assign the PR to them by writing /assign @agarwal-mudit in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@GowthamShanmugam GowthamShanmugam changed the title Label ODF multicluster operator pod based on deployment name Bug 2005843: Label ODF multicluster operator pod based on deployment name Sep 21, 2021
@openshift-ci
Contributor

openshift-ci bot commented Sep 21, 2021

@GowthamShanmugam: This pull request references Bugzilla bug 2005843, which is valid.

No validations were run on this bug

No GitHub users were found matching the public email listed for the QA contact in Bugzilla (ratamir@redhat.com), skipping review request.

In response to this:

Bug 2005843: Label ODF multicluster operator pod based on deployment name

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@GowthamShanmugam
Contributor Author

GowthamShanmugam commented Sep 21, 2021

$ oc get pods -n openshift-operators --show-labels
NAME                                        READY     STATUS    RESTARTS      AGE       LABELS
odfmo-controller-manager-84fbbc5f88-7gn2x   1/1       Running   1 (26s ago)   10m       control-plane=odfmo-controller-manager,pod-template-hash=84fbbc5f88

$ oc get deployment -n openshift-operators 
NAME                       READY     UP-TO-DATE   AVAILABLE   AGE
odfmo-controller-manager   1/1       1            1           12m
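With the deployment-specific label shown above, a label-based fetch is unambiguous (sketch; the label value is taken from the --show-labels output):

$ # Only the odfmo-controller-manager pod carries this label
$ oc get pods -n openshift-operators -l control-plane=odfmo-controller-manager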

@GowthamShanmugam
Contributor Author

Tested the token exchange flow with this change, and it was successful.

@GowthamShanmugam
Contributor Author

How about the files below?
https://github.com/red-hat-storage/odf-multicluster-orchestrator/blob/main/config/prometheus/monitor.yaml
https://github.com/red-hat-storage/odf-multicluster-orchestrator/blob/main/config/rbac/auth_proxy_service.yaml

Do we need to change the labels here as well?

@sp98 @umangachapagain I can't see this ServiceMonitor and Service in ACM under the openshift-operators namespace or the openshift-storage namespace; do I need to change them here?

@sp98
Contributor

sp98 commented Sep 22, 2021

How about the files below?
https://github.com/red-hat-storage/odf-multicluster-orchestrator/blob/main/config/prometheus/monitor.yaml
https://github.com/red-hat-storage/odf-multicluster-orchestrator/blob/main/config/rbac/auth_proxy_service.yaml
Do we need to change the labels here as well?

@sp98 @umangachapagain I can't see this ServiceMonitor and Service in ACM under the openshift-operators namespace or the openshift-storage namespace; do I need to change them here?

I'm not sure how kustomize works, but I guess if we need to get metrics from the operator in the future, then we need to uncomment this line. And that may not work, because the service and serviceaccount labels don't match the operator pod label.

Also, I don't see ocs-operator doing it either. It also has different labels in the servicemonitor/service and the operator pod.

This is based on my limited knowledge of how ART (or other tools) deploy the manifests, so I could be wrong here.
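One way to check for such a mismatch (a sketch; <metrics-service> is a placeholder, not a name taken from the manifests) is to compare the Service selector with the pod labels:

$ # <metrics-service> stands in for the Service defined in config/rbac/auth_proxy_service.yaml
$ oc get service <metrics-service> -n openshift-operators -o jsonpath='{.spec.selector}'
$ # Compare the selector against the labels on the operator pod
$ oc get pods -n openshift-operators --show-labels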

@GowthamShanmugam
Contributor Author

How about the files below?
https://github.com/red-hat-storage/odf-multicluster-orchestrator/blob/main/config/prometheus/monitor.yaml
https://github.com/red-hat-storage/odf-multicluster-orchestrator/blob/main/config/rbac/auth_proxy_service.yaml
Do we need to change the labels here as well?

@sp98 @umangachapagain I can't see this ServiceMonitor and Service in ACM under the openshift-operators namespace or the openshift-storage namespace; do I need to change them here?

I'm not sure how kustomize works, but I guess if we need to get metrics from the operator in the future, then we need to uncomment this line. And that may not work, because the service and serviceaccount labels don't match the operator pod label.

Also, I don't see ocs-operator doing it either. It also has different labels in the servicemonitor/service and the operator pod.

This is based on my limited knowledge of how ART (or other tools) deploy the manifests, so I could be wrong here.

I can see the ODF operator making the same mistake: it creates a pod / ServiceMonitor / Service with the label control-plane: controller-manager, and if anyone else uses the same label, it's a problem.

Even though Prometheus is not enabled here, I am modifying the label to avoid conflicts in the future.
