Bug 1897830: Fix cluster creation when using localvolume #7552
Conversation
- use the nodeSelector field to get the associated nodes for a PV
- drops the usage of labels

Signed-off-by: Afreen Rahman <afrahman@redhat.com>
@afreen23: This pull request references Bugzilla bug 1897830, which is invalid:
/bugzilla refresh
@afreen23: This pull request references Bugzilla bug 1897830, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug
/test analyze
Just wanted to understand: why are we doing this in the first place?
res.add(nodeName);
}
const matchExpressions: MatchExpression[] =
  pv?.spec?.nodeAffinity?.required?.nodeSelectorTerms?.[0]?.matchExpressions;
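
For context, here is a minimal end-to-end sketch of the nodeAffinity-based approach, assuming HOSTNAME_LABEL_KEY is the well-known kubernetes.io/hostname key and using simplified stand-ins for the console's own types; it is illustrative, not the exact code in this PR:

// Illustrative sketch only; MatchExpression and K8sResourceKind are
// simplified stand-ins for the console's own types.
type MatchExpression = { key: string; operator: string; values?: string[] };
type K8sResourceKind = {
  metadata?: { name?: string; labels?: { [key: string]: string } };
  spec?: {
    nodeAffinity?: {
      required?: {
        nodeSelectorTerms?: { matchExpressions?: MatchExpression[] }[];
      };
    };
  };
};

// Assumed value; LSO pins local PVs to their node via this well-known key.
const HOSTNAME_LABEL_KEY = 'kubernetes.io/hostname';

// Collect the unique node names a set of LSO-provisioned PVs is pinned to,
// reading the mandatory spec.nodeAffinity field instead of optional labels.
export const getAssociatedNodes = (pvs: K8sResourceKind[]): string[] => {
  const res = new Set<string>();
  pvs.forEach((pv) => {
    const matchExpressions: MatchExpression[] =
      pv?.spec?.nodeAffinity?.required?.nodeSelectorTerms?.[0]?.matchExpressions ?? [];
    matchExpressions.forEach((expr) => {
      if (expr.key === HOSTNAME_LABEL_KEY) {
        (expr.values ?? []).forEach((nodeName) => res.add(nodeName));
      }
    });
  });
  return Array.from(res);
};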
What if there are multiple node selectors? I think that was the reason we relied on the PV label.
For LSO PVs, there will always be a hostname-based selector. I confirmed that with the LSO team.
Relying on node affinity is more reliable than the label selector. The node affinity field is mandatory for a PV and will always be present, unlike labels, which are added only as a convenience.
That's why we had issues supporting OCS 4.5 on OCP 4.6: users trying to create a storage cluster via the LocalVolume CR were blocked, since the PVs created via LocalVolume are not labeled by hostname.
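
To make the failure mode concrete, a hypothetical PV created by a LocalVolume CR might look like the object below (names and values are made up): nodeAffinity is populated, but there is no kubernetes.io/hostname label for the old lookup to find.

// Hypothetical example object; the field values are illustrative only.
const localVolumePV = {
  metadata: {
    name: 'local-pv-abc123',
    labels: {}, // no kubernetes.io/hostname entry, so a label lookup yields undefined
  },
  spec: {
    nodeAffinity: {
      required: {
        nodeSelectorTerms: [
          {
            matchExpressions: [
              { key: 'kubernetes.io/hostname', operator: 'In', values: ['worker-0'] },
            ],
          },
        ],
      },
    },
  },
};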
If the PVs are not labeled, shouldn't that be fixed in the LSO operator? It should be consistent and set the label whenever a PV is created.
@@ -48,10 +52,13 @@ export const getTotalDeviceCapacity = (list: Discoveries[]): number =>

export const getAssociatedNodes = (pvs: K8sResourceKind[]): string[] => {
  const nodes = pvs.reduce((res, pv) => {
    const nodeName = pv?.metadata?.labels?.[HOSTNAME_LABEL_KEY];
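
For contrast, the pre-change, label-based lookup presumably completed roughly as follows (a reconstruction around the diff fragment above, reusing the simplified types and HOSTNAME_LABEL_KEY from the earlier sketch; not the verbatim original):

// Reconstruction for illustration; the real code may differ in detail.
// The lookup depends on an optional label, so PVs created by a LocalVolume
// CR (which carry no hostname label) are silently skipped.
export const getAssociatedNodes = (pvs: K8sResourceKind[]): string[] => {
  const nodes = pvs.reduce((res: Set<string>, pv) => {
    const nodeName = pv?.metadata?.labels?.[HOSTNAME_LABEL_KEY];
    if (nodeName) {
      res.add(nodeName);
    }
    return res;
  }, new Set<string>());
  return Array.from(nodes);
};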
Can you tell in which case HOSTNAME_LABEL_KEY will not be present, if the PVs are created from a LocalVolumeSet?
There is no such case for LocalVolumeSet. The bug fix is for LocalVolume, and it is also an improvement over the current implementation, where we identify nodes via labels.
See https://github.com/openshift/console/pull/7552/files#r543672246
@afreen23: This pull request references Bugzilla bug 1897830, which is valid. 3 validation(s) were run on this bug
/lgtm
/test e2e-gcp-console
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: afreen23, cloudbehl
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@afreen23: All pull requests linked via external trackers have merged: Bugzilla bug 1897830 has been moved to the MODIFIED state.
/cherrypick release-4.6
@afreen23: #7552 failed to apply on top of branch "release-4.6":
[Before / After screenshots]