Populate supervisor FSS values for node plugin in guest cluster #2386
Conversation
/ok-to-test
/approve
/approve
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: akankshapanse, chethanv28, divyenpatel
What this PR does / why we need it:
During online expansion of a volume in the guest cluster, `NodeExpandVolume()` rescans the device to pick up its updated size if the `online-expand-volume` FSS is enabled. The FSS check in `isFSSEnabled()` consults both the `k8sOrchestrator.internalFSS` value (populated from the internal-feature-state configmap) and the `k8sOrchestrator.supervisorFSS` value.

Both `k8sOrchestrator.supervisorFSS` and `k8sOrchestrator.internalFSS` are populated inside `initFSS()`. For the guest cluster node plugin, however, only `k8sOrchestrator.internalFSS` is populated from the internal-feature-state configmap; `k8sOrchestrator.supervisorFSS` is not populated at all. As a result, `isFSSEnabled()` returns false when `NodeExpandVolume()` checks the `online-expand-volume` FSS, the device rescan is skipped during online expansion, and the operation fails.

In short, any supervisor FSS value referenced in the node plugin of the guest cluster always reads as false, which can fail or affect node plugin functionality, because supervisor FSS values were never initialized for the node plugin in `initFSS()`. This change populates the supervisor FSS for the node plugin from the supervisor FSS configmap, so that the values can be used correctly in the node plugin during FSS checks.

This fixes the regression caused by https://github.com/kubernetes-sigs/vsphere-csi-driver/pull/2192/files.

NOTE: Please refer to this change before enabling the "csi-sv-feature-states-replication" FSS, as it may need to be reworked or revisited.
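The failure mode described above can be illustrated with a minimal sketch. This is not the actual driver code: the `orchestrator` type, its field names, and the `isFSSEnabled` body here are simplified stand-ins for the real `k8sOrchestrator` in this repository, assuming the guest-cluster check requires the feature to be enabled in both the internal and supervisor FSS maps.

```go
package main

import "fmt"

// orchestrator is an illustrative stand-in for k8sOrchestrator.
// Field names and logic are simplified for demonstration only.
type orchestrator struct {
	internalFSS   map[string]bool // from the internal-feature-state configmap
	supervisorFSS map[string]bool // from the supervisor FSS configmap
}

// isFSSEnabled (simplified): a feature reads as enabled only if both
// the internal and supervisor FSS maps report it enabled.
func (o *orchestrator) isFSSEnabled(feature string) bool {
	return o.internalFSS[feature] && o.supervisorFSS[feature]
}

func main() {
	// Before the fix: the node plugin never populated supervisorFSS,
	// so a lookup in the empty map returns false and the check fails
	// even though the internal flag is on.
	before := &orchestrator{
		internalFSS:   map[string]bool{"online-expand-volume": true},
		supervisorFSS: map[string]bool{}, // never populated
	}
	fmt.Println(before.isFSSEnabled("online-expand-volume")) // false

	// After the fix: supervisorFSS is also loaded for the node plugin,
	// so the same check now passes.
	after := &orchestrator{
		internalFSS:   map[string]bool{"online-expand-volume": true},
		supervisorFSS: map[string]bool{"online-expand-volume": true},
	}
	fmt.Println(after.isFSSEnabled("online-expand-volume")) // true
}
```

This mirrors why the device rescan in `NodeExpandVolume()` was skipped: the FSS lookup silently defaulted to false rather than erroring, so the operation failed downstream.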
Which issue this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged): fixes #

Testing done:
Tested online resize on a guest cluster where the resize was consistently failing; it now succeeds with the fix.
Special notes for your reviewer:
Release note: