
Fix volume detach on hotplug attachment pod delete #10165

Merged
merged 2 commits into kubevirt:main from awels:dont_detach_deleted_attachment_pod on Jul 26, 2023

Conversation

@awels (Member) commented Jul 24, 2023

What this PR does / why we need it:
When the hotplug attachment pod is deleted, the VMI volume status goes back to Bound, which triggers the manager to detach the volume from the running VM, interrupting any I/O on that volume. The pod is then re-created and the volume is re-attached so operation can continue, but if the volume was mounted it needs to be re-mounted in the VM.

This commit modifies the logic so that once the volume is Ready, it cannot go back to Bound when the attachment pod disappears. This prevents the detachment and the resulting I/O issues on the running VM.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:

Release note:

BugFix: deleting the hotplug attachment pod will no longer detach volumes that were not removed.

@kubevirt-bot added labels: release-note, dco-signoff: yes, size/L (Jul 24, 2023)
@awels force-pushed the dont_detach_deleted_attachment_pod branch from ff2c1cb to 203e56d on July 24, 2023 13:54
When the hotplug attachment pod is deleted, the VMI
volumestatus goes back to bound, which triggers the
manager to detach the volume from the running VM
interrupting any IO on that volume. The pod is then
re-created and the volume gets re-attached and operation
can continue, but if the volume is mounted, it needs
to be re-mounted in the VM.

This commit modifies the logic so that if the volume is
ready, it cannot go back to bound if the attachment pod
disappears. This prevents the detachments and issues with
IO on the running VM.

Signed-off-by: Alexander Wels <awels@redhat.com>
@awels force-pushed the dont_detach_deleted_attachment_pod branch from 203e56d to bc0073f on July 24, 2023 18:48
@awels (Member Author) commented Jul 24, 2023

/test pull-kubevirt-unit-test

-    return currentPhase == "" || currentPhase == virtv1.VolumeBound || currentPhase == virtv1.VolumePending ||
-        currentPhase == virtv1.HotplugVolumeAttachedToNode
+    return (currentPhase == "" || currentPhase == virtv1.VolumeBound || currentPhase == virtv1.VolumePending ||
+        currentPhase == virtv1.HotplugVolumeAttachedToNode) && currentPhase != virtv1.VolumeReady
Member:

This seems kind of weird: how does adding currentPhase != virtv1.VolumeReady change anything? currentPhase cannot be any of the explicitly checked values AND virtv1.VolumeReady at the same time.

I think a switch statement with all the valid values enumerated may be the cleanest.
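
For illustration, a minimal sketch of the enumerated-switch shape suggested here, assuming the VolumePhase type and constants from kubevirt.io/api/core/v1 used in the diff above; the function name is illustrative, not necessarily the merged code:

```go
package example

import virtv1 "kubevirt.io/api/core/v1"

// canMoveToAttachedPhase enumerates the phases from the original condition.
// Anything else, including VolumeReady, falls through to false, so deleting
// the attachment pod cannot push a ready volume's status back to Bound.
func canMoveToAttachedPhase(currentPhase virtv1.VolumePhase) bool {
	switch currentPhase {
	case "", virtv1.VolumeBound, virtv1.VolumePending, virtv1.HotplugVolumeAttachedToNode:
		return true
	default:
		return false
	}
}
```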

Member Author:

You make a good point, they are mutually exclusive.

status.Phase = phase
status.Message = message
status.Reason = reason
if status.Phase != virtv1.VolumeReady {
Member:

Are you sure that's the only value we care about? Also, would it make sense to create a helper function instead of the explicit check?

Member Author:

Yes, I am sure: in all the other phases the volume is not mounted in the VM, so switching the phase won't matter. But I can definitely make it a helper; it might explain the intent a little better.
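
As a rough sketch of what such a helper might look like (the name volumeReady is hypothetical, chosen only to make the intent explicit, and the VolumeStatus/VolumePhase types are assumed from kubevirt.io/api/core/v1; this is not necessarily the code that was merged):

```go
package example

import virtv1 "kubevirt.io/api/core/v1"

// volumeReady reports whether the volume is already mounted in the VM.
// Once a volume is Ready, the controller should not move its phase (or
// message/reason) back just because the attachment pod went away.
func volumeReady(status virtv1.VolumeStatus) bool {
	return status.Phase == virtv1.VolumeReady
}
```

The guard in the snippet above would then read if !volumeReady(status) { ... }, which states the intent more directly than the raw phase comparison.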

Remove unneeded phase check, and move other check into
its own function in case we need more elaborate checks

Signed-off-by: Alexander Wels <awels@redhat.com>
@awels (Member Author) commented Jul 25, 2023

/retest-required

@mhenriks (Member):

/lgtm
/approve

@kubevirt-bot added the lgtm label (Jul 25, 2023)
@kubevirt-bot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mhenriks

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot added the approved label (Jul 25, 2023)
@kubevirt-bot merged commit d2f8b4c into kubevirt:main on Jul 26, 2023
36 checks passed
@awels (Member Author) commented Jul 26, 2023

/cherrypick release-1.0

@kubevirt-bot (Contributor):

@awels: new pull request created: #10176

In response to this:

/cherrypick release-1.0

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
