
Use exponential backoff for failing migrations #8530

Merged

merged 1 commit into kubevirt:main from failed-migration-backoff on Nov 15, 2022

Conversation

acardace
Member

@acardace acardace commented Sep 27, 2022

What this PR does / why we need it:

When migrating a VM fails, an exponentially increasing backoff will be applied before retrying.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
https://bugzilla.redhat.com/show_bug.cgi?id=2124528

Special notes for your reviewer:

Release note:

Use exponential backoff for failing migrations

@kubevirt-bot kubevirt-bot added release-note Denotes a PR that will be considered when it comes time to generate release notes. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. size/L labels Sep 27, 2022
@acardace acardace marked this pull request as draft September 27, 2022 13:30
@kubevirt-bot kubevirt-bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Sep 27, 2022
@acardace
Member Author

/uncc @maiqueb
/uncc @marceloamaral

@acardace
Member Author

/cc @vladikr
/cc @xpivarc
/cc @jean-edouard

@acardace
Member Author

Can you take a look and share your thoughts about the approach, the default values, etc.?

I still have to implement unit and e2e tests.

@acardace acardace force-pushed the failed-migration-backoff branch 2 times, most recently from b8605a4 to efad57d on October 3, 2022 15:30
@acardace acardace marked this pull request as ready for review October 3, 2022 15:32
@kubevirt-bot kubevirt-bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Oct 3, 2022
@larrydewey

I will take a look at this today.

@jean-edouard
Contributor

/retest

@kubevirt-bot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jean-edouard

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot kubevirt-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 6, 2022
@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Nov 8, 2022
@acardace acardace requested review from jean-edouard and removed request for vladikr, marceloamaral and stu-gott November 8, 2022 13:31
Contributor

@enp0s3 enp0s3 left a comment

@acardace Hi, PR looks great! I have a few questions, please see below

@@ -1113,6 +1170,12 @@ func (c *MigrationController) sync(key string, migration *virtv1.VirtualMachineI
return c.handlePreHandoffMigrationCancel(migration, vmi, pod)
}

if err = c.handleMigrationBackoff(key, vmi, migration); err != nil {
Contributor

We can get an err return if c.listMigrationsMatchingVMI(vmi.Namespace, vmi.Name) fails; in that case the error type is not MigrationBackoffReason, so we need to check the returned error type before we generate an event.

Member

Good catch!

Member Author

fixed, thank you for spotting that!
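A minimal, self-contained sketch of the sentinel-error pattern this thread converges on (the errors.Is(err, migrationBackoffError) check shows up in the updated hunk further down in the review); the helper signature here is simplified and hypothetical, not the controller's actual code:

```go
// Sketch only: handleMigrationBackoff returns a dedicated sentinel error when
// the VMI is still inside its backoff window, so the caller can distinguish it
// from unrelated failures (e.g. a failed listMigrationsMatchingVMI call)
// before emitting an event.
package main

import (
	"errors"
	"fmt"
	"time"
)

// migrationBackoffError is the sentinel returned while the backoff is active.
var migrationBackoffError = errors.New("migration backoff active")

// handleMigrationBackoff is a simplified stand-in for the controller method:
// it may fail for ordinary reasons (listing migrations) or because the
// backoff is still active.
func handleMigrationBackoff(remaining time.Duration, listErr error) error {
	if listErr != nil {
		return fmt.Errorf("listing migrations: %w", listErr)
	}
	if remaining > 0 {
		return migrationBackoffError
	}
	return nil
}

func main() {
	err := handleMigrationBackoff(30*time.Second, nil)
	switch {
	case errors.Is(err, migrationBackoffError):
		// Only the backoff case should produce the MigrationBackoff event.
		fmt.Println("would emit a MigrationBackoff event and requeue")
	case err != nil:
		fmt.Println("unrelated error, handled separately:", err)
	default:
		fmt.Println("no backoff, proceed with the migration")
	}
}
```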

return metav1.Time{}
}

outOffBackoffTS := getFailedTS(migrations[1]).Add(backoff)
Contributor

@xpivarc Hi, do you mean to use status.migrationState.endTimestamp instead of PhaseTransitionTimestamp?
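For context, a small illustrative sketch of what the out-of-backoff timestamp computation amounts to: take a timestamp of the previous failed migration and add the current backoff to it. The types and the FailedTimestamp field below are simplified placeholders, not the real virtv1 API; the open question in this thread is which real field (a phase-transition timestamp or status.migrationState.endTimestamp) should serve as the reference point.

```go
// Illustrative sketch only, with simplified placeholder types (not the real
// virtv1 objects): a retry becomes allowed once the previous failed
// migration's reference timestamp plus the current backoff has passed.
package main

import (
	"fmt"
	"time"
)

// failedMigration stands in for a finalized VMIM; the real object exposes
// phase-transition timestamps and status.migrationState.endTimestamp.
type failedMigration struct {
	FailedTimestamp time.Time // hypothetical: when the migration finished failing
}

// outOfBackoffTS returns the point in time after which a new migration
// attempt is allowed again.
func outOfBackoffTS(previous failedMigration, backoff time.Duration) time.Time {
	return previous.FailedTimestamp.Add(backoff)
}

func main() {
	prev := failedMigration{FailedTimestamp: time.Now().Add(-10 * time.Second)}
	ts := outOfBackoffTS(prev, 20*time.Second)
	fmt.Println("retry allowed after:", ts, "still in backoff:", time.Now().Before(ts))
}
```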

deleteEvents(eventListOpts)
})

It("[Serial] after a successful migration backoff should be cleared", func() {
Contributor

Why wouldn't it run in parallel?

Member

We are deleting events, although with some effort we could make this parallel.

Member Author

I'm just sticking to the existing functions.

@kubevirt-bot kubevirt-bot removed the lgtm Indicates that a PR is ready to be merged. label Nov 9, 2022
return metav1.Time{}
}

outOffBackoffTS := getFailedTS(migrations[1]).Add(backoff)
Contributor

@acardace Can we address the case of a cancelled migration?

@@ -1112,6 +1172,11 @@ func (c *MigrationController) sync(key string, migration *virtv1.VirtualMachineI
if migration.DeletionTimestamp != nil {
return c.handlePreHandoffMigrationCancel(migration, vmi, pod)
}
if err = c.handleMigrationBackoff(key, vmi, migration); errors.Is(err, migrationBackoffError) {
Contributor

+1

@acardace
Member Author

acardace commented Nov 9, 2022

@enp0s3 isn't the canceled migration case handled at

if migration.DeletionTimestamp != nil {
?

@acardace
Member Author

/retest-required

With this patch when migrating a VM fails an increasingly exponential
backoff will be applied before retrying.

Signed-off-by: Antonio Cardace <acardace@redhat.com>
@enp0s3
Contributor

enp0s3 commented Nov 14, 2022

/lgtm
/unhold

@acardace Awesome, thanks for addressing the comments!

@kubevirt-bot kubevirt-bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 14, 2022
@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Nov 14, 2022
@kubevirt-bot
Contributor

@acardace: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
pull-kubevirt-e2e-k8s-1.23-operator-nonroot 544e68e link true /test pull-kubevirt-e2e-k8s-1.23-operator-nonroot
pull-kubevirt-e2e-k8s-1.25-sig-network 58ad400 link false /test pull-kubevirt-e2e-k8s-1.25-sig-network

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@kubevirt-bot kubevirt-bot merged commit 6701a21 into kubevirt:main Nov 15, 2022
@acardace
Member Author

/cherrypick release-0.58

@kubevirt-bot
Contributor

@acardace: new pull request created: #8784

In response to this:

/cherrypick release-0.58

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@rmohr
Member

rmohr commented Nov 16, 2022

Following the bugzilla discussion, I am curious why we did not solve the pod accumulation and instead went for a back-off.

Here are some thoughts; I would be happy to hear your reasoning:

  • I could not identify a maximum back-off; do you rely on the GC for migration objects? Suppose the backoff kicked in 65 times (the next retry would be in 10 minutes), I resolve a node problem, and then for 10 minutes nothing happens, delaying my upgrade work? I would suggest not relying on the garbage collection and adding clear limits.
  • What if the migration goes to another node? Would we try it faster?
  • It feels slightly out of band with the default k8s behaviour. We also don't say: "user A has now posted the same failing VM yaml 10 times, let's back off". I know that maintenance with drains is tough. How fast does it retry? Did you consider fast retries combined with just GCing the pods, as opposed to this back-off, to be insufficient?

@xpivarc
Member

xpivarc commented Nov 16, 2022

Stating for others looking into this:

Following the bugzilla discussion, I am curious why we did not solve the pod accumulation and instead went for a back-off.

Here are some thoughts; I would be happy to hear your reasoning:

* I could not identify a maximum back-off; do you rely on the GC for migration objects? Suppose the backoff kicked in 65 times (the next retry would be in 10 minutes), I resolve a node problem, and then for 10 minutes nothing happens, delaying my upgrade work? I would suggest not relying on the garbage collection and adding clear limits.

The maximum backoff is 20 s * 2^(n-1), where n is the constant defaultFinalizedMigrationGarbageCollectionBuffer = 5. So we can expose this and make it tunable.

On the second part: right now the backoff would be applied even if you fixed the problem. Here we can only recommend a workaround: remove the last VMIM object.
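For illustration, a minimal, self-contained sketch of the backoff series described above (20 s, doubled per consecutive failure, effectively capped by the garbage-collection buffer of 5 finalized migrations, i.e. 320 s). The names initialBackoff and backoffForFailureCount are hypothetical; this is not the controller's actual code:

```go
// Sketch only: reproduces the backoff series discussed above. Only
// defaultFinalizedMigrationGarbageCollectionBuffer finalized migration
// objects are kept around, which implicitly caps the backoff at 320 s.
package main

import (
	"fmt"
	"time"
)

const (
	initialBackoff                                   = 20 * time.Second // illustrative base value
	defaultFinalizedMigrationGarbageCollectionBuffer = 5
)

// backoffForFailureCount returns 20s * 2^(n-1) for n consecutive failures,
// never exceeding the value reachable with the GC buffer (20s * 2^4 = 320s).
func backoffForFailureCount(n int) time.Duration {
	if n <= 0 {
		return 0
	}
	if n > defaultFinalizedMigrationGarbageCollectionBuffer {
		n = defaultFinalizedMigrationGarbageCollectionBuffer
	}
	return initialBackoff * time.Duration(1<<(n-1))
}

func main() {
	// Prints 20s, 40s, 1m20s, 2m40s, then stays at 5m20s (320 s).
	for n := 1; n <= 7; n++ {
		fmt.Printf("consecutive failures=%d backoff=%v\n", n, backoffForFailureCount(n))
	}
}
```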

* What if the migration goes to another node? Would we try it faster?

No. As of now we can't even know if the migration is targeted to another node.

* It feels slightly out of band with the default k8s behaviour. We also don't say: "user A has now posted the same failing VM yaml 10 times, let's back off". I know that maintenance with drains is tough. How fast does it retry? Did you consider fast retries combined with just GCing the pods, as opposed to this back-off, to be insufficient?

Eviction is often retried continuously. We can look into suggesting a backoff to clients, but they could still ignore the suggestion. Even with pod GC the problem would remain. It seems reasonable to constrain ourselves to VMIMs created by eviction for now and revisit the overall backoff later.

@acardace
Member Author

Following the bugzilla discussion, I am curious why we did not solve the pod accumulation and instead went for a back-off.

Here are some thoughts; I would be happy to hear your reasoning:

* I could not identify a maximum back-off; do you rely on the GC for migration objects? Suppose the backoff kicked in 65 times (the next retry would be in 10 minutes), I resolve a node problem, and then for 10 minutes nothing happens, delaying my upgrade work? I would suggest not relying on the garbage collection and adding clear limits.

Yes, it probably makes sense to make it explicit in code. The limit, as Lubo said, is implicit and governed by the migration object GC; in any case it is 320 seconds.

* What if the migration goes to another node? Would we try it faster?

No, but what if it's the source node that has issues? I'm not sure it makes sense to tweak it like this.

* It feels slightly out of band with the default k8s behaviour. We also don't say: "user A has now posted the same failing VM yaml 10 times, let's back off". I know that maintenance with drains is tough. How fast does it retry? Did you consider fast retries combined with just GCing the pods, as opposed to this back-off, to be insufficient?

@stu-gott
Member

/cherry-pick release-0.53

@kubevirt-bot
Contributor

@stu-gott: #8530 failed to apply on top of branch "release-0.53":

Applying: migration: use exponential backoff for failing migrations
Using index info to reconstruct a base tree...
M	pkg/virt-controller/watch/migration.go
M	pkg/virt-controller/watch/migration_test.go
M	pkg/virt-controller/watch/vmi.go
M	staging/src/kubevirt.io/api/core/v1/types.go
M	tests/migration_test.go
Falling back to patching base and 3-way merge...
Auto-merging tests/migration_test.go
CONFLICT (content): Merge conflict in tests/migration_test.go
Auto-merging staging/src/kubevirt.io/api/core/v1/types.go
Auto-merging pkg/virt-controller/watch/vmi.go
Auto-merging pkg/virt-controller/watch/migration_test.go
CONFLICT (content): Merge conflict in pkg/virt-controller/watch/migration_test.go
Auto-merging pkg/virt-controller/watch/migration.go
CONFLICT (content): Merge conflict in pkg/virt-controller/watch/migration.go
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 migration: use exponential backoff for failing migrations
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

In response to this:

/cherry-pick release-0.53

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels

approved - Indicates a PR has been approved by an approver from all required OWNERS files.
dco-signoff: yes - Indicates the PR's author has DCO signed all their commits.
kind/api-change - Categorizes issue or PR as related to adding, removing, or otherwise changing an API.
lgtm - Indicates that a PR is ready to be merged.
release-note - Denotes a PR that will be considered when it comes time to generate release notes.
size/L