Timeout Unschedulable Migration Target Pods #6737

Merged

merged 7 commits into kubevirt:main on Nov 19, 2021

Conversation

@davidvossel (Member) commented Nov 4, 2021

We now observe two timeouts:

  • 5m timeout will cancel any target pod stuck specifically in an "unschedulable" pending state for > 5m
  • 15m timeout will cancel any target pod stuck in pending for any reason for > 15m

I created the 15m catch-all to account for all the unknown reasons why a pod might never transition successfully to a running phase; this could be an issue on the node with pulling the container image, or perhaps something new we can't predict.

Pending migration target pods time out after 5 minutes when unschedulable
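
For illustration, here is a minimal sketch of how the two timeouts described above interact for a pending target pod. The constant and function names are hypothetical; this is not the code from the PR itself.

package example

const (
	// Hypothetical names matching the timeouts described above.
	unschedulableTimeoutSeconds = int64(5 * 60)  // pending and explicitly Unschedulable
	catchAllTimeoutSeconds      = int64(15 * 60) // pending for any reason at all
)

// shouldCancelTargetPod sketches the decision: cancel after 5m if the target
// pod is unschedulable, or after 15m no matter why it is still pending.
func shouldCancelTargetPod(secondsPending int64, unschedulable bool) bool {
	if unschedulable && secondsPending >= unschedulableTimeoutSeconds {
		return true
	}
	return secondsPending >= catchAllTimeoutSeconds
}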

@kubevirt-bot kubevirt-bot added release-note Denotes a PR that will be considered when it comes time to generate release notes. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. size/L labels Nov 4, 2021
@davidvossel (Member, Author)

/hold

I want to write a functional test before this is merged.

@kubevirt-bot kubevirt-bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 4, 2021
@maiqueb (Contributor) left a comment

Just a couple of comments; the most important one, IMO, is to name the variable to shed some light on the re-queueing mechanism.

Comment on lines 843 to 844
// Make sure we check this again after some time
c.Queue.AddAfter(migrationKey, time.Second*time.Duration(c.unschedulableTimeoutSeconds-diffSeconds))
@maiqueb (Contributor)

Maybe I'm just too thick, but I don't follow this c.unschedulableTimeoutSeconds-diffSeconds.

Would you assign it to a named variable to help people like me reason this out?

@davidvossel (Member, Author)

I changed this around a bit to try and help.

@maiqueb (Contributor)

Tks. Better now.
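
For anyone skimming the thread, here is a small self-contained sketch of the re-queue arithmetic being discussed, with the remaining time pulled out into a named variable. The helper name and signature are illustrative only and are not necessarily what ended up in the diff.

package example

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

// requeueUntilTimeout re-adds the migration key so the controller re-checks the
// pod right when the unschedulable timeout would expire: the delay is the full
// timeout minus the time the pod has already spent unschedulable.
func requeueUntilTimeout(queue workqueue.RateLimitingInterface, migrationKey string, timeoutSeconds, elapsedSeconds int64) {
	secondsLeftUntilTimeout := timeoutSeconds - elapsedSeconds
	queue.AddAfter(migrationKey, time.Duration(secondsLeftUntilTimeout)*time.Second)
}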

return nil
}

isUnschedulable := false
@maiqueb (Contributor)

I don't think this variable is required, nor the control code to exit the function if the pod is not k8sv1.PodReasonUnschedulable.

You could move all the code in https://github.com/kubevirt/kubevirt/pull/6737/files#diff-1baf192bb9c4560d7d40f008b7ba5c680ebce669b8f1b7639445abd3e7269688R820-R844 into a couple of new functions (named didPodTimeOut(pod *v1.Pod) bool {...} and func (c *MigrationController) podTimedOut(vmi, pod) error {...}) and then call them directly when the right condition is met - i.e. condition.Type == k8sv1.PodScheduled && condition.Status == k8sv1.ConditionFalse && condition.Reason == k8sv1.PodReasonUnschedulable.

Not a must though, I won't insist.
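
As a sketch of that suggestion only (the helper names come from the comment above, the vmi parameter is dropped here, and the bodies are illustrative rather than the code that was actually merged):

package example

import (
	"fmt"
	"time"

	k8sv1 "k8s.io/api/core/v1"
)

// didPodTimeOut reports whether the pod has existed longer than the timeout;
// the real controller derives the elapsed time from its own bookkeeping.
func didPodTimeOut(pod *k8sv1.Pod, timeout time.Duration) bool {
	return time.Since(pod.CreationTimestamp.Time) >= timeout
}

// handleUnschedulableTargetPod reacts only when the PodScheduled condition
// reports Unschedulable, matching the condition quoted in the comment above.
func handleUnschedulableTargetPod(pod *k8sv1.Pod, timeout time.Duration) error {
	for _, condition := range pod.Status.Conditions {
		if condition.Type == k8sv1.PodScheduled &&
			condition.Status == k8sv1.ConditionFalse &&
			condition.Reason == k8sv1.PodReasonUnschedulable &&
			didPodTimeOut(pod, timeout) {
			// The real controller would abort the migration / cancel the
			// target pod here; returning an error keeps the sketch small.
			return fmt.Errorf("target pod %s/%s unschedulable for more than %v", pod.Namespace, pod.Name, timeout)
		}
	}
	return nil
}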

@davidvossel (Member, Author)

I refactored this a bit to hopefully make it easier to follow

@@ -584,6 +599,43 @@ var _ = Describe("Migration watcher", func() {
shouldExpectMigrationSchedulingState(migration)
controller.Execute()
})

table.DescribeTable("should handle pod stuck in unschedulable state", func(phase virtv1.VirtualMachineInstanceMigrationPhase, shouldTimeout bool, timeLapse int64) {
@maiqueb (Contributor)

I'm not fond of defining the expected test state in the test's input parameters - i.e. the shouldTimeout boolean input argument - but I don't see simpler alternatives.

@davidvossel (Member, Author)

This is a fairly common pattern we use with tables: pass in various inputs and declare what output is expected. It's a way to condense more tests into a single table.
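
For readers unfamiliar with the pattern, here is a minimal standalone Ginkgo table sketch; the helper and entries below are made up for illustration and are not the PR's actual test cases.

package example_test

import (
	. "github.com/onsi/ginkgo"
	"github.com/onsi/ginkgo/extensions/table"
	. "github.com/onsi/gomega"
)

// hasTimedOut is a stand-in for the controller logic under test.
func hasTimedOut(secondsPending, timeoutSeconds int64) bool {
	return secondsPending >= timeoutSeconds
}

var _ = Describe("timeout handling", func() {
	table.DescribeTable("should decide whether a pending pod timed out",
		func(secondsPending int64, shouldTimeout bool) {
			// Both the inputs and the expected outcome come from the table entry.
			Expect(hasTimedOut(secondsPending, 300)).To(Equal(shouldTimeout))
		},
		table.Entry("well under the timeout", int64(10), false),
		table.Entry("past the timeout", int64(301), true),
	)
})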

@davidvossel (Member, Author)

/hold cancel

I added a functional test

@kubevirt-bot kubevirt-bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 5, 2021
@davidvossel (Member, Author)

/retest

@enp0s3 (Contributor) commented Nov 9, 2021

/assign @enp0s3

@maiqueb (Contributor) left a comment

Thanks. I think it reads better now.

@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Nov 10, 2021
@kbidarkar (Contributor) left a comment

Migration takes more than 5 minutes if the VM has a CD-ROM.
We need to consider increasing the timeout value here until this bug is fixed: https://bugzilla.redhat.com/show_bug.cgi?id=2014438

Both the source and target virt-launcher pods would be in the Running state during this long migration period of more than 5 minutes.

@davidvossel (Member, Author)

> Migration takes more than 5 minutes if the VM has a CD-ROM. We need to consider increasing the timeout value here until this bug is fixed: https://bugzilla.redhat.com/show_bug.cgi?id=2014438
>
> Both the source and target virt-launcher pods would be in the Running state during this long migration period of more than 5 minutes.

The timeout in this PR only gets triggered if the target pod is stuck in pending due to scheduling issues for > 5 minutes. If the target pod is running, this timeout will not occur.

@davidvossel (Member, Author)

/retest

@vladikr (Member) left a comment

Looks great to me.
/approve

@@ -103,6 +108,8 @@ func NewMigrationController(templateService services.TemplateService,
migrationStartLock: &sync.Mutex{},
clusterConfig: clusterConfig,
statusUpdater: status.NewMigrationStatusUpdater(clientset),

unschedulableTimeoutSeconds: defaultUnschedulableTimeoutSeconds,
@vladikr (Member)

Maybe not for this PR, but did you think about making this configurable? In that case, we could then override this value in a migration policy, for example in a policy that targets workloads with a requirement for specific resources.

@davidvossel (Member, Author)

Yep, we definitely could make it configurable. I'll give this some thought. I'm doing a follow-up PR to this now where I have a "catch-all" timeout for a target pod stuck in pending for any unknown reason; I'll need the timeout to be configurable for that simply to be able to functionally test it.
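
A hypothetical illustration of what "making it configurable" could look like at the controller level. The field name unschedulableTimeoutSeconds comes from this PR, but the setter below is only a sketch of one possible follow-up, not the actual implementation.

package example

// defaultUnschedulableTimeoutSeconds mirrors the 5m default described above.
const defaultUnschedulableTimeoutSeconds = int64(5 * 60)

// MigrationController is trimmed down to the field relevant to this thread.
type MigrationController struct {
	unschedulableTimeoutSeconds int64
}

// SetUnschedulableTimeout is an illustrative hook a functional test (or a
// future configuration/migration-policy mechanism) could use to shrink the
// timeout so the cancellation path can be exercised without waiting 5 minutes.
func (c *MigrationController) SetUnschedulableTimeout(seconds int64) {
	c.unschedulableTimeoutSeconds = seconds
}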

@kubevirt-bot kubevirt-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 11, 2021
@kubevirt-commenter-bot

/retest
This bot automatically retries jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@rmohr (Member) commented Nov 12, 2021

> Looks great to me.
> /approve

Also for me. 👍

/approve

@kubevirt-bot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rmohr, vladikr

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-commenter-bot

/retest
This bot automatically retries jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@kubevirt-bot kubevirt-bot removed the lgtm Indicates that a PR is ready to be merged. label Nov 12, 2021
@davidvossel (Member, Author)

@vladikr @rmohr take another look. I made a new change.

We now observe two timeouts:

  • 5m timeout will cancel any target pod stuck specifically in an "unschedulable" pending state for > 5m
  • 15m timeout will cancel any target pod stuck in pending for any reason for > 15m

I created the 15m catch-all to account for all the unknown reasons why a pod might never transition successfully to a running phase; this could be an issue on the node with pulling the container image, or perhaps something new we can't predict.

@stu-gott (Member)

/retest

@kubevirt-bot kubevirt-bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 17, 2021
[7 commits pushed, each Signed-off-by: David Vossel <davidvossel@gmail.com>]
@kubevirt-bot kubevirt-bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 18, 2021
@kubevirt-bot (Contributor) commented Nov 18, 2021

@davidvossel: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-kubevirt-check-tests-for-flakes
Commit: 7a99b5a
Details: link
Required: false
Rerun command: /test pull-kubevirt-check-tests-for-flakes

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@davidvossel (Member, Author)

/retest

@rmohr (Member) commented Nov 19, 2021

/lgtm

@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Nov 19, 2021
@kubevirt-bot kubevirt-bot merged commit 0ec801d into kubevirt:main Nov 19, 2021
@enp0s3 (Contributor) commented Nov 25, 2021

/cherry-pick release-0.44

@kubevirt-bot (Contributor)

@enp0s3: #6737 failed to apply on top of branch "release-0.44":

Applying: Timeout unschedulable migration target pods
Applying: unit test for timing out of unschedulable migration pods
Using index info to reconstruct a base tree...
M	pkg/virt-controller/watch/migration_test.go
Falling back to patching base and 3-way merge...
Auto-merging pkg/virt-controller/watch/migration_test.go
CONFLICT (content): Merge conflict in pkg/virt-controller/watch/migration_test.go
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0002 unit test for timing out of unschedulable migration pods
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

In response to this:

/cherry-pick release-0.44

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
