
Remove TTL for scheduler cache to resolve the race condition when Cac… #110925

Merged

Conversation

kapiljain1989

What type of PR is this?

/kind bug

What this PR does / why we need it:

This PR removes the scheduler cache timeout (TTL) because of the race condition described below.

The race condition occurs in the following case:

pod1 is assigned to a node, the scheduler cache is updated with the assignment, and the bind operation is issued to the apiserver.

If the apiserver is under heavy load, the bind takes more than 30s and the scheduler expires the cached pod-to-node assignment.

The bind eventually succeeds, but because the apiserver is under heavy load, the pod update carrying the node name takes a long time to propagate to the scheduler.

Because the pod update took a long time to propagate and the cache entry had expired, the scheduler is unaware that the assignment actually happened, so it has no problem assigning a second pod to the same node, a pod that would not fit if the scheduler knew the first pod had eventually been bound to that node.
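
For illustration, a minimal sketch of what removing the effective TTL amounts to, assuming a single package-level constant controls assumed-pod expiration (the constant name and location below are assumptions for illustration, not necessarily the exact lines this PR touches):

```go
// A minimal sketch, assuming a package-level constant controls how long an
// "assumed" pod stays in the scheduler cache. The name is illustrative.
package scheduler

import "time"

// durationToExpireAssumedPod controls how long a pod that has been assumed
// (assigned in the cache, bind in flight) may remain in the cache before the
// entry is expired. Setting it to 0 disables expiration: the entry is only
// removed once the bound pod is confirmed via the apiserver watch, which
// closes the race described above.
const durationToExpireAssumedPod time.Duration = 0
```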

Which issue(s) this PR fixes:

Fixes #106361

Special notes for your reviewer:

Does this PR introduce a user-facing change?

None

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. kind/bug Categorizes issue or PR as related to a bug. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jul 2, 2022
@k8s-ci-robot
Contributor

@kapiljain1989: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jul 2, 2022
@k8s-ci-robot
Contributor

Hi @kapiljain1989. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jul 2, 2022
@kerthcet
Member

kerthcet commented Jul 4, 2022

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jul 4, 2022
@kapiljain1989
Author

/retest-required

@kapiljain1989
Author

Hi @kerthcet, this is my first PR. I'd like to know whether I should remove or disable the following expiration test cases:

k8s.io/kubernetes/pkg/scheduler: TestSchedulerNoPhantomPodAfterExpire
k8s.io/kubernetes/pkg/scheduler/internal/cache: TestExpirePod/case_0
k8s.io/kubernetes/pkg/scheduler/internal/cache: TestExpirePod/case_1
k8s.io/kubernetes/pkg/scheduler/internal/cache: TestExpirePod
k8s.io/kubernetes/pkg/scheduler/internal/cache: TestAddPodWillConfirm/case_0
k8s.io/kubernetes/pkg/scheduler/internal/cache: TestAddPodWillConfirm
k8s.io/kubernetes/pkg/scheduler/internal/cache: TestAddPodAfterExpiration/case_0
k8s.io/kubernetes/pkg/scheduler/internal/cache: TestAddPodAfterExpiration

I am asking because we are planning to remove the cache expiration logic.

@kerthcet
Member

kerthcet commented Jul 4, 2022

I don't think we're going to remove the TTL code entirely; maybe we can proceed as follows (see the sketch after this list):

  1. Pass time.Duration(-1) to newCache to indicate that the cache should never expire.
  2. When we calculate the podState deadline, mark it as never expiring.
  3. Handle this case in the background goroutine cleanupAssumedPods as well.

If so, there is no need to remove the expiration tests. But we should first come to a consensus. cc @alculquicondor @ahg-g @Huang-Wei
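
A rough sketch of what step 3 could look like (the types and fields here are simplified assumptions for illustration, not the real scheduler cache implementation):

```go
package cache

import (
	"sync"
	"time"
)

// podState is a simplified stand-in for the per-pod bookkeeping the
// scheduler cache keeps for assumed pods.
type podState struct {
	bindingFinished bool
	// deadline is nil when the entry must never be expired.
	deadline *time.Time
}

// cacheImpl is a simplified stand-in for the scheduler cache.
type cacheImpl struct {
	mu          sync.Mutex
	ttl         time.Duration
	assumedPods map[string]*podState
}

// cleanupAssumedPods removes assumed pods whose expiration deadline has
// passed. Entries with a nil deadline (expiration disabled) are kept until
// the bound pod is confirmed through the informer.
func (c *cacheImpl) cleanupAssumedPods(now time.Time) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for key, ps := range c.assumedPods {
		if !ps.bindingFinished {
			// The bind call has not returned yet; never expire mid-bind.
			continue
		}
		if ps.deadline != nil && now.After(*ps.deadline) {
			delete(c.assumedPods, key)
		}
	}
}
```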

@kapiljain1989
Author

/retest-required

@kapiljain1989
Author

/test pull-kubernetes-verify

@kapiljain1989
Author

/retest-required

@alculquicondor
Member

alculquicondor left a comment

I'm actually more inclined to remove the TTL code entirely.

We already increased the timeout to 15 minutes and nothing went wrong.

To be safe, we could wait one more release before doing the complete removal. We can still submit this PR in this release, as it is easy to revert.

Review comment on pkg/scheduler/internal/cache/cache.go (outdated, resolved)
@kapiljain1989
Author

@alculquicondor @denkensk @sanposhiho Please review, and if it looks OK, please approve the PR.

Thank you,
Kapil Jain

@alculquicondor
Member

alculquicondor left a comment

Please squash

Review comment on pkg/scheduler/internal/cache/cache.go (outdated, resolved)
@kapiljain1989
Author

/retest-required

@alculquicondor
Member

alculquicondor left a comment

looks good, but please squash

@ahg-g
Member

ahg-g commented Jul 4, 2022

+1 to keeping the logic and just setting the value to 0 for now; we can remove the logic after at least two releases.

@kapiljain1989
Author

looks good, but please squash

@alculquicondor Done

@alculquicondor
Member

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 4, 2022
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alculquicondor, kapiljain1989

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 4, 2022
@k8s-ci-robot k8s-ci-robot merged commit 47ad357 into kubernetes:master Jul 4, 2022
@k8s-ci-robot k8s-ci-robot added this to the v1.25 milestone Jul 4, 2022
@kerthcet
Member

kerthcet commented Jul 5, 2022

I think we still have one place unsettled: when we set the cache's TTL to zero, the deadline in podState should be nil. I patched the fix in #110954; a rough sketch is below.
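
For illustration, a rough sketch of that follow-up, reusing the simplified cacheImpl and podState types from the earlier sketch (the actual patch in #110954 may differ in detail):

```go
// finishBinding marks an assumed pod's bind call as finished. With a zero
// TTL (expiration disabled) the deadline stays nil, so cleanupAssumedPods
// never expires the entry; it is removed only when the bound pod is
// confirmed through the informer.
func (c *cacheImpl) finishBinding(key string, now time.Time) {
	c.mu.Lock()
	defer c.mu.Unlock()
	ps, ok := c.assumedPods[key]
	if !ok {
		return
	}
	ps.bindingFinished = true
	if c.ttl > 0 {
		dl := now.Add(c.ttl)
		ps.deadline = &dl
	} else {
		ps.deadline = nil
	}
}
```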
