
update pod state of scheduler cache when UpdatePod #64692

Merged: 1 commit merged into kubernetes:master on Jun 20, 2018

Conversation

5 participants
adohe (Member) commented Jun 4, 2018

Update the pod state map in the scheduler cache when UpdatePod is called. @k82cn @bsalamat

keep pod state consistent when scheduler cache UpdatePod
adohe (Member, Author) commented Jun 4, 2018

/assign @k82cn

@@ -317,6 +317,7 @@ func (cache *schedulerCache) UpdatePod(oldPod, newPod *v1.Pod) error {
 	if err := cache.updatePod(oldPod, newPod); err != nil {
 		return err
 	}
+	currState.pod = newPod

k82cn (Member) commented on the diff, Jun 4, 2018

Would you share the case this PR handles? Also, since we changed to using the pod's UID as the key, that may make the cache inconsistent.

k82cn (Member) Jun 4, 2018

Oh, sorry for the confusion: the UID does not change when a pod is updated, so the fix seems OK. Let me check it today :)

adohe (Member, Author) Jun 5, 2018

@k82cn I think this PR just makes sure the cache always stays consistent. We won't have any problem today if we don't update the podStates map, but if pod resources could be updated (as with in-place pod resource update), that would cause problems.

k82cn (Member) Jun 6, 2018

Yes, resource update is one case. Another case is GetPod, which may return the 'old' pod; can you help add a test for it? For an assumed pod, GetPod will return the correct value, but for assigned pods it may return the old pod (although no one uses this feature right now).

adohe (Member, Author) Jun 6, 2018

Sure, I will add a test case ASAP.

@timothysc timothysc removed their request for review Jun 4, 2018

@adohe adohe force-pushed the adohe:scheduler_cache branch from be6c394 to a5158e8 Jun 6, 2018

@k8s-ci-robot k8s-ci-robot added size/M and removed size/XS labels Jun 6, 2018

@adohe adohe force-pushed the adohe:scheduler_cache branch from a5158e8 to 6116c64 Jun 6, 2018

adohe (Member, Author) commented Jun 6, 2018

@k82cn Test added, PTAL.

k82cn (Member) commented Jun 6, 2018

/lgtm
/approve no-issue

@k8s-ci-robot k8s-ci-robot added the lgtm label Jun 6, 2018

k82cn (Member) commented Jun 6, 2018

Please update the release note :)

k8s-ci-robot (Contributor) commented Jun 6, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: adohe, k82cn

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

k82cn (Member) commented Jun 6, 2018

/approve

adohe (Member, Author) commented Jun 6, 2018

Release note added.

fejta-bot commented Jun 6, 2018

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel comment for consistent failures.

adohe (Member, Author) commented Jun 11, 2018

/test pull-kubernetes-node-e2e

adohe (Member, Author) commented Jun 11, 2018

/test pull-kubernetes-bazel-build

fejta-bot commented Jun 16, 2018

/retest

1 similar comment
fejta-bot commented Jun 20, 2018

/retest

k8s-github-robot (Contributor) commented Jun 20, 2018

Automatic merge from submit-queue (batch tested with PRs 64882, 64692, 64389, 60626, 64840). If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-github-robot k8s-github-robot merged commit bb6270b into kubernetes:master Jun 20, 2018

18 checks passed:

Submit Queue: Queued to run github e2e tests a second time
cla/linuxfoundation: adohe authorized
pull-kubernetes-bazel-build: Job succeeded
pull-kubernetes-bazel-test: Job succeeded
pull-kubernetes-cross: Skipped
pull-kubernetes-e2e-gce: Job succeeded
pull-kubernetes-e2e-gce-100-performance: Job succeeded
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded
pull-kubernetes-e2e-gke: Skipped
pull-kubernetes-e2e-kops-aws: Job succeeded
pull-kubernetes-integration: Job succeeded
pull-kubernetes-kubemark-e2e-gce: Job succeeded
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded
pull-kubernetes-local-e2e: Skipped
pull-kubernetes-local-e2e-containerized: Skipped
pull-kubernetes-node-e2e: Job succeeded
pull-kubernetes-typecheck: Job succeeded
pull-kubernetes-verify: Job succeeded