
register a terminating pod if containers running #124890


Closed
olyazavr wants to merge 5 commits from the register-terminating-pod branch

Conversation

olyazavr
Contributor

What type of PR is this?

/kind bug

What this PR does / why we need it:

When a pod is terminating but its containers are still running (e.g. it is executing a preStop hook) and kubelet then restarts, kubelet re-syncs the pod but does not re-register its volume sources. That produces the error MountVolume.SetUp failed for volume "kube-public-cert" : object "browsers-dashboard"/"kube-public-cert" not registered, which can only be resolved by force-deleting the pod.

See this gist for logs: https://gist.github.com/olyazavr/f1980620e89dead80e72a23fcfe2014d

Basically, we want the registration logic that happens in SyncPod to also run in the SyncTerminatingPod flow. The toy sketch below illustrates the failure mode.
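
To make the failure mode concrete, here is a toy, self-contained Go sketch. This is not kubelet code; the cache type and key format are invented for illustration. It mimics a register-before-read cache like the one behind the "not registered" error:

package main

import (
	"fmt"
	"sync"
)

// Toy stand-in (not kubelet code) for a register-before-read cache like the
// kubelet's cache-based secret/configmap manager: GetObject fails with
// "not registered" unless RegisterPod ran first.
type cacheManager struct {
	mu         sync.Mutex
	registered map[string]bool
}

func newCacheManager() *cacheManager {
	return &cacheManager{registered: map[string]bool{}}
}

func (m *cacheManager) RegisterPod(key string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.registered[key] = true
}

func (m *cacheManager) GetObject(key string) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if !m.registered[key] {
		return fmt.Errorf("object %q not registered", key)
	}
	return nil
}

func main() {
	const key = "browsers-dashboard/kube-public-cert"

	// Normal sync path registers the pod's sources before mounting.
	m := newCacheManager()
	m.RegisterPod(key)
	fmt.Println("after SyncPod-style registration:", m.GetObject(key))

	// Simulate a kubelet restart: the cache starts empty again, but the pod
	// is already terminating, so only the terminating flow runs.
	m = newCacheManager()

	// Without this PR, the terminating flow never re-registers, so every
	// mount retry fails the same way the reported error does.
	fmt.Println("after restart, no re-registration:", m.GetObject(key))

	// The PR's idea: run the same registration in SyncTerminatingPod.
	m.RegisterPod(key)
	fmt.Println("after re-registering in the terminating flow:", m.GetObject(key))
}

The real fix wires the existing SyncPod-side registration into the terminating path; the toy above only demonstrates why a missing registration makes every mount retry fail.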

Testing: I have a pod with a preStop hook that sleeps for 10 minutes and a secret mounted into it. I delete the pod, then restart kubelet. Without this change, termination fails with the MountVolume error above; with it, the error does not appear and the pod terminates gracefully. A sketch of such a pod follows.
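
For reference, a sketch of the kind of reproduction pod described above; the pod name, image, secret name, and grace period are illustrative assumptions, not taken from the PR:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Grace period long enough to cover the 10-minute preStop sleep (assumed value).
var gracePeriod = int64(900)

// Sketch of a reproduction pod: long preStop hook plus a mounted secret.
var repro = &v1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "prestop-repro"},
	Spec: v1.PodSpec{
		TerminationGracePeriodSeconds: &gracePeriod,
		Containers: []v1.Container{{
			Name:    "app",
			Image:   "busybox",
			Command: []string{"sleep", "3600"},
			Lifecycle: &v1.Lifecycle{
				// The hook keeps the container running after deletion starts,
				// which is the window in which kubelet is restarted.
				PreStop: &v1.LifecycleHandler{
					Exec: &v1.ExecAction{Command: []string{"sleep", "600"}},
				},
			},
			VolumeMounts: []v1.VolumeMount{{Name: "secret-vol", MountPath: "/etc/secret"}},
		}},
		Volumes: []v1.Volume{{
			Name: "secret-vol",
			VolumeSource: v1.VolumeSource{
				// The secret whose source must stay registered during termination.
				Secret: &v1.SecretVolumeSource{SecretName: "test-secret"},
			},
		}},
	},
}

func main() { fmt.Println(repro.Name) }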

Which issue(s) this PR fixes:

Fixes #113289

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Fixes a bug where a terminating pod with a secret or configmap mounted would hit MountVolume.SetUp errors after a kubelet restart and never finish terminating unless force-deleted.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/bug Categorizes issue or PR as related to a bug. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 15, 2024
@k8s-ci-robot
Contributor

Hi @olyazavr. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. area/kubelet sig/node Categorizes an issue or PR as relevant to SIG Node. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels May 15, 2024
@olyazavr
Contributor Author

/sig node

@HirazawaUi
Contributor

/ok-to-test
I guess we should write an e2e test for it?
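
A rough sketch of what such an e2e_node test could look like. createPodWithPreStopAndSecret and restartKubelet are hypothetical stand-ins, not confirmed suite helpers, and SIGDescribe is assumed to be the suite's own wrapper:

package e2enode

import (
	"context"
	"time"

	"github.com/onsi/ginkgo/v2"
	"github.com/onsi/gomega"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/kubernetes/test/e2e/framework"
)

var _ = SIGDescribe("TerminatingPod", func() {
	f := framework.NewDefaultFramework("terminating-pod")

	ginkgo.It("should finish terminating across a kubelet restart", func(ctx context.Context) {
		// Pod with a long preStop hook and a mounted secret (hypothetical helper).
		pod := createPodWithPreStopAndSecret(ctx, f)

		// Start graceful deletion, then bounce kubelet while the hook runs.
		err := f.ClientSet.CoreV1().Pods(pod.Namespace).Delete(ctx, pod.Name, metav1.DeleteOptions{})
		framework.ExpectNoError(err)
		restartKubelet(ctx) // hypothetical helper

		// Without the fix, the pod gets stuck on MountVolume.SetUp errors;
		// with it, the pod should disappear within the grace period.
		gomega.Eventually(ctx, func() bool {
			_, err := f.ClientSet.CoreV1().Pods(pod.Namespace).Get(ctx, pod.Name, metav1.GetOptions{})
			return apierrors.IsNotFound(err)
		}, 2*time.Minute, time.Second).Should(gomega.BeTrue())
	})
})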

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. area/test sig/testing Categorizes an issue or PR as relevant to SIG Testing. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels May 15, 2024
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: olyazavr
Once this PR has been reviewed and has the lgtm label, please assign mrunalp for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@bart0sh
Contributor

bart0sh commented May 20, 2024

/triage accepted
/priority important-longterm

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels May 20, 2024
Comment on lines 58 to 60
gomega.Eventually(ctx, func(ctx context.Context) error {
return verifyPodDisappeared(ctx, f.ClientSet, podName, ns)
}, 2*time.Minute, time.Second*1).Should(gomega.BeNil())
Contributor


Suggested change
gomega.Eventually(ctx, func(ctx context.Context) error {
return verifyPodDisappeared(ctx, f.ClientSet, podName, ns)
}, 2*time.Minute, time.Second*1).Should(gomega.BeNil())
gomega.Eventually(ctx, func() bool {
	// requires: apierrors "k8s.io/apimachinery/pkg/api/errors"
	_, err := f.ClientSet.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	return apierrors.IsNotFound(err)
}, 2*time.Minute, time.Second).Should(gomega.BeTrue())

It'll be easier.

"github.com/onsi/gomega"
)

var _ = SIGDescribe("TerminatingPod", func() {
Contributor


I think it should move to test/e2e_node/terminate_pods_test.go

olyazavr added 5 commits (the last four on May 20, 2024 12:28), each with:
Signed-off-by: Olga Shestopalova <oshestopalova1@gmail.com>
@olyazavr olyazavr force-pushed the register-terminating-pod branch from 0c98f49 to 4331dd7 Compare May 20, 2024 16:28
@olyazavr
Contributor Author

/retest

@olyazavr olyazavr requested a review from HirazawaUi May 28, 2024 14:27
@olyazavr
Contributor Author

Could I get some review on this? We still see this issue, and it would be great to have a fix that doesn't require force-deleting the pods on the node.

@olyazavr
Contributor Author

olyazavr commented Jun 5, 2024

@haircommander could I get some clarity as to why this was moved to the "archive it" section of the board? This problem still exists and this patch is currently running in our production environment to address it

@HirazawaUi
Contributor

HirazawaUi commented Jun 17, 2024

/hold
Waiting for the question in #113289 (comment) to be answered.

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 17, 2024
@haircommander
Contributor

@haircommander could I get some clarity as to why this was moved to the "archive it" section of the board? This problem still exists and this patch is currently running in our production environment to address it

Sorry for the late reply, this got buried in my inbox. I archived it on the SIG Node CI board; it has some testing changes, but it's primarily the concern of the main board (where it remains in the "needs reviewer" state).

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 17, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 16, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this: the triage robot's /close comment above.

Successfully merging this pull request may close these issues.

Pod is stuck in terminating with "MountVolume.SetUp failed for volume ... not registered" error