fix stateful set pod recreation and event spam #123809

Merged
merged 3 commits on Apr 18, 2024

Conversation

atiratree
Member

What type of PR is this?

/kind bug

What this PR does / why we need it:

  • do not emit events when pod reaches terminal phase
  • do not try to recreate pod until the old pod has been removed from
    etcd storage

Which issue(s) this PR fixes:

Fixes #122709

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Emission of RecreatingFailedPod and RecreatingTerminatedPod events has been removed from the StatefulSet lifecycle.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added release-note, size/L, kind/bug, cncf-cla: yes, do-not-merge/needs-sig, needs-triage, and needs-priority labels Mar 7, 2024
@k8s-ci-robot k8s-ci-robot added the sig/apps label and removed do-not-merge/needs-sig Mar 7, 2024
// Note that pods with phase Succeeded will also trigger this event. This is
// because final pod phase of evicted or otherwise forcibly stopped pods
// (e.g. terminated on node reboot) is determined by the exit code of the
// container, not by the reason for pod termination. We should restart the pod
// regardless of the exit code.
if isFailed(replicas[i]) || isSucceeded(replicas[i]) {
if isFailed(replicas[i]) {
ssc.recorder.Eventf(set, v1.EventTypeWarning, "RecreatingFailedPod",
Member Author

I do not see a use case behind this event. It is here only for tracking the terminal phases.

Without this event we will still get the following events:

default       5s          Normal    Killing                   pod/nginx-roll-2                Stopping container nginx
default       4s          Normal    SuccessfulCreate          statefulset/nginx-roll          create Pod nginx-roll-2 in StatefulSet nginx-roll successful

Member Author

I suppose we could move the event generation into the updatePod handler if we want to be backwards compatible. Thoughts?
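
For context, a rough sketch of what that backwards-compatible option could look like (hypothetical, not what this PR ends up doing; it assumes the owning StatefulSet set has already been resolved for curPod and that an event recorder is reachable in the handler):

// Hypothetical sketch only: emit the legacy events from the updatePod
// handler on the transition into a terminal phase instead of from the
// sync loop, so existing consumers keep seeing them.
if oldPod.Status.Phase != curPod.Status.Phase && podutil.IsPodPhaseTerminal(curPod.Status.Phase) {
    reason := "RecreatingTerminatedPod"
    if curPod.Status.Phase == v1.PodFailed {
        reason = "RecreatingFailedPod"
    }
    recorder.Eventf(set, v1.EventTypeWarning, reason,
        "StatefulSet %s/%s observed terminal Pod %s (phase %s)",
        set.Namespace, set.Name, curPod.Name, curPod.Status.Phase)
}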

Contributor

Moving it, or changing it to logging here, seems acceptable.

@@ -235,6 +235,9 @@ func (ssc *StatefulSetController) updatePod(logger klog.Logger, old, cur interfa
return
}
logger.V(4).Info("Pod objectMeta updated", "pod", klog.KObj(curPod), "oldObjectMeta", oldPod.ObjectMeta, "newObjectMeta", curPod.ObjectMeta)
if oldPod.Status.Phase != curPod.Status.Phase && podutil.IsPodPhaseTerminal(curPod.Status.Phase) {
logger.V(4).Info("StatefulSet Pod reached terminal phase", "pod", klog.KObj(curPod), "statefulSet", klog.KObj(set), "podPhase", curPod.Status.Phase)
Member Author

I am adding this logging in case somebody would like to debug these.

Contributor

Nit, why not just add that pod phase to the above line?

Member Author

yes, that might be even better for debugging
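
Concretely, the nit amounts to folding the phase into the existing objectMeta log line, roughly like this (a sketch; the final diff may still keep it as a separate line):

logger.V(4).Info("Pod objectMeta updated",
    "pod", klog.KObj(curPod),
    "oldObjectMeta", oldPod.ObjectMeta, "newObjectMeta", curPod.ObjectMeta,
    "podPhase", curPod.Status.Phase)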

updateRevision.Name,
replicaOrd)
// New pod should be generated on the next sync after the current pod is removed from etcd.
return true, nil
Member Author

We should try not to create the pods until the old one has been removed from etcd. Otherwise the kcm logs get spammed with:

 stateful_set.go:430] error syncing StatefulSet default/nginx-roll, requeuing: object is being deleted: pods "nginx-roll-2" already exists, the server was not able to generate a unique name for the object
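
Roughly, the change makes the replica-processing step delete the terminal pod and then stop for this sync instead of immediately issuing a create. A sketch of the flow (helper names such as isTerminalPhase are placeholders for the existing isFailed/isSucceeded checks; this is not the exact diff):

if isTerminalPhase(replicas[i]) {
    if err := ssc.podControl.DeleteStatefulPod(set, replicas[i]); err != nil {
        return true, err
    }
    // Do not create the replacement in this sync. The pod delete event will
    // trigger another sync once the old name is actually gone from storage,
    // so the apiserver no longer rejects a create for a name that still
    // exists with a deletion timestamp.
    return true, nil
}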

@atiratree atiratree force-pushed the fix-sts-events branch 2 times, most recently from 2a42314 to fe811c5 on March 7, 2024 23:03
@atiratree
Member Author

/triage accepted
/priority important-soon
/assign @soltysh

@k8s-ci-robot k8s-ci-robot added triage/accepted and priority/important-soon and removed needs-triage and needs-priority labels Mar 7, 2024
@atiratree atiratree force-pushed the fix-sts-events branch 3 times, most recently from a9728e2 to f952189 on March 11, 2024 16:09
@k8s-ci-robot k8s-ci-robot added area/e2e-test-framework, area/test, and sig/testing labels Mar 11, 2024
@atiratree atiratree force-pushed the fix-sts-events branch 4 times, most recently from 9497927 to 4fcd99f on March 11, 2024 18:58
Comment on lines 258 to 260
// - try at least 3 times if there are 0 replicas
// - assume that each replica can potentially generate 3 conflicts when statefulset status is updated
// e.g. sudden pod changes between (Running, Ready, Available, Terminating, Deleted)
Member Author

E2Es were failing without this fix: https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/123809/pull-kubernetes-conformance-kind-ga-only-parallel/1766049762357809152

Kubernetes e2e suite: [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] 1m56s
{ failed [FAILED] too many retries draining statefulset "ss"
In [It] at: k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:265 @ 03/08/24 10:57:02.069
}

I am not sure if this is supposed to test conflicts. If not, this could be changed to a timeout-based update.

Contributor

To me this looks like a helper written before the backoff was introduced. How about rewriting this with wait.ExponentialBackoff with an initial duration of 2s, factor 1.5, and 2 steps? wdyt?
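
For reference, a minimal sketch of that suggestion (the helper name, client wiring, and update closure are illustrative; only the backoff parameters come from the comment above):

import (
    "context"
    "time"

    appsv1 "k8s.io/api/apps/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// updateStatefulSetWithBackoff is an illustrative helper: it re-reads the
// StatefulSet, applies the mutation, and retries on update conflicts using
// the proposed backoff (2s initial duration, factor 1.5, 2 steps).
func updateStatefulSetWithBackoff(ctx context.Context, c kubernetes.Interface, ns, name string, update func(*appsv1.StatefulSet)) error {
    backoff := wait.Backoff{Duration: 2 * time.Second, Factor: 1.5, Steps: 2}
    return wait.ExponentialBackoff(backoff, func() (bool, error) {
        ss, err := c.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        update(ss)
        if _, err := c.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
            if apierrors.IsConflict(err) {
                return false, nil // conflict: back off and retry
            }
            return false, err
        }
        return true, nil
    })
}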

Member Author

+1, RetryOnConflict might be even better for this. Also, I based the backoff on DefaultRetry and increased the number of steps a bit.
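
And a sketch of the RetryOnConflict variant, assuming the same imports and helper shape as the sketch above plus k8s.io/client-go/util/retry; the step count used in the PR is not reproduced here:

func updateStatefulSetRetryOnConflict(ctx context.Context, c kubernetes.Interface, ns, name string, update func(*appsv1.StatefulSet)) error {
    // Base the backoff on retry.DefaultRetry and allow a few more attempts.
    backoff := retry.DefaultRetry
    backoff.Steps = 10 // illustrative value only
    return retry.RetryOnConflict(backoff, func() error {
        ss, err := c.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        update(ss)
        _, err = c.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{})
        return err // RetryOnConflict retries only when this is a Conflict error
    })
}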

Contributor

@soltysh soltysh left a comment

Some minor nits, but overall this looks reasonable.


- do not emit events when pod reaches terminal phase
- do not try to recreate pod until the old pod has been removed from
  etcd storage
@atiratree atiratree force-pushed the fix-sts-events branch 4 times, most recently from c3dbcd7 to b904810 on March 15, 2024 16:07
the statefulset controller now makes fewer requests per sync and can reconcile status faster, resulting in a higher chance of conflicts
Contributor

@soltysh soltysh left a comment

/lgtm
/approve
/label tide/merge-method-squash

@k8s-ci-robot k8s-ci-robot added tide/merge-method-squash and lgtm labels Mar 15, 2024
@k8s-ci-robot
Contributor

LGTM label has been added.

Git tree hash: 7aa2668daacc5ea3b06841104fe670296e1a2ed9

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: atiratree, soltysh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved label Mar 15, 2024
@atiratree
Member Author

unrelated
/test pull-kubernetes-e2e-gce


Successfully merging this pull request may close these issues.

RecreatingTerminatedPod/RecreatingFailedPod/SuccessfulDelete events leak
3 participants