
Fix test: Probing container should have monotonically increasing restart #108652

Merged
merged 1 commit into kubernetes:master on Mar 15, 2022

Conversation

@249043822 (Member) opened this pull request:

What type of PR is this?

/kind failing-test

What this PR does / why we need it:

STEP: Creating pod liveness-e9027a63-3ab6-4aa8-8a06-999eb1cbbe34 in namespace container-probe-960
Feb 26 16:37:15.670: INFO: Started pod liveness-e9027a63-3ab6-4aa8-8a06-999eb1cbbe34 in namespace container-probe-960
STEP: checking the pod's current state and verifying that restartCount is present
Feb 26 16:37:15.674: INFO: Initial restart count of pod liveness-e9027a63-3ab6-4aa8-8a06-999eb1cbbe34 is 0
Feb 26 16:40:32.438: INFO: Restart count of pod container-probe-960/liveness-e9027a63-3ab6-4aa8-8a06-999eb1cbbe34 is now 1 (3m16.764015028s elapsed)
Feb 26 16:42:16.696: FAIL: pod container-probe-960/liveness-e9027a63-3ab6-4aa8-8a06-999eb1cbbe34 - expected number of restarts: 5, found restarts: 1
Full Stack Trace
k8s.io/kubernetes/test/e2e/common/node.glob..func2.8()
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:195 +0x117
k8s.io/kubernetes/test/e2e.RunE2ETests(0x23bf257)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x687
k8s.io/kubernetes/test/e2e.TestE2E(0x2336919)

Feb 26 17:55:32.662: INFO: Started pod liveness-f7ae0bae-3a90-4dda-9aea-06043e48b9fb in namespace container-probe-4436
STEP: checking the pod's current state and verifying that restartCount is present
Feb 26 17:55:32.668: INFO: Initial restart count of pod liveness-f7ae0bae-3a90-4dda-9aea-06043e48b9fb is 0
Feb 26 17:57:29.484: INFO: Restart count of pod container-probe-4436/liveness-f7ae0bae-3a90-4dda-9aea-06043e48b9fb is now 1 (1m56.8151682s elapsed)
Feb 26 17:59:07.965: INFO: Restart count of pod container-probe-4436/liveness-f7ae0bae-3a90-4dda-9aea-06043e48b9fb is now 2 (3m35.297103774s elapsed)
Feb 26 18:00:34.363: FAIL: pod container-probe-4436/liveness-f7ae0bae-3a90-4dda-9aea-06043e48b9fb - expected number of restarts: 5, found restarts: 2
Full Stack Trace
k8s.io/kubernetes/test/e2e/common/node.glob..func2.8()
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:195 +0x117
k8s.io/kubernetes/test/e2e.RunE2ETests(0x23bf257)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x687
k8s.io/kubernetes/test/e2e.TestE2E(0x2336919)

Looking at the failure logs, the sync was very slow under some circumstances, so I think we don't need 5 restarts; maybe set it to 3 and extend the waiting time.
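
For context, a rough sketch of the check behind this failure may help: the e2e helper RunLivenessTest (test/e2e/common/node/container_probe.go) essentially polls the pod's restartCount and fails if the expected number of restarts is not observed within the timeout, which is why a slow sync leaves the test at "found restarts: 1". The function below is a hypothetical, simplified stand-in, not the actual helper; the name and polling details are assumptions for illustration.

// Hypothetical, simplified stand-in for RunLivenessTest; the real e2e
// helper differs in detail and also verifies that the restart count
// only ever increases.
package probesketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForRestarts(ctx context.Context, c kubernetes.Interface, ns, name string, want int32, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		// A failing liveness probe makes the kubelet restart the container,
		// but crash-loop backoff stretches the gap between restarts, so the
		// observed count can stay below `want` if the timeout is too short.
		if len(pod.Status.ContainerStatuses) > 0 &&
			pod.Status.ContainerStatuses[0].RestartCount >= want {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("pod %s/%s: expected %d restarts within %v", ns, name, want, timeout)
}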

Which issue(s) this PR fixes:

Fixes #108504

Special notes for your reviewer:

/cc @SergeyKanzhelev @ehashman

Does this PR introduce a user-facing change?

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot added the release-note-none, kind/failing-test, size/XS, cncf-cla: yes, do-not-merge/needs-sig, and needs-triage labels on Mar 11, 2022.
@k8s-ci-robot (Contributor) commented:

@249043822: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-priority, area/test, sig/node, and sig/testing labels, and removed the do-not-merge/needs-sig label, on Mar 11, 2022.
@k8s-ci-robot added the area/conformance and sig/architecture labels on Mar 11, 2022.
@aojea (Member) commented on Mar 11, 2022:

Isn't this relaxing the condition too much?

@249043822 (Member, Author) replied:

> Isn't this relaxing the condition too much?

Any suggestions? Thanks.

@@ -192,7 +192,7 @@ var _ = SIGDescribe("Probing container", func() {
 	FailureThreshold: 1,
 }
 pod := livenessPodSpec(f.Namespace.Name, nil, livenessProbe)
-RunLivenessTest(f, pod, 5, time.Minute*5)
+RunLivenessTest(f, pod, 3, time.Minute*10)
A Member reviewer commented on this change:

The backoffs alone for 5 retries combined come to almost 2 minutes. Plus, the default timeout for a single restart is 4 minutes. So the current timeout is clearly not adequate here; it should be at least 6 minutes. Since the first restart must be the longest, as it involves pod scheduling, I'd expect 14 minutes to be enough delay for 5 retries (assuming restarting the pod needs half the time that scheduling and starting it does).

Suggested change:

-	RunLivenessTest(f, pod, 3, time.Minute*10)
+	// ~2 minutes backoff timeouts + 4 minutes defaultObservationTimeout + 2 minutes for each pod restart
+	RunLivenessTest(f, pod, 5, 2 * time.Minute + defaultObservationTimeout + 4 * 2 * time.Minute)

I think 14 minutes is an acceptable timeout for a conformance test.

If we switch to 3 retries, following the same logic we would need 30 sec + 4 minutes + 2 * 2 minutes = 8.5 minutes, so there is no need for 10 minutes. I'd prefer 5 retries though, just to be on the safe side and to avoid changing the conformance test.
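
For clarity, the suggested expression works out to 14 minutes. A minimal sketch of the arithmetic, assuming defaultObservationTimeout is the 4-minute value referenced above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumption: defaultObservationTimeout in the e2e test package is the
	// 4-minute single-restart window referenced in the review comment.
	defaultObservationTimeout := 4 * time.Minute

	// ~2 minutes of accumulated crash-loop backoff, plus the 4-minute window
	// for the first restart, plus 2 minutes for each of the remaining 4 restarts.
	timeout := 2*time.Minute + defaultObservationTimeout + 4*2*time.Minute

	fmt.Println(timeout) // 14m0s
}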

Member Author replied:

Done

@andyli029 commented:

> Isn't this relaxing the condition too much?

?

@SergeyKanzhelev (Member) left a comment:

/lgtm

let's see if this will make it

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: 249043822, SergeyKanzhelev

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the lgtm and approved labels on Mar 15, 2022.
@k8s-ci-robot merged commit 8bf64e4 into kubernetes:master on Mar 15, 2022.
SIG Node CI/Test Board automation moved this from Triage to Done on Mar 15, 2022.
@k8s-ci-robot added this to the v1.24 milestone on Mar 15, 2022.