Verify pod termination with E2E PreStop hook #94922
Conversation
Hi @knabben. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from 3ec96e6 to f0e2992.
/assign @spiffxp
test/e2e/node/pre_stop.go
Outdated
@@ -181,8 +179,8 @@ var _ = SIGDescribe("PreStop", func() {
		testPreStop(f.ClientSet, f.Namespace.Name)
	})

	ginkgo.It("graceful pod terminated should wait until preStop hook completes the process [Flaky]", func() {
this PR effectively changes what the test is validating. The original one was testing that, if the gracePeriod allows it and the pod's preStop hook is configured to run long enough, the pod will be running that preStop hook for the whole duration. The new test checks that the pod will not exist after the grace period, which doesn't confirm that the preStop hook was executed. Is that a correct understanding of the proposed change?
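For readers following along, here is a minimal sketch of the shape of pod this discussion assumes: a 30-second grace period and an exec preStop hook that outruns it. The pod name, image, and hook body are illustrative, not the test's actual fixture (and note that corev1.Handler was renamed LifecycleHandler in later API releases).

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// gracefulPod builds a pod whose preStop hook prints "preStop" every second
// for longer than the 30s grace period, so the hook runs the whole duration.
func gracefulPod() *corev1.Pod {
	grace := int64(30) // gracefulTerminationPeriodSeconds in the test
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-graceful"}, // illustrative name
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:  "nginx", // illustrative container
				Image: "nginx",
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{ // LifecycleHandler in newer APIs
						Exec: &corev1.ExecAction{
							// Hook body as described later in this thread:
							// print every second, outlasting the grace period.
							Command: []string{"sh", "-c", "while true; do echo preStop; sleep 1; done"},
						},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(gracefulPod().Name)
}
```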
@@ -201,28 +199,8 @@ var _ = SIGDescribe("PreStop", func() {
		err = podClient.Delete(context.TODO(), pod.Name, *metav1.NewDeleteOptions(gracefulTerminationPeriodSeconds))
		framework.ExpectNoError(err, "failed to delete pod")

		//wait up to graceful termination period seconds
		time.Sleep(30 * time.Second)
should the original test be fixed by changing this timeout to 15 seconds? 30 seconds is on the edge of the graceful termination timeout, and the pod MAY or MAY NOT be running at this point. After 15 seconds it is still running for sure if the preStop hook is being executed.
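To make the timing explicit, a small illustrative sketch; the constants match the test, but the names and the program itself are mine, not the test's code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		gracePeriod   = 30 * time.Second // terminationGracePeriodSeconds in the test
		hookExtension = 2 * time.Second  // kubelet's one-off preStop extension
	)
	// Checking at t = gracePeriod races the kubelet, which may kill the
	// container anywhere around [gracePeriod, gracePeriod+hookExtension).
	fmt.Println("flaky check window starts at:", gracePeriod)
	// Checking at half the grace period stays safely inside the window in
	// which the preStop hook is still executing.
	fmt.Println("safe check at:", gracePeriod/2)
}
```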
Force-pushed from f0e2992 to f68cf33.
Force-pushed from f68cf33 to 6e9ad57.
test/e2e/node/pre_stop.go
Outdated
@@ -202,7 +202,7 @@ var _ = SIGDescribe("PreStop", func() {
		framework.ExpectNoError(err, "failed to delete pod")

		//wait up to graceful termination period seconds
just a small suggestion - perhaps we can explain why it is 15 seconds in a comment here.
/ok-to-test
Force-pushed from 6e9ad57 to 2766930.
Failed with another flake: [sig-storage] HostPath should support r/w [NodeConformance]
/test pull-kubernetes-node-e2e
/test pull-kubernetes-node-e2e
@spiffxp good to go here?
This mark: kubernetes/test/e2e/node/pre_stop.go, line 212 in 14d380c.
I am not sure what the failure ratio of this test is; can we find that out at this point, so we can un-flake the test safely?
@oomichi the test was sleeping for the entire graceful period (30 seconds) and checking whether the pod was running in the extended grace period (+2 seconds); reducing the sleep time by half ensures the check happens while the pod is still running and inside the grace period.
/test pull-kubernetes-node-e2e
test/e2e/node/pre_stop.go
Outdated
-		time.Sleep(30 * time.Second)
+		// wait for less than the gracePeriod termination ensuring the
+		// preStop hook is still executing.
+		time.Sleep(15 * time.Second)

 		ginkgo.By("verifying the pod running state after graceful termination")
> @oomichi the test was sleeping for the entire graceful period (30 seconds) and checking whether the pod was running in the extended grace period (+2 seconds); reducing the sleep time by half ensures the check happens while the pod is still running and inside the grace period.
Thanks for your explanation. I guess I got the point.
I'd like to summarize my understanding to check that it is correct:
The original e2e test expects the pod to still be Running after graceful termination (30 seconds in this test case) because of the extended 2 seconds, as you said, since:
> If one of the Pod's containers has defined a preStop hook, the kubelet runs that hook inside of the container. If the preStop hook is still running after the grace period expires, the kubelet requests a small, one-off grace period extension of 2 seconds.
from https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination
However, that 2 seconds can sometimes make the test flake due to high workload on the upstream CI systems.
Then this PR changes the test to verify the pod is still running before the graceful termination expires, while the preStop hook is running.
If the above is correct, there are 2 points:
- The above message needs to be updated, because the graceful termination hasn't happened at that time.
- Technically this test doesn't check that the preStop hook is running. The preStop hook outputs "preStop" to its stdout every 1 second, but that is not checked on the test side. I don't think we need to add this check in this PR for the scope; it is better to add it with another PR.
Yes, this is the rationale here.
- Maybe: "verifying the pod is running while within the graceful termination period".
- Sure, the conformance test for the preStop hook follows this idea. I can propose another PR, or even a new test, with the stdout check logic.
Thanks, the above sounds good to me.
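For reference, one possible shape of that follow-up stdout check, purely as a sketch: e2epod.GetPodLogs is the framework helper one might reach for, with f and pod being the surrounding test's framework and pod; and since exec preStop hook output is not captured in the container log by default, the hook would likely need to redirect to the main process's stdout (e.g. /proc/1/fd/1) for this to work. None of this is part of this PR.

```go
// Hypothetical follow-up check, not part of this PR: assert that the hook's
// periodic "preStop" output made it into the container log.
logs, err := e2epod.GetPodLogs(f.ClientSet, f.Namespace.Name, pod.Name, pod.Spec.Containers[0].Name)
framework.ExpectNoError(err, "failed to read container logs")
gomega.Expect(logs).To(gomega.ContainSubstring("preStop"), "expected preStop hook output in the log")
```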
Force-pushed from 5e9c13f to d3165bc.
/lgtm
Thank you for updating the comment.
/test pull-kubernetes-conformance-kind-ipv6-parallel
Thanks for updating.
/lgtm
I think I would prefer to merge the change in this PR, but remove the Flaky tag in a follow-up PR, based on data from https://testgrid.k8s.io/google-gce#gce-cos-master-flaky-repro
I agree. @spiffxp PTAL; I'll open another PR to remove the Flaky tag after we confirm it's been running without flakes.
Nice point about keeping the Flaky label to get data.
/lgtm
/approve
Thank you! And thanks for your patience in getting this through
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: knabben, spiffxp
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Flaky from: go_test: //staging/src/k8s.io/legacy-cloud-providers/vsphere/go_default_test:run_2_of_2
/test pull-kubernetes-bazel-test
What type of PR is this?
/kind cleanup
/kind flake
/sig node
/sig testing
What this PR does / why we need it:
This PR de-flakes the PreStop E2E node test by reducing the wait period used when checking that the preStop hook is still executing.
Which issue(s) this PR fixes:
Related #94918
Special notes for your reviewer:
Does this PR introduce a user-facing change?: