
Interrupted E2E tests do not run instructions in AfterEach #96177

Closed
chrishenzie opened this issue Nov 3, 2020 · 13 comments

Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/testing Categorizes an issue or PR as relevant to SIG Testing.

Comments

@chrishenzie
Member

/sig testing

What happened:
When running E2E tests that use the E2E test framework together with ginkgo.AfterEach(), interrupting the test with SIGINT (Ctrl+C) skips the steps in ginkgo.AfterEach(); however, the framework's own AfterEach() still runs (which cleans up the test namespace).

What you expected to happen:
When a running E2E test receives SIGINT, the steps in ginkgo.AfterEach() should run (unless the test is interrupted a second time).
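
For clarity, here is a minimal sketch (illustration only, not the actual framework internals) of the two hooks involved: framework.NewDefaultFramework() registers the framework's own AfterEach (the namespace cleanup that still runs on interrupt), while the test's ginkgo.AfterEach() is the hook that gets skipped.

import (
        "github.com/onsi/ginkgo"
        "k8s.io/kubernetes/test/e2e/framework"
)

var _ = ginkgo.Describe("[sig-example] Example", func() {
        // Test-registered hook: this is the one that is skipped on SIGINT.
        ginkgo.AfterEach(func() {
                framework.Logf("test-specific cleanup")
        })

        // The framework registers its own ginkgo.AfterEach internally here
        // (namespace deletion); that hook still runs when the suite handles
        // the interrupt.
        f := framework.NewDefaultFramework("example")

        ginkgo.It("does something", func() {
                framework.Logf("running in namespace %s", f.Namespace.Name)
        })
})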

How to reproduce it (as minimally and precisely as possible):
I made the following change to the deployment.go test.

diff --git a/test/e2e/apps/deployment.go b/test/e2e/apps/deployment.go
index 18cef313feb..868b71a3e57 100644
--- a/test/e2e/apps/deployment.go
+++ b/test/e2e/apps/deployment.go
@@ -70,6 +70,7 @@ var _ = SIGDescribe("Deployment", func() {
        var c clientset.Interface

        ginkgo.AfterEach(func() {
+               framework.Logf("I'M IN THE AFTER EACH")
                failureTrap(c, ns)
        })

@@ -249,6 +250,9 @@ func testDeleteDeployment(f *framework.Framework) {
        deploy, err := c.AppsV1().Deployments(ns).Create(context.TODO(), d, metav1.CreateOptions{})
        framework.ExpectNoError(err)

+       framework.Logf("I'M SLEEPING ONE MINUTE")
+       time.Sleep(1 * time.Minute)
+
        // Wait for it to be updated to revision 1
        err = e2edeployment.WaitForDeploymentRevisionAndImage(c, ns, deploymentName, "1", WebserverImage)
        framework.ExpectNoError(err)

I then followed these steps locally; see the comments in the commands below for more details.

# Build test binary.
make WHAT=test/e2e/e2e.test

# Run the deployment E2E test and interrupt it (see the "^C" in the output).
# Observe that the log message in the AfterEach() does not show up.
_output/bin/e2e.test \
        --kubeconfig=$HOME/.kube/config \
        -ginkgo.focus='deployment reaping should cascade to its replica sets and pods'

... Skipping lots of test output here ...

[It] deployment reaping should cascade to its replica sets and pods
Nov  3 10:35:15.191: INFO: Creating simple deployment test-new-deployment
Nov  3 10:35:15.331: INFO: I'M SLEEPING ONE MINUTE
^C
---------------------------------------------------------
Received interrupt.  Running AfterSuite...
^C again to terminate immediately
Nov  3 10:35:20.082: INFO: Running AfterSuite actions on all nodes
Nov  3 10:35:20.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2294" for this suite.
Nov  3 10:35:20.236: INFO: Running AfterSuite actions on node 1
{"msg":"Test Suite completed","total":1,"completed":0,"skipped":2887,"failed":0}

Ran 1 of 2888 Specs in 6.506 seconds
FAIL! -- 1 Passed | 0 Failed | 0 Pending | 2887 Skipped

Anything else we need to know?:
If you replace the ginkgo.AfterEach() with framework.AddAfterEach() and re-run the steps above, the log message from the AfterEach() does show up.

Should we be using framework.AddAfterEach() everywhere in place of ginkgo.AfterEach() for E2E tests that use this framework?
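
For reference, a rough sketch of what that swap could look like in deployment.go, assuming the framework's (*Framework).AddAfterEach(name string, fn func(*Framework, bool)) method from this release; please double-check the exact signature before relying on it:

var _ = SIGDescribe("Deployment", func() {
        f := framework.NewDefaultFramework("deployment")

        // Assumption: AddAfterEach has this signature in this framework version.
        // Registered with the framework instead of ginkgo.AfterEach, so it is
        // invoked from the framework's own AfterEach, which still runs when
        // the suite is interrupted.
        f.AddAfterEach("deployment failure trap", func(f *framework.Framework, failed bool) {
                failureTrap(f.ClientSet, f.Namespace.Name)
        })

        // ... ginkgo.It specs unchanged ...
})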

Environment:

  • Kubernetes version (use kubectl version): v1.18.10
  • Cloud provider or hardware configuration: AWS
  • OS (e.g: cat /etc/os-release): Debian GNU/Linux
  • Kernel (e.g. uname -a): 5.7.17
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others:

@msau42

@chrishenzie chrishenzie added the kind/bug Categorizes issue or PR as related to a bug. label Nov 3, 2020
@k8s-ci-robot k8s-ci-robot added sig/testing Categorizes an issue or PR as relevant to SIG Testing. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Nov 3, 2020
@k8s-ci-robot
Contributor

@chrishenzie: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@aojea
Member

aojea commented Nov 4, 2020

xref: onsi/ginkgo#222

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 2, 2021
@chrishenzie
Member Author

/remove-lifecycle stale

This change is in-scope for ginkgo v2 and should be available in several months once it's implemented.
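
In case it helps with planning that migration, here is a rough sketch of how this cleanup might be expressed on Ginkgo v2, assuming v2's DeferCleanup and its interrupt handling (which, per the v2 docs, runs cleanup nodes for the in-flight spec before shutting down); worth re-verifying once the suite actually moves to v2:

import (
        ginkgo "github.com/onsi/ginkgo/v2"

        "k8s.io/kubernetes/test/e2e/framework"
)

var _ = ginkgo.Describe("Deployment (Ginkgo v2 sketch)", func() {
        f := framework.NewDefaultFramework("deployment")

        ginkgo.BeforeEach(func() {
                // Assumption: in Ginkgo v2, DeferCleanup callbacks run after the
                // spec, including when the run is interrupted.
                ginkgo.DeferCleanup(func() {
                        failureTrap(f.ClientSet, f.Namespace.Name)
                })
        })

        // ... specs ...
})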

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 17, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 18, 2021
@chrishenzie
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 19, 2021
@aojea
Member

aojea commented May 19, 2021

> /remove-lifecycle stale
>
> This change is in-scope for ginkgo v2 and should be available in several months once it's implemented.

Are we going to try to go to ginkgo v2?

@spiffxp spiffxp added this to To Triage in sig-testing issues Jul 27, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 17, 2021
@chrishenzie
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 26, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 24, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 24, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

sig-testing issues automation moved this from To Triage to Done Jan 23, 2022