e2e/storage: Add missing GinkgoRecover call #124870
base: master
Conversation
If you pass an invalid value for '-enabled-volume-drivers' then the test suite currently panics rather than failing correctly. Fix this. Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: stephenfin
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
@stephenfin: The following test failed:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/hold

The switch case is failing on purpose if the entry is not known.
@@ -49,6 +51,8 @@ var testDrivers = []func() storageframework.TestDriver{

 // This executes testSuites for in-tree volumes.
 var _ = utils.SIGDescribe("In-tree Volumes", func() {
+	defer ginkgo.GinkgoRecover()
Line 67 says:

framework.Failf("Invalid volume type %s in %v", driver, framework.TestContext.EnabledVolumeDrivers)
Right, and it still fails. The difference is that it fails gracefully rather than panicking.
Why should framework.Failf("Invalid volume type %s in %v", driver, framework.TestContext.EnabledVolumeDrivers) panic? Do you have a stack trace? Is it running in a goroutine or something?
Sure. I'm running the test like so:
❯ make all WHAT=test/e2e/e2e.test
❯ ginkgo run --succinct --focus '\[Driver: cinder.csi.openstack.org\] .* topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies' _output/bin/e2e.test -- -enabled-volume-drivers invalid
Without this patch, this panics like so:
Assertion or Panic detected during tree construction
k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:51
Ginkgo detected a panic while constructing the spec tree.
You may be trying to make an assertion in the body of a container node
(i.e. Describe, Context, or When).
Please ensure all assertions are inside leaf nodes such as BeforeEach,
It, etc.
Here's the content of the panic that was caught:
Your Test Panicked
k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:63
When you, or your assertion library, calls Ginkgo's Fail(),
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.
However, if you make an assertion in a goroutine, Ginkgo can't capture the
panic.
To circumvent this, you should call
defer GinkgoRecover()
at the top of the goroutine that caused this panic.
Alternatively, you may have made an assertion outside of a Ginkgo
leaf node (e.g. in a container node or some out-of-band function) - please
move your assertion to
an appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).
Learn more at:
http://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure
Learn more at: http://onsi.github.io/ginkgo/#no-assertions-in-container-nodes
Ginkgo ran 1 suite in 61.625393ms
Test Suite Failed
With this patch applied, it fails "correctly":
I0520 14:19:29.308283 65846 test_context.go:561] The --provider flag is not set. Continuing as if --provider=skeleton had been used.
I0520 14:19:29.308389 65846 e2e.go:109] Starting e2e run "9ee17e68-4e0b-4c42-814e-83b0cecebf27" on Ginkgo node 1
[1716211169] Kubernetes e2e suite - 0/2126 specs
------------------------------
[ReportBeforeSuite] [FAILED] [0.000 seconds]
[ReportBeforeSuite]
k8s.io/kubernetes/test/e2e/e2e_test.go:154
[FAILED] Invalid volume type invalid in [invalid]
In [ReportBeforeSuite] at: k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:67 @ 05/20/24 14:19:29.338
------------------------------
Summarizing 1 Failure:
[FAIL] [ReportBeforeSuite]
k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:67
Ran 0 of 2126 Specs in 0.000 seconds
FAIL! -- 0 Passed | 0 Failed | 0 Pending | 0 Skipped
--- FAIL: TestE2E (0.03s)
FAIL
Ginkgo ran 1 suite in 107.686932ms
Test Suite Failed
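The mechanism behind this can be sketched in plain Go without pulling in Ginkgo itself. The names fail, ginkgoRecover, and buildSpecTree below are hypothetical stand-ins for ginkgo.Fail, ginkgo.GinkgoRecover, and the spec-tree construction in in_tree_volumes.go, not the real Ginkgo API: Fail() aborts by panicking with a sentinel value, and a deferred GinkgoRecover converts that panic into a recorded failure instead of letting it crash the whole process.

```go
package main

import "fmt"

// failure mimics the sentinel value Ginkgo's Fail() panics with,
// so that GinkgoRecover can tell test failures apart from real panics.
type failure struct{ message string }

var recordedFailures []string

// fail stands in for ginkgo.Fail / framework.Failf: it does not return,
// it panics with the formatted failure message.
func fail(format string, args ...any) {
	panic(failure{message: fmt.Sprintf(format, args...)})
}

// ginkgoRecover stands in for ginkgo.GinkgoRecover: deferred at the top
// of a function, it turns a fail() panic into a recorded failure.
func ginkgoRecover() {
	if r := recover(); r != nil {
		if f, ok := r.(failure); ok {
			recordedFailures = append(recordedFailures, f.message)
			return
		}
		panic(r) // not a test failure: re-raise
	}
}

// buildSpecTree stands in for the container-node body that validates
// -enabled-volume-drivers while the spec tree is being constructed.
func buildSpecTree(drivers []string) {
	// Without this deferred recover, fail() below would panic all the
	// way up through spec-tree construction -- the bug this PR fixes.
	defer ginkgoRecover()
	for _, d := range drivers {
		switch d {
		case "aws", "gce": // known drivers: register their suites...
		default:
			fail("Invalid volume type %s in %v", d, drivers)
		}
	}
}

func main() {
	buildSpecTree([]string{"invalid"})
	fmt.Println(recordedFailures)
	// Prints: [Invalid volume type invalid in [invalid]]
}
```

With the deferred recover in place, the invalid flag value is recorded as an ordinary failure and the process exits cleanly, which mirrors the "fails correctly" output above.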
What type of PR is this?
/kind cleanup

What this PR does / why we need it:
If you pass an invalid value for -enabled-volume-drivers then the test suite currently panics rather than failing correctly. Fix this.

Which issue(s) this PR fixes:
Special notes for your reviewer:
Example reproducer: build and run tests that derive from this suite. Without this patch the suite panics; with this patch applied, it fails "correctly" (see the commands and output in the conversation above).
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: