e2e storage tests: usable out-of-tree #70862
Conversation
/test pull-kubernetes-e2e-gce-100-performance
/hold I am still testing this myself...
/lgtm
Exposing framework.VolumeTestConfig as part of the testsuite package API was confusing because it was unclear which of its values actually have an effect. How it was set was also awkward: a test driver had a copy that had to be overwritten at test runtime and then might have been updated and/or overwritten again by the driver.

Now testsuites has its own test config structure containing the values that might have to be set dynamically at runtime. Instead of overwriting a copy of that struct inside the test driver, the test driver takes some common defaults (specifically, the framework pointer and the prefix) when it gets initialized and then manages its own copy. For example, the hostpath driver has to lock the pods to a single node.

framework.VolumeTestConfig is still used internally, and test drivers can decide to run tests with a fully populated instance if needed (for example, after setting up an NFS server).
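A minimal sketch of the pattern described above, with hypothetical names (the real struct and fields live in the testsuites package; `TestConfig`, `hostpathDriver`, and `ClientNodeName` here are illustrative stand-ins):

```go
package main

import "fmt"

// TestConfig is a hypothetical stand-in for the per-driver test config
// described above: only the values that may change at runtime.
type TestConfig struct {
	Prefix         string
	Framework      string // stands in for the *framework.Framework pointer
	ClientNodeName string // e.g. hostpath must pin pods to one node
}

// hostpathDriver keeps its own copy of the config instead of mutating
// a struct shared with the suite.
type hostpathDriver struct {
	config TestConfig
}

// newHostpathDriver takes the common defaults at initialization time
// and then manages its own copy, adjusting driver-specific fields.
func newHostpathDriver(defaults TestConfig) *hostpathDriver {
	d := &hostpathDriver{config: defaults} // struct assignment copies the value
	d.config.ClientNodeName = "node-1"     // hypothetical driver-specific tweak
	return d
}

func main() {
	defaults := TestConfig{Prefix: "hostpath", Framework: "f"}
	d := newHostpathDriver(defaults)
	fmt.Println(d.config.ClientNodeName)       // the driver's copy was adjusted
	fmt.Println(defaults.ClientNodeName == "") // the shared defaults were not
}
```

Because the driver owns a value copy rather than a pointer into the suite's struct, its adjustments cannot leak back into the defaults handed to other drivers.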
Generated via hack/update-bazel.sh.
/test pull-kubernetes-integration
Linking PRs that had a test flake because of #71696
cs := f.ClientSet
ns := f.Namespace
n.externalPluginName = fmt.Sprintf("example.com/nfs-%s", ns.Name)

// Reset config. It might have been modified by a previous CreateVolume call.
Tests can run in parallel. Does each test case get its own unique object, or share the same object? If each test case has a new object, then this is not needed. If they share the same object, then this may not work very well.
Opened up #72288 to consider refactoring this later. I don't think this is a clean abstraction.
Michelle Au <notifications@github.com> writes:

> + // Reset config. It might have been modified by a previous CreateVolume call.
>
> Tests can run in parallel.

But only in different processes. This is a difference between Ginkgo and "go test".

> Does each test case get its own unique object, or share the same object?

Each process gets its own unique object, which is then shared between all tests running in that process.
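That sharing is exactly why the "Reset config" comment exists. A minimal sketch, with hypothetical names (`driver`, `runTest`, `resetConfig` are illustrative, not the real suite API):

```go
package main

import "fmt"

// driver stands in for the package-level object that all tests in one
// Ginkgo process share; parallel processes each construct their own.
type driver struct{ config string }

var shared = &driver{config: "default"}

// runTest mutates the shared config, the way a CreateVolume call might.
func runTest(name string) {
	shared.config = "modified by " + name
}

// resetConfig restores the shared state; without it, the next test in
// the same process would see the previous test's leftovers.
func resetConfig() {
	shared.config = "default"
}

func main() {
	runTest("test-1")
	resetConfig()
	fmt.Println(shared.config) // back to "default" before test-2 runs
}
```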
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: msau42, pohly. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/priority important-soon
PR kubernetes#70862 made each driver responsible for resetting its config, but as it turned out, one place was missed in that PR: the in-tree gcepd driver sets a node selector. Not resetting that caused other tests to fail randomly, depending on test execution order.

Now the test suite resets the config by taking a copy after setting up the driver and restoring that copy before each test. Long term, the intention is to separate the entire test config from the static driver info (kubernetes#72288), but for now resetting the config is the fastest way to fix the test flake.

Fixes: kubernetes#72378
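The copy-and-restore fix can be sketched as follows; `Config`, `copyConfig`, and the selector key are hypothetical stand-ins, and the point is that a map-valued field must be cloned explicitly for the snapshot to survive per-test mutations:

```go
package main

import "fmt"

// Config is a hypothetical stand-in for the suite's test config; only
// the node selector matters for this example.
type Config struct {
	ClientNodeSelector map[string]string
}

// copyConfig clones the config deeply enough that restoring the
// snapshot is unaffected by later mutations (a plain struct copy would
// still share the underlying map).
func copyConfig(c Config) Config {
	clone := c
	clone.ClientNodeSelector = make(map[string]string, len(c.ClientNodeSelector))
	for k, v := range c.ClientNodeSelector {
		clone.ClientNodeSelector[k] = v
	}
	return clone
}

func main() {
	cfg := Config{ClientNodeSelector: map[string]string{}}
	saved := copyConfig(cfg) // snapshot taken after driver setup

	// A test (like the in-tree gcepd one) sets a node selector...
	cfg.ClientNodeSelector["zone"] = "us-central1-a"

	// ...and before the next test the suite restores the snapshot.
	cfg = copyConfig(saved)
	fmt.Println(len(cfg.ClientNodeSelector)) // 0: the mutation did not leak
}
```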
The entire volume testing has been refactored upstream. Instead of just one provisioning test for CSI, the full range of tests that were previously only available for in-tree volume drivers can now also be used for CSI. With some pending modifications (kubernetes/kubernetes#70862) these tests can also be used out-of-tree and replace the locally modified copy of the provisioning test. The downside is the long runtime.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Not all CSI drivers can be tested in Kubernetes itself (they might not be open source), nor should they all be (if testing them there does not help develop and enhance Kubernetes). Therefore it is useful to make the tests available to out-of-tree drivers without pulling in Kubernetes E2E-specific code and tests.
Does this PR introduce a user-facing change?: