
Add E2E stress test suite for creation / deletion of VolumeSnapshot resources #95971

Merged: 1 commit merged into kubernetes:master on Nov 5, 2020

Conversation

@chrishenzie (Member)

/sig storage

What type of PR is this?
/kind feature

What this PR does / why we need it:
This PR introduces an E2E test suite for stress testing the creation / deletion of VolumeSnapshot objects.

It works by spinning up a set of pods and launching one goroutine per pod (len(pods) goroutines), each of which repeatedly creates and deletes VolumeSnapshots (sketched below). Users can configure the number of pods and snapshots by setting NumPods and NumSnapshots in their testdriver.yaml.
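A minimal sketch of that loop, assuming a hypothetical createSnapshotResource helper and a shared context used to cancel all goroutines on the first failure; the merged suite's exact structure may differ:

// runSnapshotStress (sketch): one goroutine per pod repeatedly creates and
// deletes a VolumeSnapshot until it has completed numSnapshots iterations.
func runSnapshotStress(ctx context.Context, cancel context.CancelFunc, pods []*v1.Pod, numSnapshots int) {
	var wg sync.WaitGroup
	for i, pod := range pods {
		wg.Add(1)
		go func(podIndex int, pod *v1.Pod) {
			defer wg.Done()
			for j := 0; j < numSnapshots; j++ {
				select {
				case <-ctx.Done():
					// Another goroutine already failed; stop early.
					return
				default:
					framework.Logf("Pod-%d [%s], Iteration %d/%d", podIndex, pod.Name, j, numSnapshots-1)
					snapshot := createSnapshotResource(podIndex) // hypothetical helper
					if err := snapshot.CleanupResource(); err != nil {
						cancel()
						framework.Failf("Failed to delete snapshot for pod-%d: %v", podIndex, err)
					}
				}
			}
		}(i, pod)
	}
	wg.Wait()
}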

Which issue(s) this PR fixes:
Fixes #95969

Special notes for your reviewer:
I split this into three distinct changes to make it easier to review. I can squash everything if reviewers prefer.

Does this PR introduce a user-facing change?:

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


Additional details:
The testdriver.yaml described above can be supplied to the e2e.test binary via -storage.testdriver. It is deserialized into this struct:

// DriverInfo represents static information about a TestDriver.
type DriverInfo struct {
	// Internal name of the driver, this is used as a display name in the test
	// case and test objects
	Name string
	// Fully qualified plugin name as registered in Kubernetes of the in-tree
	// plugin if it exists and is empty if this DriverInfo represents a CSI
	// Driver
	InTreePluginName string
	FeatureTag       string // FeatureTag for the driver
	// Maximum single file size supported by this driver
	MaxFileSize int64
	// The range of disk size supported by this driver
	SupportedSizeRange e2evolume.SizeRange
	// Map of string for supported fs type
	SupportedFsType sets.String
	// Map of string for supported mount option
	SupportedMountOption sets.String
	// [Optional] Map of string for required mount option
	RequiredMountOption sets.String
	// Map that represents plugin capabilities
	Capabilities map[Capability]bool
	// [Optional] List of access modes required for provisioning, defaults to
	// RWO if unset
	RequiredAccessModes []v1.PersistentVolumeAccessMode
	// [Optional] List of topology keys driver supports
	TopologyKeys []string
	// [Optional] Number of allowed topologies the driver requires.
	// Only relevant if TopologyKeys is set. Defaults to 1.
	// Example: multi-zonal disk requires at least 2 allowed topologies.
	NumAllowedTopologies int
	// [Optional] Scale parameters for stress tests.
	StressTestOptions *StressTestOptions
}
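The StressTestOptions referenced in the last field above is the knob this suite consumes. A minimal sketch, assuming only the two fields named in the PR description; the real definition in the testsuites package may differ:

// StressTestOptions (sketch): scale parameters for stress tests, read from
// the driver definition or testdriver.yaml.
type StressTestOptions struct {
	// NumPods is the number of pods (and backing volumes) created up front.
	NumPods int
	// NumSnapshots is the number of snapshots each pod's goroutine creates and deletes.
	NumSnapshots int
}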

@msau42

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. sig/storage Categorizes an issue or PR as relevant to SIG Storage. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. kind/feature Categorizes issue or PR as related to a new feature. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Oct 29, 2020
@k8s-ci-robot (Contributor)

@chrishenzie: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Oct 29, 2020
@k8s-ci-robot (Contributor)

Hi @chrishenzie. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added area/test sig/testing Categorizes an issue or PR as relevant to SIG Testing. labels Oct 29, 2020
@chrishenzie chrishenzie changed the title E2e stress snapshots Add E2E stress test suite for creation / deletion of VolumeSnapshot resources Oct 29, 2020
@msau42 (Member) commented Oct 29, 2020

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Oct 29, 2020
@msau42 (Member) commented Oct 29, 2020

/assign @xing-yang @msau42

@chrishenzie (Member, Author)

Updated the last commit to incorporate setup / teardown of resources, similar to the changes in #96023.

@chrishenzie (Member, Author)

/test pull-kubernetes-conformance-kind-ipv6-parallel

@xing-yang (Contributor)

/retest


createPodsAndVolumes := func() {
	for i := 0; i < stressTest.testOptions.NumPods; i++ {
		framework.Logf("Creating resources for pod %v/%v", i, stressTest.testOptions.NumPods-1)
Review comment (Contributor):

Use %d for i and stressTest.testOptions.NumPods-1
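For illustration, the suggestion applied to the Logf call above (a sketch, not the committed diff):

	framework.Logf("Creating resources for pod %d/%d", i, stressTest.testOptions.NumPods-1)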


if _, err := cs.CoreV1().Pods(pod.Namespace).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
	stressTest.cancel()
	framework.Failf("Failed to create pod-%v [%+v]. Error: %v", i, pod, err)
Review comment (Contributor):

Use %d for i


if err := e2epod.WaitForPodRunningInNamespace(cs, pod); err != nil {
	stressTest.cancel()
	framework.Failf("Failed to wait for pod-%v [%+v] turn into running status. Error: %v", i, pod, err)
Review comment (Contributor):

Use %d for i


var errs []error
for _, snapshot := range stressTest.snapshots {
	framework.Logf("Deleting snapshot %v", snapshot.Vs.GetName())
Review comment (Contributor):

Can you print out namespace as well? namespace/name

Use %s

	errs = append(errs, snapshot.CleanupResource())
}
for _, pod := range stressTest.pods {
	framework.Logf("Deleting pod %v", pod.Name)
Review comment (Contributor):

%v -> %s

	errs = append(errs, e2epod.DeletePodWithWait(cs, pod))
}
for _, volume := range stressTest.volumes {
	framework.Logf("Deleting volume %v", volume.Pvc.GetName())
Review comment (Contributor):

%v -> %s

	errs = append(errs, volume.CleanupResource())
}
errs = append(errs, tryFunc(stressTest.driverCleanup))
framework.ExpectNoError(errors.NewAggregate(errs), "While cleaning up resources")
Review comment (Contributor):

While -> while

case <-stressTest.ctx.Done():
	return
default:
	framework.Logf("Pod-%v [%v], Iteration %v/%v", podIndex, pod.Name, j, stressTest.testOptions.NumSnapshots-1)
Review comment (Contributor):

Use %d for podIndex, use %s for name, and %d for j and NumSnapshots


if err := snapshot.CleanupResource(); err != nil {
	stressTest.cancel()
	framework.Failf("Failed to delete snapshot for pod-%v [%+v]. Error: %v", podIndex, pod, err)
Review comment (Contributor):

Use %d for podIndex.

stressTest.snapshots = append(stressTest.snapshots, snapshot)
stressTest.snapshotsMutex.Unlock()

if err := snapshot.CleanupResource(); err != nil {
Review comment (Contributor):

For testpatterns.DynamicSnapshotRetain, can you change the policy to Delete before deleting the VolumeSnapshot? This is to make sure we don't leave physical snapshot resources behind after cleaning up.

Review comment (Member):

I think the CleanupResource method should be responsible for it instead of each test case

Review comment (Member, Author):

It looks like this is already handled inside of CleanupResource():

if boundVsContent.Object["spec"].(map[string]interface{})["deletionPolicy"] != "Delete" {
	// The purpose of this block is to prevent physical snapshotContent leaks.
	// We must update the SnapshotContent to have Delete Deletion policy,
	// or else the physical snapshot content will be leaked.
	boundVsContent.Object["spec"].(map[string]interface{})["deletionPolicy"] = "Delete"
	boundVsContent, err = dc.Resource(SnapshotContentGVR).Update(context.TODO(), boundVsContent, metav1.UpdateOptions{})
	framework.ExpectNoError(err)
}

Review comment (Contributor):

Sure

@@ -788,6 +788,11 @@ func InitHostPathDriver() testsuites.TestDriver {
			testsuites.CapSingleNodeVolume: true,
			testsuites.CapTopology: true,
		},
		StressTestOptions: &testsuites.StressTestOptions{
Review comment (Member):

This should be added for csi hostpath, not intree hostpath
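A sketch of what the reviewer is asking for, assuming the options end up in the CSI hostpath driver's init rather than the in-tree one; the 10 pods / 10 snapshots values are taken from the later discussion in this thread and are not shown in this hunk:

	// In the CSI hostpath driver's DriverInfo (not the in-tree hostpath driver):
	StressTestOptions: &testsuites.StressTestOptions{
		NumPods:      10,
		NumSnapshots: 10,
	},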

-	// Name must be unique, so let's base it on namespace name
-	"name": ns + "-" + suffix,
+	// Name must be unique, so let's base it on namespace name and use GenerateName
+	"name": names.SimpleNameGenerator.GenerateName(ns + "-" + suffix),
Review comment (Member):

can you create a followup issue to clean this up later?

tsInfo: TestSuiteInfo{
	Name: "snapshottable-stress",
	TestPatterns: []testpatterns.TestPattern{
		testpatterns.DynamicSnapshotDelete,
Review comment (Member):

We should also add raw block support once the pattern is added in the other PR. cc @Jiawei0227

Review comment (Contributor):

Sounds good.

// Check preconditions before setting up namespace via framework below.
ginkgo.BeforeEach(func() {
	driverInfo = driver.GetDriverInfo()
	if driverInfo.StressTestOptions == nil {
Review comment (Member):

Hmm maybe fail? Right now it's really easy to miss that a test case got skipped.

Also can you add similar validation to the volume_stress suite?
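For context, a sketch of the two behaviors being weighed here, skipping versus failing when a driver does not declare stress options; the helper names (e2eskipper.Skipf, framework.Failf) are from the e2e framework, but whether the merged suite skips or fails is not shown in this excerpt:

ginkgo.BeforeEach(func() {
	driverInfo = driver.GetDriverInfo()
	if driverInfo.StressTestOptions == nil {
		// Option 1: skip quietly (easy to miss in a green test run).
		e2eskipper.Skipf("Driver %q does not define StressTestOptions -- skipping", driverInfo.Name)
		// Option 2: fail loudly so the missing configuration is noticed.
		// framework.Failf("Driver %q must define StressTestOptions for this suite", driverInfo.Name)
	}
})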

createPodsAndVolumes()
})

f.AddAfterEach("cleanup", func(f *framework.Framework, failed bool) {
Review comment (Member):

Thanks! Can you reference the bug in the comment so we can follow up on it later?

@@ -21,7 +21,8 @@ spec:
   serviceAccountName: csi-gce-pd-controller-sa
   containers:
     - name: csi-snapshotter
-      image: gcr.io/gke-release/csi-snapshotter:v2.1.1-gke.0
+      # TODO: Replace this with the gke image once available.
Review comment (Member):

I think it's fine to have the OSS tests use the OSS images.

@@ -21,7 +21,8 @@ spec:
   serviceAccountName: csi-gce-pd-controller-sa
   containers:
     - name: csi-snapshotter
-      image: gcr.io/gke-release/csi-snapshotter:v2.1.1-gke.0
+      # TODO: Replace this with the gke image once available.
+      image: k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2
Review comment (Member):

Can you also update the pdcsi image to v1.0.1-gke.0? There were some fixes related to snapshots I think.

Review comment (Member, Author):

Done

@chrishenzie (Member, Author) commented Nov 4, 2020

Is it possible we need to increase the snapshot timeout or make it configurable? The last failed test logs indicated some snapshots were ready but not all of them.

err = WaitForSnapshotReady(dc, r.Vs.GetNamespace(), r.Vs.GetName(), framework.Poll, framework.SnapshotCreateTimeout)

// SnapshotCreateTimeout is how long for snapshot to create snapshotContent.
SnapshotCreateTimeout = 5 * time.Minute

@chrishenzie (Member, Author) commented Nov 4, 2020

Synced offline with @xing-yang, who found this in the logs:

E1104 19:12:18.485645    1 snapshot_controller.go:106] createSnapshot for content 
[snapcontent-34d9d19d-036e-4c97-b9c1-8eb9d0011181]: error occurred in createSnapshotWrapper: 
failed to take snapshot of the volume,
projects/k8s-boskos-gce-project-10/zones/us-west1-b/disks/pvc-1d6c49a5-5c72-4fef-b6f0-5c03a8335de5: 
"rpc error: code = Aborted desc = An operation with the given Volume ID 
projects/k8s-boskos-gce-project-10/zones/us-west1-b/disks/pvc-1d6c49a5-5c72-4fef-b6f0-5c03a8335de5 
already exists"

It seems that GCE only allows one snapshot to be created per volume at a time, which is a cause of slowness in this test. I am going to test with fewer snapshots and see if it succeeds in time.

@xing-yang (Contributor)

Yes, I think we need to either make the timeout longer or reduce the number of snapshots being created from the same volume at the same time.

For example, it took more than 7 minutes for the readyToUse field of the VolumeSnapshotContent snapcontent-34d9d19d-036e-4c97-b9c1-8eb9d0011181 to become true.

I1104 19:12:17.488887       1 snapshot_controller.go:583] setAnnVolumeSnapshotBeingCreated: volume snapshot content &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:snapcontent-34d9d19d-036e-4c97-b9c1-8eb9d0011181 GenerateName: Namespace: SelfLink: UID:05ce86f7-b244-4ccc-a768-5000fe4bb520 ResourceVersion:1902 Generation:1 CreationTimestamp:2020-11-04 19:12:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[snapshot.storage.kubernetes.io/volumesnapshot-being-created:yes] OwnerReferences:[] Finalizers:[snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection] ClusterName: ManagedFields:[{Manager:snapshot-controller Operation:Update APIVersion:snapshot.storage.k8s.io/v1beta1 Time:2020-11-04 19:12:15 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:finalizers":{".":{},"v:\"snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection\"":{}}},"f:spec":{".":{},"f:deletionPolicy":{},"f:driver":{},"f:source":{".":{},"f:volumeHandle":{}},"f:volumeSnapshotClassName":{},"f:volumeSnapshotRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}} {Manager:csi-snapshotter Operation:Update APIVersion:snapshot.storage.k8s.io/v1beta1 Time:2020-11-04 19:12:17 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:snapshot.storage.kubernetes.io/volumesnapshot-being-created":{}}}}}]} Spec:{VolumeSnapshotRef:{Kind:VolumeSnapshot Namespace:snapshottable-stress-4994 Name:snapshot-p2h8p UID:34d9d19d-036e-4c97-b9c1-8eb9d0011181 APIVersion:snapshot.storage.k8s.io/v1beta1 ResourceVersion:1543 FieldPath:} DeletionPolicy:Delete Driver:pd.csi.storage.gke.io VolumeSnapshotClassName:0xc000642ea0 Source:{VolumeHandle:0xc000642e90 SnapshotHandle:<nil>}} Status:<nil>}
I1104 19:12:17.489006       1 snapshotter.go:56] CSI CreateSnapshot: snapshot-34d9d19d-036e-4c97-b9c1-8eb9d0011181
……
I1104 19:12:18.485620       1 snapshot_controller.go:166] updating VolumeSnapshotContent[snapcontent-34d9d19d-036e-4c97-b9c1-8eb9d0011181] error status failed Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-34d9d19d-036e-4c97-b9c1-8eb9d0011181": the object has been modified; please apply your changes to the latest version and try again
E1104 19:12:18.485645       1 snapshot_controller.go:106] createSnapshot for content [snapcontent-34d9d19d-036e-4c97-b9c1-8eb9d0011181]: error occurred in createSnapshotWrapper: failed to take snapshot of the volume, projects/k8s-boskos-gce-project-10/zones/us-west1-b/disks/pvc-1d6c49a5-5c72-4fef-b6f0-5c03a8335de5: "rpc error: code = Aborted desc = An operation with the given Volume ID projects/k8s-boskos-gce-project-10/zones/us-west1-b/disks/pvc-1d6c49a5-5c72-4fef-b6f0-5c03a8335de5 already exists"
E1104 19:12:18.485691       1 snapshot_controller_base.go:261] could not sync content "snapcontent-34d9d19d-036e-4c97-b9c1-8eb9d0011181": failed to take snapshot of the volume, projects/k8s-boskos-gce-project-10/zones/us-west1-b/disks/pvc-1d6c49a5-5c72-4fef-b6f0-5c03a8335de5: "rpc error: code = Aborted desc = An operation with the given Volume ID projects/k8s-boskos-gce-project-10/zones/us-west1-b/disks/pvc-1d6c49a5-5c72-4fef-b6f0-5c03a8335de5 already exists"
……
I1104 19:12:45.371504       1 snapshot_controller.go:384] updateSnapshotContentStatus: updating VolumeSnapshotContent [snapcontent-34d9d19d-036e-4c97-b9c1-8eb9d0011181], snapshotHandle projects/k8s-boskos-gce-project-10/global/snapshots/snapshot-34d9d19d-036e-4c97-b9c1-8eb9d0011181, readyToUse false, createdAt 1604517156388000000, size 5368709120
……
I1104 19:19:50.185945       1 snapshot_controller.go:384] updateSnapshotContentStatus: updating VolumeSnapshotContent [snapcontent-34d9d19d-036e-4c97-b9c1-8eb9d0011181], snapshotHandle projects/k8s-boskos-gce-project-10/global/snapshots/snapshot-34d9d19d-036e-4c97-b9c1-8eb9d0011181, readyToUse true, createdAt 1604517156388000000, size 5368709120

@@ -490,6 +495,11 @@ func InitGcePDCSIDriver() testsuites.TestDriver {
		StressTestOptions: &testsuites.StressTestOptions{
			NumPods: 10,
Review comment (Member):

Should we increase the number of pods here?

Maybe we need the snapshot test to use its own set of options, so it won't impact the volume stress test settings.

Review comment (Member, Author):

I wanted to rename this to VolumeStressTestOptions but it seems like we'd need to mark this as deprecated if users depend on this in their testdriver.yaml files. Maybe we can add some custom serialization logic and log a deprecation warning? What is the timeline for fully deprecating something like this?

Review comment (Member, Author):

Maybe just mark this with a TODO and a tracking issue for the time being; adding a custom decoder seems more involved.

Review comment (Member):

How about we create a new VolumeSnapshotStressTestOptions?

Review comment (Member, Author):

Filed #96241 for this work.

Review comment (Member, Author):

Agreed, added. I can rename it to that to be even more specific.
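For reference, a sketch of the separate options type discussed in this thread and tracked in #96241; the exact field set is an assumption:

// VolumeSnapshotStressTestOptions (sketch): snapshot-specific scale parameters,
// kept separate from StressTestOptions so snapshot stress settings do not
// affect the volume stress test.
type VolumeSnapshotStressTestOptions struct {
	NumPods      int
	NumSnapshots int
}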

@chrishenzie chrishenzie force-pushed the e2e-stress-snapshots branch 2 times, most recently from 7272818 to bfc2e29 Compare November 5, 2020 00:52
@chrishenzie (Member, Author)

/test pull-kubernetes-conformance-kind-ga-only-parallel

@xing-yang (Contributor)

Can you squash your commits?

Squashed commit message:

Introduces a new test suite that creates and deletes many
VolumeSnapshots simultaneously to test snapshottable storage plugins
under load.
@xing-yang (Contributor)

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 5, 2020
@xing-yang (Contributor)

@msau42 do you have more comments?

@msau42 (Member) commented Nov 5, 2020

/approve
Thanks Chris, this is really great! I think as a follow-up, it would be nice to also get stress testing on the restore flow.

@xing-yang I noticed that hostpath stress tests are running longer than the gce tests. That's a bit surprising, as hostpath driver doesn't do anything and should be much faster. Can you take a look?

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: chrishenzie, msau42

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 5, 2020
@xing-yang (Contributor)

@xing-yang I noticed that hostpath stress tests are running longer than the gce tests. That's a bit surprising, as hostpath driver doesn't do anything and should be much faster. Can you take a look?

Maybe because the hostpath driver creates 10 snapshots per volume while gce creates 2 per volume? I'll take a look.

@msau42 (Member) commented Nov 5, 2020

@xing-yang I noticed that hostpath stress tests are running longer than the gce tests. That's a bit surprising, as hostpath driver doesn't do anything and should be much faster. Can you take a look?

Maybe because hostpath driver creates 10 snapshots while gce creates 2 per volume? I'll take a look.

Ah I did my math wrong. Yes, I think the hostpath test creates 100 snapshots whereas the gce test creates 40. That makes sense, thanks!

@chrishenzie (Member, Author)

If that is an issue let me know and I can reduce it to maybe 4 pods and 10 snapshots to be consistent.

@k8s-ci-robot k8s-ci-robot merged commit 0bb7328 into kubernetes:master Nov 5, 2020
@k8s-ci-robot k8s-ci-robot added this to the v1.20 milestone Nov 5, 2020
@chrishenzie chrishenzie deleted the e2e-stress-snapshots branch November 5, 2020 22:36
@xing-yang (Contributor)

If that is an issue let me know and I can reduce it to maybe 4 pods and 10 snapshots to be consistent.

@chrishenzie I don't think you need to change anything. That's fine. Thanks for the great work!

k8s-ci-robot added a commit that referenced this pull request on Dec 17, 2020:
Automated cherry pick of #95971: E2E stress test suite for VolumeSnapshots (…-#95971-upstream-release-1.19)