
Fix vsphere util method - disksAreAttached #92805

Merged

Conversation

@chethanv28 (Contributor) commented Jul 5, 2020

What type of PR is this?

/kind failing-test

What this PR does / why we need it:
This PR addresses a panic that occurs while running the e2e tests for the vSphere cloud provider.
The method `disksAreAttached` returns a nil map, and the subsequent write to it panics with "assignment to entry in nil map", which makes the vSphere cloud provider performance tests fail. With this fix the test cases run with no failures.
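For context, here is a minimal Go sketch of the failure mode and the usual fix. It is illustrative only; the function names and signatures below are hypothetical and are not the actual code changed in `vsphere_utils.go`. A map declared with `var` is nil, and assigning to a key of a nil map panics, so the result map must be created with `make()` before any writes.

```go
package main

import "fmt"

// buggyDisksAreAttached mirrors the failure mode: the result map is never
// initialized, so the first assignment panics with
// "assignment to entry in nil map". (Hypothetical illustration.)
func buggyDisksAreAttached(nodeVolumes map[string][]string) map[string]map[string]bool {
	var attached map[string]map[string]bool // nil map
	for node, volumes := range nodeVolumes {
		attached[node] = map[string]bool{} // panics here
		for _, vol := range volumes {
			attached[node][vol] = false
		}
	}
	return attached
}

// fixedDisksAreAttached initializes the outer and inner maps with make()
// before writing to them, so no assignment touches a nil map.
func fixedDisksAreAttached(nodeVolumes map[string][]string) map[string]map[string]bool {
	attached := make(map[string]map[string]bool)
	for node, volumes := range nodeVolumes {
		attached[node] = make(map[string]bool)
		for _, vol := range volumes {
			attached[node][vol] = false
		}
	}
	return attached
}

func main() {
	nodeVolumes := map[string][]string{
		"k8s-node1": {"[sharedVmfs-0] fcd/ffdfde86596b47e3a1194d8b8837810c.vmdk"},
	}
	fmt.Println(fixedDisksAreAttached(nodeVolumes))
}
```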

Which issue(s) this PR fixes:

Fixes #92804

Special notes for your reviewer:
Test log before this change:

•! Panic [50.165 seconds]
[sig-storage] vcp-performance [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  vcp performance tests [It]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/vsphere_volume_perf.go:96

  Test Panicked
  assignment to entry in nil map
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

  Full Stack Trace
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
  	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
  panic(0x4b3a0e0, 0x5c329d0)
  	/usr/local/go/src/runtime/panic.go:969 +0x166
  k8s.io/kubernetes/test/e2e/storage/vsphere.disksAreAttached(0xc0032a3140, 0x0, 0x0, 0x0)
  	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/vsphere_utils.go

Test log after this change:

[sig-storage] vcp-performance [Feature:vsphere] 
  vcp performance tests
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/vsphere_volume_perf.go:96
[BeforeEach] [sig-storage] vcp-performance [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  4 19:05:05.994: INFO: >>> kubeConfig: /Users/chethanv/.kube/config
STEP: Building a namespace api object, basename vcp-performance
Jul  4 19:05:06.191: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jul  4 19:05:06.255: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vcp-performance [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/vsphere_volume_perf.go:69
Jul  4 19:05:06.346: INFO: Initializing vc server 10.160.197.68
Jul  4 19:05:06.346: INFO: ConfigFile &{{administrator@vsphere.local Admin!23 443 true  0} map[10.160.197.68:0xc0036d6a80] {} {pvscsi} {10.160.197.68 k8s-dc kubernetes vsanDatastore k8s-cluster/Resources}} 
 vSphere instances map[10.160.197.68:0xc0036c9d50]
Jul  4 19:05:06.859: INFO: Search candidates vc=10.160.197.68 and datacenter=k8s-dc
Jul  4 19:05:06.859: INFO: Searching for node with UUID: 422aedff-3d10-7e9b-3ebc-6d47b192b725
Jul  4 19:05:06.859: INFO: Searching for node with UUID: 422a9520-e2a7-ed33-2fe4-4cf25e41045d
Jul  4 19:05:06.859: INFO: Searching for node with UUID: 422a0cad-e0c2-f72a-aad0-5c9ff4ca2bb5
Jul  4 19:05:06.859: INFO: Searching for node with UUID: 422ae6cd-74cb-2079-ea1a-36c72501209c
Jul  4 19:05:06.859: INFO: Searching for node with UUID: 422a5ca2-fec1-1574-e6b2-d34bc0502cf7
Jul  4 19:05:10.569: INFO: Found node k8s-node3 as vm=VirtualMachine:vm-95 placed on host=HostSystem:host-58 under zones [] in vc=10.160.197.68 and datacenter=k8s-dc
Jul  4 19:05:10.803: INFO: Found node k8s-node4 as vm=VirtualMachine:vm-96 placed on host=HostSystem:host-16 under zones [] in vc=10.160.197.68 and datacenter=k8s-dc
Jul  4 19:05:11.387: INFO: Found node k8s-master as vm=VirtualMachine:vm-76 placed on host=HostSystem:host-34 under zones [] in vc=10.160.197.68 and datacenter=k8s-dc
Jul  4 19:05:11.387: INFO: Found node k8s-node2 as vm=VirtualMachine:vm-94 placed on host=HostSystem:host-40 under zones [] in vc=10.160.197.68 and datacenter=k8s-dc
Jul  4 19:05:11.499: INFO: Found node k8s-node1 as vm=VirtualMachine:vm-100 placed on host=HostSystem:host-22 under zones [] in vc=10.160.197.68 and datacenter=k8s-dc
Jul  4 19:05:11.994: INFO: Zone to datastores map : map[]
[It] vcp performance tests
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/vsphere_volume_perf.go:96
STEP: Creating Storage Class : sc-default
STEP: Creating Storage Class : sc-vsan
STEP: Creating Storage Class : sc-spbm
STEP: Creating Storage Class : sc-user-specified-ds
STEP: Creating 2 PVCs
Jul  4 19:05:12.613: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-d7mxt] to have phase Bound
Jul  4 19:05:12.658: INFO: PersistentVolumeClaim pvc-d7mxt found but phase is Pending instead of Bound.
Jul  4 19:05:14.713: INFO: PersistentVolumeClaim pvc-d7mxt found but phase is Pending instead of Bound.
Jul  4 19:05:16.770: INFO: PersistentVolumeClaim pvc-d7mxt found but phase is Pending instead of Bound.
Jul  4 19:05:18.823: INFO: PersistentVolumeClaim pvc-d7mxt found and phase=Bound (6.209741057s)
Jul  4 19:05:18.924: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-ksfcc] to have phase Bound
Jul  4 19:05:18.995: INFO: PersistentVolumeClaim pvc-ksfcc found and phase=Bound (71.477737ms)
STEP: Creating pod to attach PVs to the node
Jul  4 19:05:41.354: INFO: diskIsAttached vm : "VirtualMachine:vm-100"
Jul  4 19:05:41.406: INFO: VirtualDisk backing filename "[local-ds-2-0] k8s-node1/k8s-node1.vmdk" does not match with diskPath "[sharedVmfs-0] fcd/ffdfde86596b47e3a1194d8b8837810c.vmdk"
Jul  4 19:05:41.406: INFO: VirtualDisk backing filename "[nfs0-1] fcd/2417ea66448e45efb862ebab346464cd.vmdk" does not match with diskPath "[sharedVmfs-0] fcd/ffdfde86596b47e3a1194d8b8837810c.vmdk"
Jul  4 19:05:41.406: INFO: Found VirtualDisk backing with filename "[sharedVmfs-0] fcd/ffdfde86596b47e3a1194d8b8837810c.vmdk" for diskPath "[sharedVmfs-0] fcd/ffdfde86596b47e3a1194d8b8837810c.vmdk"
Jul  4 19:05:41.406: INFO: diskIsAttached found the disk "[sharedVmfs-0] fcd/ffdfde86596b47e3a1194d8b8837810c.vmdk" attached on node "k8s-node1"
Jul  4 19:05:41.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/Users/chethanv/.kube/config exec pvc-tester-dzhzd --namespace=vcp-performance-9721 -- /bin/touch /mnt/volume1/emptyFile.txt'
Jul  4 19:05:43.172: INFO: stderr: ""
Jul  4 19:05:43.172: INFO: stdout: ""
Jul  4 19:05:43.217: INFO: diskIsAttached vm : "VirtualMachine:vm-100"
Jul  4 19:05:43.270: INFO: VirtualDisk backing filename "[local-ds-2-0] k8s-node1/k8s-node1.vmdk" does not match with diskPath "[nfs0-1] fcd/2417ea66448e45efb862ebab346464cd.vmdk"
Jul  4 19:05:43.270: INFO: Found VirtualDisk backing with filename "[nfs0-1] fcd/2417ea66448e45efb862ebab346464cd.vmdk" for diskPath "[nfs0-1] fcd/2417ea66448e45efb862ebab346464cd.vmdk"
Jul  4 19:05:43.270: INFO: diskIsAttached found the disk "[nfs0-1] fcd/2417ea66448e45efb862ebab346464cd.vmdk" attached on node "k8s-node1"
Jul  4 19:05:43.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/Users/chethanv/.kube/config exec pvc-tester-dzhzd --namespace=vcp-performance-9721 -- /bin/touch /mnt/volume2/emptyFile.txt'
Jul  4 19:05:44.084: INFO: stderr: ""
Jul  4 19:05:44.084: INFO: stdout: ""
STEP: Deleting pods
Jul  4 19:05:44.084: INFO: Deleting pod "pvc-tester-dzhzd" in namespace "vcp-performance-9721"
Jul  4 19:05:44.141: INFO: Wait up to 5m0s for pod "pvc-tester-dzhzd" to be fully deleted
Jul  4 19:05:48.239: INFO: nodeVolumeMap map[k8s-node1:[[sharedVmfs-0] fcd/ffdfde86596b47e3a1194d8b8837810c.vmdk [nfs0-1] fcd/2417ea66448e45efb862ebab346464cd.vmdk]]
Jul  4 19:05:48.239: INFO: nodeVolumes map[k8s-node1:[[sharedVmfs-0] fcd/ffdfde86596b47e3a1194d8b8837810c.vmdk [nfs0-1] fcd/2417ea66448e45efb862ebab346464cd.vmdk]]
W0704 19:05:58.754490   62952 vsphere_utils.go:502] QueryVirtualDiskUuid failed for diskPath: "[sharedVmfs-0] fcd/kube-dummyDisk.vmdk". err: ServerFaultCode: File [sharedVmfs-0] fcd/kube-dummyDisk.vmdk was not found
W0704 19:05:58.841923   62952 vsphere_utils.go:502] QueryVirtualDiskUuid failed for diskPath: "[nfs0-1] fcd/kube-dummyDisk.vmdk". err: ServerFaultCode: File [nfs0-1] fcd/kube-dummyDisk.vmdk was not found
Jul  4 19:05:58.841: INFO: vmVolumes: map[k8s-node1:[[sharedVmfs-0] fcd/ffdfde86596b47e3a1194d8b8837810c.vmdk [nfs0-1] fcd/2417ea66448e45efb862ebab346464cd.vmdk]]
Jul  4 19:05:58.841: INFO: volumes: [[sharedVmfs-0] fcd/ffdfde86596b47e3a1194d8b8837810c.vmdk [nfs0-1] fcd/2417ea66448e45efb862ebab346464cd.vmdk]
Jul  4 19:05:58.890: INFO: diskIsAttached vm : "VirtualMachine:vm-100"
Jul  4 19:05:58.942: INFO: VirtualDisk backing filename "[local-ds-2-0] k8s-node1/k8s-node1.vmdk" does not match with diskPath "[sharedVmfs-0] fcd/ffdfde86596b47e3a1194d8b8837810c.vmdk"
Jul  4 19:05:58.989: INFO: diskIsAttached vm : "VirtualMachine:vm-100"
Jul  4 19:05:59.041: INFO: VirtualDisk backing filename "[local-ds-2-0] k8s-node1/k8s-node1.vmdk" does not match with diskPath "[nfs0-1] fcd/2417ea66448e45efb862ebab346464cd.vmdk"
Jul  4 19:05:59.041: INFO: Volume are successfully detached from all the nodes: map[k8s-node1:[[sharedVmfs-0] fcd/ffdfde86596b47e3a1194d8b8837810c.vmdk [nfs0-1] fcd/2417ea66448e45efb862ebab346464cd.vmdk]]
STEP: Deleting the PVCs
Jul  4 19:05:59.041: INFO: Deleting PersistentVolumeClaim "pvc-d7mxt"
Jul  4 19:05:59.095: INFO: Deleting PersistentVolumeClaim "pvc-ksfcc"
Jul  4 19:05:59.167: INFO: Deleting pod "pvc-tester-dzhzd" in namespace "vcp-performance-9721"
STEP: Creating 2 PVCs
Jul  4 19:05:59.312: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-xzt5h] to have phase Bound
Jul  4 19:05:59.383: INFO: PersistentVolumeClaim pvc-xzt5h found but phase is Pending instead of Bound.
Jul  4 19:06:01.467: INFO: PersistentVolumeClaim pvc-xzt5h found and phase=Bound (2.154828776s)
Jul  4 19:06:01.562: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-jpxgs] to have phase Bound
Jul  4 19:06:01.610: INFO: PersistentVolumeClaim pvc-jpxgs found but phase is Pending instead of Bound.
Jul  4 19:06:03.660: INFO: PersistentVolumeClaim pvc-jpxgs found but phase is Pending instead of Bound.
Jul  4 19:06:05.719: INFO: PersistentVolumeClaim pvc-jpxgs found but phase is Pending instead of Bound.
Jul  4 19:06:07.770: INFO: PersistentVolumeClaim pvc-jpxgs found but phase is Pending instead of Bound.
Jul  4 19:06:09.826: INFO: PersistentVolumeClaim pvc-jpxgs found but phase is Pending instead of Bound.
Jul  4 19:06:11.872: INFO: PersistentVolumeClaim pvc-jpxgs found but phase is Pending instead of Bound.
Jul  4 19:06:13.922: INFO: PersistentVolumeClaim pvc-jpxgs found and phase=Bound (12.360104843s)
STEP: Creating pod to attach PVs to the node
Jul  4 19:06:26.319: INFO: diskIsAttached vm : "VirtualMachine:vm-100"
Jul  4 19:06:26.373: INFO: VirtualDisk backing filename "[local-ds-2-0] k8s-node1/k8s-node1.vmdk" does not match with diskPath "[nfs0-1] fcd/22352955defc464b94b02a1462f88055.vmdk"
Jul  4 19:06:26.373: INFO: Found VirtualDisk backing with filename "[nfs0-1] fcd/22352955defc464b94b02a1462f88055.vmdk" for diskPath "[nfs0-1] fcd/22352955defc464b94b02a1462f88055.vmdk"
Jul  4 19:06:26.373: INFO: diskIsAttached found the disk "[nfs0-1] fcd/22352955defc464b94b02a1462f88055.vmdk" attached on node "k8s-node1"
Jul  4 19:06:26.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/Users/chethanv/.kube/config exec pvc-tester-brxnz --namespace=vcp-performance-9721 -- /bin/touch /mnt/volume1/emptyFile.txt'
Jul  4 19:06:27.263: INFO: stderr: ""
Jul  4 19:06:27.263: INFO: stdout: ""
Jul  4 19:06:27.312: INFO: diskIsAttached vm : "VirtualMachine:vm-100"
Jul  4 19:06:27.364: INFO: VirtualDisk backing filename "[local-ds-2-0] k8s-node1/k8s-node1.vmdk" does not match with diskPath "[sharedVmfs-0] fcd/23d2004be7b64ba78b7d1838dc4637da.vmdk"
Jul  4 19:06:27.364: INFO: VirtualDisk backing filename "[nfs0-1] fcd/22352955defc464b94b02a1462f88055.vmdk" does not match with diskPath "[sharedVmfs-0] fcd/23d2004be7b64ba78b7d1838dc4637da.vmdk"
Jul  4 19:06:27.364: INFO: Found VirtualDisk backing with filename "[sharedVmfs-0] fcd/23d2004be7b64ba78b7d1838dc4637da.vmdk" for diskPath "[sharedVmfs-0] fcd/23d2004be7b64ba78b7d1838dc4637da.vmdk"
Jul  4 19:06:27.364: INFO: diskIsAttached found the disk "[sharedVmfs-0] fcd/23d2004be7b64ba78b7d1838dc4637da.vmdk" attached on node "k8s-node1"
Jul  4 19:06:27.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/Users/chethanv/.kube/config exec pvc-tester-brxnz --namespace=vcp-performance-9721 -- /bin/touch /mnt/volume2/emptyFile.txt'
Jul  4 19:06:28.145: INFO: stderr: ""
Jul  4 19:06:28.145: INFO: stdout: ""
STEP: Deleting pods
Jul  4 19:06:28.145: INFO: Deleting pod "pvc-tester-brxnz" in namespace "vcp-performance-9721"
Jul  4 19:06:28.196: INFO: Wait up to 5m0s for pod "pvc-tester-brxnz" to be fully deleted
Jul  4 19:06:38.296: INFO: nodeVolumeMap map[k8s-node1:[[nfs0-1] fcd/22352955defc464b94b02a1462f88055.vmdk [sharedVmfs-0] fcd/23d2004be7b64ba78b7d1838dc4637da.vmdk]]
Jul  4 19:06:38.296: INFO: nodeVolumes map[k8s-node1:[[nfs0-1] fcd/22352955defc464b94b02a1462f88055.vmdk [sharedVmfs-0] fcd/23d2004be7b64ba78b7d1838dc4637da.vmdk]]
W0704 19:06:48.522637   62952 vsphere_utils.go:502] QueryVirtualDiskUuid failed for diskPath: "[nfs0-1] fcd/kube-dummyDisk.vmdk". err: ServerFaultCode: File [nfs0-1] fcd/kube-dummyDisk.vmdk was not found
W0704 19:06:48.625485   62952 vsphere_utils.go:502] QueryVirtualDiskUuid failed for diskPath: "[sharedVmfs-0] fcd/kube-dummyDisk.vmdk". err: ServerFaultCode: File [sharedVmfs-0] fcd/kube-dummyDisk.vmdk was not found
Jul  4 19:06:48.625: INFO: vmVolumes: map[k8s-node1:[[nfs0-1] fcd/22352955defc464b94b02a1462f88055.vmdk [sharedVmfs-0] fcd/23d2004be7b64ba78b7d1838dc4637da.vmdk]]
Jul  4 19:06:48.625: INFO: volumes: [[nfs0-1] fcd/22352955defc464b94b02a1462f88055.vmdk [sharedVmfs-0] fcd/23d2004be7b64ba78b7d1838dc4637da.vmdk]
Jul  4 19:06:48.711: INFO: diskIsAttached vm : "VirtualMachine:vm-100"
Jul  4 19:06:48.780: INFO: VirtualDisk backing filename "[local-ds-2-0] k8s-node1/k8s-node1.vmdk" does not match with diskPath "[nfs0-1] fcd/22352955defc464b94b02a1462f88055.vmdk"
Jul  4 19:06:48.833: INFO: diskIsAttached vm : "VirtualMachine:vm-100"
Jul  4 19:06:48.888: INFO: VirtualDisk backing filename "[local-ds-2-0] k8s-node1/k8s-node1.vmdk" does not match with diskPath "[sharedVmfs-0] fcd/23d2004be7b64ba78b7d1838dc4637da.vmdk"
Jul  4 19:06:48.888: INFO: Volume are successfully detached from all the nodes: map[k8s-node1:[[nfs0-1] fcd/22352955defc464b94b02a1462f88055.vmdk [sharedVmfs-0] fcd/23d2004be7b64ba78b7d1838dc4637da.vmdk]]
STEP: Deleting the PVCs
Jul  4 19:06:48.888: INFO: Deleting PersistentVolumeClaim "pvc-xzt5h"
Jul  4 19:06:48.968: INFO: Deleting PersistentVolumeClaim "pvc-jpxgs"
Jul  4 19:06:49.020: INFO: Deleting pod "pvc-tester-brxnz" in namespace "vcp-performance-9721"
Jul  4 19:06:49.065: INFO: Average latency for below operations
Jul  4 19:06:49.065: INFO: Creating 2 PVCs and waiting for bound phase: 10.691536448 seconds
Jul  4 19:06:49.065: INFO: Creating 1 Pod: 17.235866408 seconds
Jul  4 19:06:49.065: INFO: Deleting 1 Pod and waiting for disk to be detached: 7.1532182115000005 seconds
Jul  4 19:06:49.065: INFO: Deleting 2 PVCs: 0.129097815 seconds
[AfterEach] [sig-storage] vcp-performance [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul  4 19:06:49.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "vcp-performance-9721" for this suite.

• [SLOW TEST:103.463 seconds]
[sig-storage] vcp-performance [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  vcp performance tests
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/vsphere_volume_perf.go:96
------------------------------
{"msg":"PASSED [sig-storage] vcp-performance [Feature:vsphere] vcp performance tests","total":1,"completed":1,"skipped":892,"failed":0}

Does this PR introduce a user-facing change?:

"NONE"

@divyenpatel @SandeepPissay @xing-yang Can you help review this PR?

@k8s-ci-robot k8s-ci-robot added kind/failing-test Categorizes issue or PR as related to a consistently or frequently failing test. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Jul 5, 2020
@k8s-ci-robot (Contributor)

Welcome @chethanv28!

It looks like this is your first PR to kubernetes/kubernetes 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/kubernetes has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot (Contributor)

Hi @chethanv28. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. area/test sig/storage Categorizes an issue or PR as relevant to SIG Storage. sig/testing Categorizes an issue or PR as relevant to SIG Testing. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jul 5, 2020
@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. and removed do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Jul 5, 2020
@oomichi (Member) commented Jul 6, 2020

/ok-to-test
/cc @oomichi

@k8s-ci-robot k8s-ci-robot requested a review from oomichi July 6, 2020 22:32
@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jul 6, 2020
@chethanv28 (Contributor, Author)

/retest

2 similar comments
@chethanv28 (Contributor, Author)

/retest

@chethanv28 (Contributor, Author)

/retest

@oomichi (Member) left a comment


/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 7, 2020
@chethanv28 (Contributor, Author)

/retest

@divyenpatel (Member)

/approve

@oomichi (Member) commented Jul 7, 2020

/approve

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: chethanv28, divyenpatel, oomichi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 7, 2020
@spiffxp spiffxp added this to the v1.19 milestone Jul 9, 2020
@k8s-ci-robot k8s-ci-robot merged commit 42ae200 into kubernetes:master Jul 10, 2020
Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
area/test
cncf-cla: yes: Indicates the PR's author has signed the CNCF CLA.
kind/failing-test: Categorizes issue or PR as related to a consistently or frequently failing test.
lgtm: "Looks good to me", indicates that a PR is ready to be merged.
needs-priority: Indicates a PR lacks a `priority/foo` label and requires one.
ok-to-test: Indicates a non-member PR verified by an org member that is safe to test.
release-note-none: Denotes a PR that doesn't merit a release note.
sig/storage: Categorizes an issue or PR as relevant to SIG Storage.
sig/testing: Categorizes an issue or PR as relevant to SIG Testing.
size/XS: Denotes a PR that changes 0-9 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

vsphere cloud provider e2e util function panics
5 participants