
vSphere Cloud Provider should implement Zones() interface #64021

Closed
embano1 opened this issue May 18, 2018 · 27 comments
Assignees
Labels
area/provider/vmware Issues or PRs related to vmware provider kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@embano1
Member

embano1 commented May 18, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature
/sig vmware

What happened:
Currently, the vSphere Cloud Provider (VCP) does not discover zones/failure domains and populate the corresponding well-known labels out of the box. To quote the docs:

On the Node: Kubelet populates this with the zone information as defined by the cloud provider. It will not be set if not using a cloud provider, but you should consider setting it on the nodes if it makes sense in your topology.

The consequences for users are:

  • Nodes (kubelets) not being distributed correctly across the vSphere cluster (risk of cascading failure)
  • Manual configuration is needed, which can be error-prone (as a workaround, govc can be used in deployment scripts, see this PR)
  • Configuration drift and possible Kubernetes scheduling violations (e.g. breaking pod anti-affinity rules by vMotioning two kubelet VMs onto the same ESXi host) when the cluster is HA/DRS enabled
  • Possible implications for persistent volume scheduling decisions and performance impact due to day-2 drift (depending on the underlying storage implementation)
  • Possible implications for higher-level orchestration tools, like BOSH, whose labelling and placement decisions would be violated

What you expected to happen:
VCP should populate Kubernetes well-known labels in order for the scheduler (default scheduling policy) and end user (affinity/anti-affinity settings) to work out of the box in a vSphere environment.

VCP should also be able to reconcile labels, e.g. after a VM failover (HA) or when DRS is enabled in a vSphere cluster (for example, "should" rules to balance host utilization within a rack/failure domain).

How to reproduce it (as minimally and precisely as possible):

  • Create a VCP-enabled Kubernetes cluster
  • Create pods with anti-affinity
  • Observe how the Kubernetes scheduler places pods without awareness of the underlying topology (especially with DRS/HA enabled)

Anything else we need to know?:
Discussed the following two-step approach with the VMware Kubernetes engineering team:

Phase 1:
Implement basic functionality for VCP to support zones. This requires consensus on what a zone and a region map to in a vSphere environment. E.g. a region could map to a vSphere data center and a zone to a vSphere DRS/HA-enabled cluster. VMware Cloud on AWS (VMC) multi-AZ deployments would map nicely.

However, that might conflict with customer on-premises environments where a single vSphere cluster is stretched between two sites/data center buildings, i.e. a single vSphere data center and cluster from a logical perspective.

This is why a labelling (tagging) mechanism must be employed. A vSphere or Kubernetes cluster operator would define and apply vSphere tags to data centers, clusters and ESXi hosts. For example:

  • Category "region"
    • Tag:
      • EMEA
      • US
  • Category "zone"
    • Tag:
      • DE-K8s
      • DE-K8s-1
      • DE-K8s-2
      • CA-K8s
      • CA-K8s-1
      • WA-K8s
      • WA-1

In the stretched cluster example mentioned above, that tagging/labelling scheme would translate to:

  • vSphere Data Center: Region tag "EMEA"
  • vSphere Kubernetes Cluster: Zone tag "DE-K8s"
  • vSphere ESXi hosts in cluster "DE-K8s" stretched site 1: Zone tag "DE-K8s-1"
  • vSphere ESXi hosts in cluster "DE-K8s" stretched site 2: Zone tag "DE-K8s-2"

Each kubelet VM running on ESXi would query the pre-defined tags through a yet-to-be-defined local metadata service (i.e. 169.x.y.z, no external network call) and translate them into region/zone labels. To continue with the stretched cluster example (a rough sketch of the Zones() interface this maps to follows the list):

  • Kubelet VM-1 running on ESXi-1 in site 1:
    • failure-domain.beta.kubernetes.io/region=EMEA
    • failure-domain.beta.kubernetes.io/zone=DE-K8s-1
  • Kubelet VM-2 running on ESXi-2 in site 2:
    • failure-domain.beta.kubernetes.io/region=EMEA
    • failure-domain.beta.kubernetes.io/zone=DE-K8s-2
  • Note: I've intentionally skipped conflict handling, e.g. multiple zone tags on different hierarchy levels (cluster, host) as well as DRS affinity/anti-affinity VM-to-Host/VM-to-VM settings
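For reference, the Zones() contract from the issue title is what VCP would need to implement. The sketch below shows a simplified version of that interface together with a hypothetical tag-based implementation; the getTagsForCurrentVM helper and the k8s-io-* category names are made up for illustration and are not part of any existing API:

```go
// Sketch only: a simplified Zones-style interface as used by in-tree cloud
// providers, plus a hypothetical VCP implementation mapping vSphere tags to
// region/zone. Exact upstream package paths and signatures differ by version.
package vsphere

import (
	"context"
	"errors"
)

// Zone mirrors cloudprovider.Zone: FailureDomain feeds the
// failure-domain.beta.kubernetes.io/zone label, Region the .../region label.
type Zone struct {
	FailureDomain string
	Region        string
}

// VSphere stands in for the VCP type holding config and vCenter connections.
type VSphere struct{}

// GetZone is what the kubelet calls on startup to label its own Node object.
func (vs *VSphere) GetZone(ctx context.Context) (Zone, error) {
	tags, err := vs.getTagsForCurrentVM(ctx) // hypothetical helper
	if err != nil {
		return Zone{}, err
	}
	z := Zone{Region: tags["k8s-io-region"], FailureDomain: tags["k8s-io-zone"]}
	if z.Region == "" || z.FailureDomain == "" {
		return Zone{}, errors.New("region/zone tags not found for this VM")
	}
	return z, nil
}

// getTagsForCurrentVM would resolve the local VM (e.g. by BIOS UUID) and read
// its attached tags via the vSphere tagging API or a local guestinfo channel.
func (vs *VSphere) getTagsForCurrentVM(ctx context.Context) (map[string]string, error) {
	return nil, errors.New("not implemented in this sketch")
}
```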

Phase 2:
The improvements to VCP zone support suggested in phase 1 would solve initial placement and correct labelling of VMs and kubelets in a vSphere environment. Since many commercial Kubernetes distributions, e.g. OpenShift, test and certify against VCP, enriching the VCP with this functionality would benefit all vSphere customers running any Kubernetes distribution.

However, vSphere offers advanced features like dynamic cluster rebalancing and high availability for VMs, which even in a Kubernetes environment provide a lot of value. That's why on "day 2", the initial labelling applied by the Kubelet could change (e.g. after HA or vMotion/DRS) and thus break Kubernetes scheduling assumptions/decisions.

This is why a monitoring/reconciliation control loop is needed. This could be implemented as a controller inside Kubernetes and, in fact, was demonstrated by @anfernee during a recent SIG VMware community call.

From a testing/certification perspective, VMware would need to work jointly with vendors of commercial Kubernetes distributions so that this controller is recommended and shipped out of the box for production environments, allowing customers to continue gaining the benefits of the vSphere platform and protecting their investment.

@frapposelli @cantbewong

Environment:

  • Kubernetes version (use kubectl version): all versions affected
  • Cloud provider or hardware configuration: vSphere Cloud Provider
  • OS (e.g. from /etc/os-release): n/a
  • Kernel (e.g. uname -a): n/a
  • Install tools: n/a
  • Others: DRS/HA enabled, no manual labelling applied, DRS/HA rules (affinity, restart priority, etc.) not tuned/aligned with Kubernetes topology

Related Issues/Discussions

@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. kind/feature Categorizes issue or PR as related to a new feature. area/provider/vmware Issues or PRs related to vmware provider and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels May 18, 2018
@hoegaarden
Member

As briefly discussed via Slack, you want to avoid communication with the vSphere API while the kubelet is bootstrapping; hence the metadata service and/or other options. I wonder if there is any work going on regarding that metadata service, or other options for the kubelet to get some data from the ESXi host or other parts of the system. If so, could you share some links to PRs, branches or the like?

@dougm
Member

dougm commented May 18, 2018

@embano1 sounds good on the first pass, need to think through it some more, but a few notes for now:

We need Go bindings for the tagging API. There have been several requests to add support in govmomi and govc: vmware/govmomi#957

As for metadata, there are at least two ways for vCenter and guests to share data without a network connection: "guestinfo" and "namespaceDB". Both are key-value-like stores where, from within the guest, data can be read/written via guest RPC over VMCI or the VM backdoor. Standard vmware-tools ships with vmware-rpctool and vmware-namespace-cmd, which can communicate over these channels, and Go programs can also use the vmware/vmw-guestinfo package to avoid depending on those tools (see the sketch below).
The API can read/write the same data from outside the guest. API support for guestinfo already exists in govmomi; namespaceDB does not yet: vmware/govmomi#1123
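To make the guestinfo path concrete, a minimal in-guest read could look like the sketch below. It assumes the rpcvmx API (NewConfig/String) behaves roughly as shown, including prepending the guestinfo. prefix to keys; the k8s.region/k8s.zone keys are hypothetical and would have to be set on the VM beforehand, e.g. via the vSphere API or govc:

```go
// Sketch: read hypothetical region/zone hints from guestinfo inside the guest
// using the vmware/vmw-guestinfo package mentioned above. The exact rpcvmx
// signatures and key-prefix behavior are assumptions, not verified API docs.
package main

import (
	"fmt"
	"log"

	"github.com/vmware/vmw-guestinfo/rpcvmx"
)

func main() {
	cfg := rpcvmx.NewConfig()

	// Assumed to resolve guestinfo.k8s.region / guestinfo.k8s.zone.
	region, err := cfg.String("k8s.region", "")
	if err != nil {
		log.Fatalf("reading region: %v", err)
	}
	zone, err := cfg.String("k8s.zone", "")
	if err != nil {
		log.Fatalf("reading zone: %v", err)
	}

	fmt.Printf("region=%q zone=%q\n", region, zone)
}
```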

@hoegaarden
Member

@dougm Are you saying that a metadata-service is not needed, but guestinfo/namespaceDB should be used instead?

@dougm
Member

dougm commented May 21, 2018

@hoegaarden yes I think guestinfo/namespaceDB could solve this problem rather than having to define+build a new metadata-service.

@hoegaarden
Member

@dougm So you're thinking about configuring the region & zone info into, e.g., the guestinfo of each created VM, so that it can be queried from within the VM via vmware-rpctool, right? In other words: each VM would need to be configured with the region/zone individually.

If I understand the initial proposal correctly, @embano1 thought about tagging only the host (and/or the cluster and/or the DC) but not each individual VM. And then have the metadata service collect all those tags and flatten them.

My questions here:

  • Is it possible for the guestinfo to be automatically populated with some data from the host, the cluster or the DC the VM was created on?
  • When a VM gets migrated to a different host, can the guestinfo be updated automatically too (-> phase 2)?

@dougm
Member

dougm commented May 22, 2018

Populating the guestinfo and keeping it in sync is one option. We can use property collector notifications to sync after a migration, for example (see the sketch below). I need to take a closer look at the namespaceDB option, but there is an event queue designed for this type of interaction. George updated vmware/govmomi#1123 with some of the advantages of namespaceDB over guestinfo.
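As an illustration of the property collector idea, a watcher on a VM's runtime.host could look roughly like this with govmomi (a sketch assuming an already connected *govmomi.Client; what to do on a host change is left as a stub):

```go
// Sketch: watch a VM's runtime.host via the property collector and react when
// the VM is migrated to a different ESXi host (vMotion/HA).
package drift

import (
	"context"
	"log"

	"github.com/vmware/govmomi"
	"github.com/vmware/govmomi/property"
	"github.com/vmware/govmomi/vim25/types"
)

// watchHost blocks and invokes the callback whenever runtime.host changes.
func watchHost(ctx context.Context, c *govmomi.Client, vmRef types.ManagedObjectReference) error {
	pc := property.DefaultCollector(c.Client)

	return property.Wait(ctx, pc, vmRef, []string{"runtime.host"}, func(changes []types.PropertyChange) bool {
		for _, change := range changes {
			if host, ok := change.Val.(types.ManagedObjectReference); ok {
				log.Printf("VM %s now runs on host %s - re-check zone tags", vmRef.Value, host.Value)
				// Phase 2 idea: compare the new host's zone tag with the
				// node's current label and report/taint on drift (stub).
			}
		}
		return false // keep watching
	})
}
```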

@embano1
Member Author

embano1 commented May 23, 2018

I am a little bit hesitant about changing the labels of the kubelet VM after a vMotion/HA operation. Some questions that come to my mind:

  • What happens to the scheduler logic (cache) if you update kubelet labels? Is there a unit test already proving it works as expected?
  • What happens to pods if placement policies, e.g. anti-affinity, are violated after a vMotion and label adjustment? We have to keep changes to the K8s scheduler and their consequences for customers (also running different K8s versions) in mind when we make these decisions. Quoting the docs:

“IgnoredDuringExecution” means that the pod will still run if labels on a node change and affinity rules are no longer met. There are future plans to offer requiredDuringSchedulingRequiredDuringExecution which will evict pods from nodes as soon as they don’t satisfy the node affinity rule(s).

  • After HA, changing the labels of a failed-over VM ("should" soft rule) could lead to pods which need a restart not being scheduled because of an anti-affinity violation - this might not be what customers want in case of HA. vSphere HA also has a nice advantage because it typically reacts faster than the Kubernetes API declares a node unrecoverable (default 5 min). If the kubelet VM comes up quickly, pods will typically restart on that kubelet (at least in my tests; this might need proper validation).

  • Custom community/unknown controllers like descheduler could potentially act based on changed labels, which again in case of HA or vMotion (maintenance) would not be what customers expect.

For the reasons listed above, I'd rather leave the labels as applied initially and go with a recommendation engine for phase 2, which reports out-of-compliance status and allows an admin to trigger reconciliation actions that ship with the controller (CRD). This is comparable to DRS partially automated mode, i.e. initial placement and then recommendations only. E.g. (unfinished thoughts):

# query current compliance status (CRD) 
$ kubectl status vSphere

+-----------+-------------------+----------------------------------+
|    VM     |      Status       |             Details              |
+-----------+-------------------+----------------------------------+
| worker-01 | Compliant         |                                  |
| worker-02 | Out of compliance | DRS Anti-Affinity rule violated  |
+-----------+-------------------+----------------------------------+

# get details
$ kubectl describe vSphere worker-02
....
Status: Out of compliance
Recommendation: Migration to ESXi host ESX-1
....

# apply recommendation
$ kubectl rebalance vSphere worker-02
worker-02 successfully migrated to ESXi-1

@embano1
Member Author

embano1 commented May 28, 2018

On the issue of updating Kubelet labels after initialization: #59314

kubelet ownership of its own labels is not deterministic and is problematic. Kubelet updating labels on an existing Node API object on start is not the direction we want to go, since it removes the ability to centrally manage those labels via the API

@fdhex

fdhex commented May 29, 2018

Thanks @embano1 for raising this issue. I also support the approach where we do not update the labels, as in my understanding that would create more issues than it resolves in a DRS-activated cluster.

@hoegaarden
Member

I think the labels are important to manage somehow, especially with vMotion or the like in mind. However, I believe that is definitely phase 2 and I am not too worried about it right now. I also like the idea of the CRD-managed suggestions/rebalance.

I think right now, for phase 1, it'd be important to figure out how we actually share the region/zone information from the hosts to the guests.
I am not sure if there is currently consensus about the way forward:

  • a metadata service the CP calls out to
  • using guestinfo or namespaceDB directly from the CP

I'd love to come to an agreement on how the information of region/zone is passed into the VM / can be queried by the VM, so we can start to work on that. Having said that, chances are good I am missing some information and people are already working on that. In that case, please let me know :)

@dougm
Member

dougm commented Jun 22, 2018

/assign @jiatongw

@k8s-ci-robot
Contributor

@dougm: GitHub didn't allow me to assign the following users: jiatongw.

Note that only kubernetes members and repo collaborators can be assigned.

In response to this:

/assign @jiatongw

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@dougm
Member

dougm commented Jun 22, 2018

/assign

@embano1
Member Author

embano1 commented Jul 16, 2018

Update after discussion with @jiatongw on open questions for phase 1:

vSphere categories map to Kubernetes well-known labels (region and zone), e.g.

[
  {
    "category": [
      {
        "description": "Represents a well-known label in Kubernetes mapping to failure-domain.beta.kubernetes.io/region",
        "allowEmpty": false,
        "name": "k8s-io-region",
        "tags": [
          "EMEA",
          "US"
        ]
      },
      {
        "description": "Represents a well-known label in Kubernetes mapping to failure-domain.beta.kubernetes.io/zone",
        "allowEmpty": false,
        "name": "k8s-io-zone",
        "tags": [
          "Cluster-ABC-Site-A",
          "Cluster-ABC-Site-B"
        ]
      }
    ]
  }
]

These categories would be applied at the VM level to allow drift detection, e.g. after vMotion; a sketch of reading such tags follows below. Also, these tags may only be associated with VMs, and only one tag per category is allowed (options in vSphere when creating categories).
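As an illustration of reading such VM-level tags, a lookup via govmomi's vapi/tags package could look roughly like the sketch below (error handling trimmed; the k8s-io-region/k8s-io-zone category names come from the example above and would be configurable in practice):

```go
// Sketch: read the tags attached to a VM through the vSphere tagging API
// (govmomi vapi/tags) and map the category names to region/zone values.
// Assumes an authenticated rest.Client and the VM's managed object reference.
package zones

import (
	"context"

	"github.com/vmware/govmomi/vapi/rest"
	"github.com/vmware/govmomi/vapi/tags"
	"github.com/vmware/govmomi/vim25/mo"
)

// zoneFromTags returns (region, zone) for the given VM using the category
// names from the example above.
func zoneFromTags(ctx context.Context, rc *rest.Client, vm mo.Reference) (string, string, error) {
	m := tags.NewManager(rc)

	attached, err := m.GetAttachedTags(ctx, vm)
	if err != nil {
		return "", "", err
	}

	var region, zone string
	for _, t := range attached {
		cat, err := m.GetCategory(ctx, t.CategoryID)
		if err != nil {
			return "", "", err
		}
		switch cat.Name {
		case "k8s-io-region":
			region = t.Name
		case "k8s-io-zone":
			zone = t.Name
		}
	}
	return region, zone, nil
}
```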

Category and tag names have to be configurable by the vSphere admin. A recommendation could be "k8s-io-<...>" to avoid parsing errors with some special characters.
Configuration could happen in vSphere.conf (the cloud provider configuration file), which is a static file on the control plane VMs (it could also be created as a ConfigMap).

If the vSphere admin does not create the tags, or misspells them, the implementation can be configured to warn or fail when the kubelet starts (see allowEmpty).

The implementation would also add another failure-domain (and thus topologyKey), specific to vSphere. The ESXi host is also a failure domain (multiple VMs on one host). The Kubernetes scheduler should be aware of this failure domain by adding a custom topologyKey, e.g. beta.cna.vmware.com/host (the exact label needs discussion with SIG VMware).

By having the zone locality tag associated at the VM level, a controller could check for drift, e.g. after HA or vMotion (phase 2). Reconciliation behavior is customer-specific. One approach without disruption to existing workloads could be taints: the controller would taint the migrated VM (kubelet) so that existing workloads continue to run (effect: NoSchedule) but no new pods can be scheduled there (since the scheduler still sees the old host/zone label); see the sketch after this paragraph. The advantage is that this would work nicely with HA, because after HA the kubelet would register with the correct zone/host label. (More brainstorming needed.)
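A minimal sketch of that tainting idea, using a current client-go (the context-taking signatures; client-go at the time of this issue looked slightly different) and a made-up taint key:

```go
// Sketch: taint a drifted node with NoSchedule so existing pods keep running
// but no new pods land there. The taint key vsphere.zone/drift is a
// placeholder, not an upstream convention.
package reconcile

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func taintDriftedNode(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}

	taint := corev1.Taint{
		Key:    "vsphere.zone/drift",
		Value:  "true",
		Effect: corev1.TaintEffectNoSchedule,
	}
	for _, t := range node.Spec.Taints {
		if t.Key == taint.Key {
			return nil // already tainted
		}
	}
	node.Spec.Taints = append(node.Spec.Taints, taint)

	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}
```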

Also todo: double-check with @tusharnt whether we need modifications for persistent volume controllers. Since vSphere storage (VMFS, vSAN) is typically shared across all nodes in the cluster, we expect no issues out of the box. Any concerns for non-uniform stretched storage clusters?

From Kubernetes docs:

If PersistentVolumeLabel does not support automatic labeling of your PersistentVolumes, you should consider adding the labels manually (or adding support to PersistentVolumeLabel), if you want the scheduler to prevent pods from mounting volumes in a different zone. If your infrastructure doesn’t have this constraint, you don’t need to add the zone labels to the volumes at all.

@jiatongw
Member

Latest update:

The tagging would be applied at the host level. We can use VM.Runtime.Summary.Host to detect drift of VMs.
We decided to keep phase 1 as an opt-in feature and keep it as simple as possible. The changes are:

  • Add a new Labels field to the vSphere configuration file
  • Add zone and region properties to the [Labels] field
  • legacyMode and allowEmpty will not be considered

An example of the vSphere configuration file is shown below:

[Global] 
       ...
[WorkSpace]
       ...
[Network]
       ...
[Labels]
       zone = "k8s-io-zone"
       region = "k8s-io-region"

If users don't provide the [Labels] field, the behavior will be the same as in the old version (see the parsing sketch below for how this could be read).
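For context, the in-tree VCP reads its configuration with gcfg, so a [Labels] section like the one above could be parsed roughly as in the sketch below (not the actual VCP source; the struct simply mirrors the example):

```go
// Sketch: parse the [Labels] section from a vsphere.conf-style file with
// gcfg, mirroring the example configuration above. Other sections omitted.
package main

import (
	"fmt"
	"log"

	gcfg "gopkg.in/gcfg.v1"
)

type vSphereConfig struct {
	Labels struct {
		Zone   string // vSphere tag category used for the zone label
		Region string // vSphere tag category used for the region label
	}
}

func main() {
	const conf = `
[Labels]
zone = "k8s-io-zone"
region = "k8s-io-region"
`
	var cfg vSphereConfig
	if err := gcfg.ReadStringInto(&cfg, conf); err != nil {
		log.Fatal(err)
	}
	// Empty values mean the operator opted out; fall back to legacy behavior.
	fmt.Printf("zone category=%q region category=%q\n", cfg.Labels.Zone, cfg.Labels.Region)
}
```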

@embano1
Member Author

embano1 commented Jul 27, 2018

Update from a chat with @hoegaarden on CFCR implications with these changes:

Phase 1

  • Only minor changes would be needed on the CFCR side to adopt this change in VCP
  • The benefit is to move away from hard-coding it into kubelet --node-labels when joining the cluster (which has several disadvantages)
  • Since we provide a fallback (i.e. not setting [Labels]), the current AZ implementation in CFCR can be carried forward and used as well (e.g. for Kubernetes clusters not supporting this new feature)
  • Besides the changes in CFCR, the BOSH vSphere CPI docs would have to be updated (the vSphere admin needs to configure categories/tags for the ESXi hosts unless BOSH does this in the future)
  • The team is happy to assist in E2E testing as well as in integrating namespaceDB support to avoid RPCs to vCenter

Phase 2

  • The team is happy to assist in writing a monitoring controller (CRD) which would report drift and could optionally taint nodes with drifting labels as NoSchedule

k8s-github-robot pushed a commit that referenced this issue Aug 8, 2018
Automatic merge from submit-queue (batch tested with PRs 67052, 67094, 66795). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Add zones support for vSphere cloud provider (in-tree)

**What this PR does / why we need it**:
This PR adds zones (built-in node labels) support for the vSphere cloud provider (in-tree). More details can be found in the issue below.

**Which issue(s) this PR fixes** :
Partially fixes phase 1 of issue #64021 

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
dougm added a commit to dougm/kubernetes that referenced this issue Aug 22, 2018
Update required to continue work on kubernetes#64021

- The govmomi tag API changed

- Pulling in the new vapi/simulator package for testing the VCP Zones impl
k8s-github-robot pushed a commit that referenced this issue Aug 23, 2018
Automatic merge from submit-queue (batch tested with PRs 66973, 67704, 67722, 67723, 63512). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

godeps: update vmware/govmomi

**What this PR does / why we need it**:

Update required to continue work on #64021

- The govmomi tag API changed

- Pulling in the new vapi/simulator package for testing the VCP Zones impl

**Release note**:

```release-note
NONE
```
dougm added a commit to dougm/kubernetes that referenced this issue Aug 23, 2018
- Add tests for GetZones()

- Fix bug where a host tag other than region or zone caused an error

- Fix bug where GetZones() errored if zone tag was set, but region was not

Follow up to PR kubernetes#66795 / towards kubernetes#64021
k8s-github-robot pushed a commit that referenced this issue Aug 23, 2018
Automatic merge from submit-queue (batch tested with PRs 66980, 67604, 67741, 67715). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

vsphere: add tests for Cloud Provider Zones implementation

**What this PR does / why we need it**:

- Add tests for GetZones()

- Fix bug where a host tag other than region or zone caused an error

- Fix bug where GetZones() errored if zone tag was set, but region was not

Follow up to PR #66795 / towards #64021

**Release note**:

```release-note
NONE
```
dougm added a commit to dougm/kubernetes that referenced this issue Aug 23, 2018
Rather than just looking for zone tags at the VM's Host level, traverse up the hierarchy.
This allows zone tags to be attached at host level, along with cluster, datacenter, root folder
and any inventory folders in between.

Issue kubernetes#64021
hh pushed a commit to ii/kubernetes that referenced this issue Aug 27, 2018
Automatic merge from submit-queue (batch tested with PRs 54935, 67768, 67896, 67787). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

vsphere: support zone tags at any level in the hierarchy

**What this PR does / why we need it**:

Rather than just looking for zone tags at the VM's Host level, traverse up the hierarchy.
This allows zone tags to be attached at host level, along with cluster, datacenter, root folder
and any inventory folders in between.

Issue kubernetes#64021

Example log output from the tests, with tags attached at host level:
```console
Found "k8s-region" tag (k8s-region-US) for e85df495-93b9-4b0e-96f1-dc9d56e97263 attached to HostSystem:host-19
Found "k8s-zone" tag (k8s-zone-US-CA1) for e85df495-93b9-4b0e-96f1-dc9d56e97263 attached to HostSystem:host-19
```
And region tag at Datacenter level and zone tag at Cluster level:
```console
Found "k8s-zone" tag (k8s-zone-US-CA1) for e85df495-93b9-4b0e-96f1-dc9d56e97263 attached to ComputeResource:computeresource-21
Found "k8s-region" tag (k8s-region-US) for e85df495-93b9-4b0e-96f1-dc9d56e97263 attached to Datacenter:datacenter-2
```

**Release note**:

```release-note
NONE
```
@kacole2
Member

kacole2 commented Sep 6, 2018

I was recently made aware of this issue; however, it's not being tracked for the Kubernetes 1.12 release. Is there a reason why an issue in kubernetes/features was never opened? I only ask because it doesn't have any visibility to folks on the release team, so it may not be added to blogs, release notes, etc.

@dougm
Member

dougm commented Sep 6, 2018

We were missing a release note, but #66795 has it now and it'll be included in the next generation of CHANGELOG-1.12.md

I think we skipped k/features for the same reason as kubernetes/enhancements#501 (comment)

"Since this is an entirely VMWare feature, it does not need to be tracked here."

The Zones feature already existed; this was just the vSphere implementation, so we assumed approval was not required. But we had not considered how a k/features issue would be used in docs, etc.
I see there's a k/features issue for Azure Zones; is it too late to add one for vSphere Zones?

@kacole2
Member

kacole2 commented Sep 6, 2018

@dougm thanks for the clarification. I'm still learning over here too. Let me chat with some release folks and see where the gray area is.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 5, 2018
@kacole2
Member

kacole2 commented Dec 5, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 5, 2018
@kacole2
Member

kacole2 commented Dec 5, 2018

@dougm @embano1 would this be a good time to submit a KEP to k/enhancements that will be filed under SIG-VMware?

@davidkarlsen
Member

How about using dmidecode to get the UUID? For smaller sites it is interesting enough to not land on the same ESX host.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 8, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 7, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
