Backport node anti affinity and node labeling features #1638
Conversation
This patch introduces a change to the VSphereClusterStatus by adding a field that reports the version of the vCenter instance referred to by the VSphereCluster object.

Signed-off-by: Sagar Muchhal <muchhals@vmware.com>
(cherry picked from commit 30e9ea4)
This patch exposes the logic that creates/deletes a cluster module construct for every cluster-api object responsible for creating new Machine/VSphereVM/K8s node objects. For each such object, CAPV creates a cluster module that provides best-effort anti-affinity placement of the VMs belonging to that CAPI object across the ESXi hosts. This patch also introduces a new feature flag that gates the anti-affinity functionality. The flag is named `NodeAntiAffinity` and can be set/reset by setting the `EXP_NODE_ANTI_AFFINITY` environment variable to true/false.

Signed-off-by: Sagar Muchhal <muchhals@vmware.com>
(cherry picked from commit 714af1d)
Signed-off-by: Sagar Muchhal <muchhals@vmware.com>
(cherry picked from commit 71b4f73)
This patch introduces a new field in the Status of the VSphereVM object that records the ESXi host the VM is placed on. This information is propagated to the VSphereMachine object and then added to the Machine object as a label with the key node.cluster.x-k8s.io/esxi-host.

Signed-off-by: Sagar Muchhal <muchhals@vmware.com>
(cherry picked from commit e3c99d5)
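A minimal sketch of the label construction described above, assuming a hypothetical helper name; the label key is the one named in the commit message:

```go
package main

import "fmt"

// esxiHostLabelKey is the label key this backport uses to surface, on the
// Machine object, the ESXi host reported in VSphereVM.Status.
const esxiHostLabelKey = "node.cluster.x-k8s.io/esxi-host"

// esxiHostLabel (illustrative helper) builds the label map that would be
// merged into the Machine's labels for a VM placed on the given host.
func esxiHostLabel(host string) map[string]string {
	return map[string]string{esxiHostLabelKey: host}
}

func main() {
	fmt.Println(esxiHostLabel("esxi-01")[esxiHostLabelKey]) // prints esxi-01
}
```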
The purpose of this controller is to label all the Kubernetes nodes whose Machines have labels carrying a specific prefix. This functionality is being proposed in CAPI, and this controller will be removed once such functionality exists upstream.

Signed-off-by: Sagar Muchhal <muchhals@vmware.com>
(cherry picked from commit 9967e2d)
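The core of such a controller is selecting which Machine labels to copy to the node. A minimal sketch, assuming a hypothetical function name and using the `node.cluster.x-k8s.io/` prefix only as an illustrative example of a well-known prefix:

```go
package main

import (
	"fmt"
	"strings"
)

// labelsToSync (illustrative helper) returns the subset of a Machine's
// labels whose keys carry the given prefix; these are the labels a
// node-labeling controller would mirror onto the Kubernetes node.
func labelsToSync(machineLabels map[string]string, prefix string) map[string]string {
	out := map[string]string{}
	for k, v := range machineLabels {
		if strings.HasPrefix(k, prefix) {
			out[k] = v
		}
	}
	return out
}

func main() {
	machineLabels := map[string]string{
		"node.cluster.x-k8s.io/esxi-host": "esxi-01",
		"unrelated-label":                 "ignored",
	}
	fmt.Println(labelsToSync(machineLabels, "node.cluster.x-k8s.io/"))
}
```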
This patch introduces a new feature flag that gates the node labeling functionality. It controls whether the controller responsible for labeling the node objects with special prefixed labels is added to the manager on startup. The flag is named `NodeLabeling` and can be set/reset by setting the `EXP_NODE_LABELING` environment variable to true/false.

Signed-off-by: Sagar Muchhal <muchhals@vmware.com>
(cherry picked from commit 2497cef)
The e2e test checks for the presence of the node.cluster.x-k8s.io/esxi-host label on the nodes of the workload cluster and confirms that the label's value matches the one set on the corresponding VSphereVM object.

Signed-off-by: Sagar Muchhal <muchhals@vmware.com>
(cherry picked from commit 5ef4ef5)
/retest
/cc @yastij
This is a subset of the changes in PR kubernetes-sigs#1602 that was merged into the main branch. It specifically updates the feature gate import so that we do not, by default, expose CAPI feature flags on CAPV that are not usable. This is needed because the node anti affinity and node labeling functionalities are gated behind 2 separate new feature flags.

Signed-off-by: Sagar Muchhal <muchhals@vmware.com>
/test pull-cluster-api-provider-vsphere-full-e2e-release-1-3
This patch adds the logic to fall back to the default resource pool, if no resource pool is specified, during cluster module creation. It also checks whether the resource pool is owned by a compute cluster or a standalone host. In case of the owner being a standalone host, a warning is introduced on the VSphereCluster object and the cluster creation proceeds. Signed-off-by: Sagar Muchhal <muchhals@vmware.com> (cherry picked from commit 1260057)
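The fallback-and-warn decision described above can be sketched as follows. The type and function names are hypothetical, not CAPV's actual code; the sketch only mirrors the behavior the commit message states (default when unspecified, warn but proceed for a standalone-host owner).

```go
package main

import "fmt"

// ownerKind models the two resource-pool owners the patch distinguishes.
type ownerKind int

const (
	ownerComputeCluster ownerKind = iota
	ownerStandaloneHost
)

// resolveResourcePool (illustrative) falls back to a default pool when no
// resource pool is specified, and returns a non-empty warning, while still
// proceeding, when the pool is owned by a standalone host rather than a
// compute cluster.
func resolveResourcePool(specified, defaultPool string, owner ownerKind) (pool, warning string) {
	pool = specified
	if pool == "" {
		pool = defaultPool
	}
	if owner == ownerStandaloneHost {
		warning = "resource pool is owned by a standalone host, not a compute cluster"
	}
	return pool, warning
}

func main() {
	pool, warn := resolveResourcePool("", "Resources", ownerStandaloneHost)
	fmt.Println(pool, warn != "") // prints Resources true
}
```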
/retest
/retest
@srm09: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/approve
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: yastij
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment
What this PR does / why we need it:
Backport for PRs #1628, #1629, #1641 and part of #1602
Which issue(s) this PR fixes:
Fixes n/a
Special notes for your reviewer:
n/a
Release note: