🐛 Delete node explicitly upon deletion of vm #1551
Conversation
Skipping CI for Draft Pull Request.
Force-pushed from bf9b91e to d8f9588.
This patch adds logic to delete the k8s node corresponding to the VM from the cluster. This is necessary in cases where the node never reaches the Ready state and is therefore never removed from the cluster, which leaves a lot of stale nodes on a cluster with an active MHC.
Co-authored-by: Sagar Muchhal <muchhals@vmware.com>
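A minimal sketch of the core idea, assuming a controller-runtime client scoped to the workload cluster; the function name `deleteNode` is hypothetical and the PR's actual implementation may differ:

```go
package vsphere

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// deleteNode makes a best-effort attempt to remove the Kubernetes Node that
// corresponds to the VM being deleted; the node shares its name with the
// owning CAPI Machine. NotFound is tolerated so the call is safe to retry
// and succeeds when the node never registered in the first place.
func deleteNode(ctx context.Context, workloadClient client.Client, name string) error {
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: name}}
	if err := workloadClient.Delete(ctx, node); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return nil
}
```

Tolerating NotFound matters here because the motivating case is a node that never became Ready: it may not exist in the workload cluster at all, and the deletion must not fail the VM's reconcile loop in that case.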
Force-pushed from d8f9588 to 0030f1f.
/retest
Force-pushed from 4594d99 to 6657871.
Signed-off-by: Sagar Muchhal <muchhals@vmware.com>
Force-pushed from 6657871 to c35aeb1.
/lgtm
/assign @yastij
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: srm09. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
[release-1.3] Automated cherry pick of #1551: Adds API group to e2e script
What this PR does / why we need it:
Nodes reaped by the MHC (which are Not Ready) still show up in the output of kubectl get nodes. The name of the k8s node is the same as the name of the owning CAPI Machine. This PR is a workaround that explicitly attempts to delete the node after the VSphereVM is deleted; see the sketch after the issue reference below. This PR builds on top of #1521.
Which issue(s) this PR fixes:
Fixes #1519
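For context, a sketch of where the cleanup could sit in the deletion flow, continuing the earlier `deleteNode` sketch. The `vmReconciler` type and the `workloadClientFor` field are hypothetical stand-ins, not the PR's actual code; how the workload cluster client is obtained (e.g. via CAPI's remote package) is deliberately elided:

```go
// vmReconciler is a pared-down stand-in for the controller's reconciler type.
type vmReconciler struct {
	// workloadClientFor returns a client scoped to the workload cluster.
	workloadClientFor func(ctx context.Context) (client.Client, error)
}

// reconcileDelete sketches the workaround: once the backing vSphere VM has
// been torn down, best-effort delete the Node named after the owning Machine.
func (r *vmReconciler) reconcileDelete(ctx context.Context, machineName string) error {
	// ... power off and destroy the vSphere VM first ...

	workloadClient, err := r.workloadClientFor(ctx)
	if err != nil {
		return err
	}
	// deleteNode (sketched earlier) tolerates NotFound, so a node that never
	// registered or was already reaped does not fail the reconcile.
	return deleteNode(ctx, workloadClient, machineName)
}
```

Because the node name is derived from the owning CAPI Machine's name, the VM controller can target the stale Node directly without listing or labeling nodes in the workload cluster.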
Special notes for your reviewer:
Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.
Release note: