Bug 1804596: Continue deleting the instance if its ports haven't been destroyed #89
Conversation
If the instance is not Active, then when we try to delete it, Neutron may fail to delete its ports, and therefore we can't delete the instance after that. This patch accepts the fact that ports may not be removed, and continues deleting the instance anyway.
@Fedosin: This pull request references Bugzilla bug 1804596, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The code looks good, but is there a way to handle the deletion hanging without orphaning resources? Maybe a retry loop or something like that?
/hold
In Slack we had agreed that a good middle ground would be to add a custom tag to the ports so the admin knows they were created by CAPO. This should make it much easier to find them in a clean-up script.
I put some more thought into this. I am going to make an additional pull request to the installer that adds a tag for all resources created by CAPO: $clusterID-machine-infrastructure. On top of that, this PR should be modified to add a tag:
```diff
@@ -756,7 +756,7 @@ func (is *InstanceService) InstanceCreate(clusterName string, name string, clust
 	return serverToInstance(server), nil
 }

-func (is *InstanceService) InstanceDelete(id string) error {
+func (is *InstanceService) deleterInstancePorts(id string) error {
```
Is the `deleterInstancePorts` function name a typo? Should it be `deleteInstancePorts` instead?
I understand that tagging makes sense in this particular case, but in my (limited) experience, many resources that we fail to delete we would also fail to tag. I am reluctant to give the impression that running a script based on "tag:orphaned" will reliably collect all the orphaned ports.
That is a good point. The alternative is that we just leave the resources orphaned in their cluster until this gets fixed in Nova. :) Not ideal either way.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: Fedosin, pierreprinetti The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/hold cancel
/hold
/hold cancel
@Fedosin: All pull requests linked via external trackers have merged: openshift/cluster-api-provider-openstack#89. Bugzilla bug 1804596 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
* Implemented bootstrap tokens via cluster-bootstrap. This adds a new dependency on k8s.io/cluster-bootstrap. Shelling out to kubeadm for token generation has been removed from machineactuator.go; instead, the tools from cluster-bootstrap generate a kubeadm-compliant token name and create a secret from it. This implementation does not follow the approach kubeadm currently uses, because that would pull in a whole bunch of other dependencies, which are IMHO not needed at this point. Fixes openshift#78
* Renamed TokenExpiration to TokenTTL
* Increased TokenTTL to 60 minutes
* Added a link to the inspiration for bootstrap/token.go
* Changed error handling to panic and removed needless lines