Pick the right OS server group when creating cloud groups #13461
Conversation
Hi @ederst. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This fixes an issue where kops picks the last server group found on OpenStack instead of the right one when getting the cloud groups.

For example, let's assume that kops created these server groups and the OpenStack API returns them in this order:

```
cluster-name-bastion
cluster-name-cp-0
cluster-name-worker
```

Now kops looks for the nodes associated with the IG "bastion", and the expected behavior would be that it ends up using "cluster-name-bastion". However, it actually associates the cloud group with the last server group, in this case "cluster-name-worker", because the reference switches to the last item once the loop is done.

In the worst case this could lead to kops deleting the wrong instances when deleting an IG.

Not using the server group as a "by reference" argument when building the cloud group fixes this behavior.
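The aliasing described above can be sketched as follows. This is a minimal illustration with hypothetical names, not the actual kops code; the buggy variant manually reuses one variable to model how a pointer into a reused loop variable (pre-Go 1.22 range semantics) ends up referring to the last element:

```go
package main

import (
	"fmt"
	"strings"
)

// ServerGroup stands in for the OpenStack server group type (hypothetical).
type ServerGroup struct{ Name string }

// findBuggy models the bug: it keeps a pointer to a variable that is
// overwritten on every iteration, so after the loop the pointer refers
// to whatever was iterated LAST, not to the match.
func findBuggy(sgs []ServerGroup, suffix string) *ServerGroup {
	var cur ServerGroup
	var match *ServerGroup
	for _, sg := range sgs {
		cur = sg // one shared variable, like a reused loop variable
		if strings.HasSuffix(cur.Name, suffix) {
			match = &cur // BUG: aliases cur, which keeps changing
		}
	}
	return match
}

// findFixed returns a pointer to a fresh copy of the matched value
// instead of a reference to the shared variable.
func findFixed(sgs []ServerGroup, suffix string) *ServerGroup {
	for _, sg := range sgs {
		if strings.HasSuffix(sg.Name, suffix) {
			sg := sg // fresh copy per match
			return &sg
		}
	}
	return nil
}

func main() {
	sgs := []ServerGroup{
		{Name: "cluster-name-bastion"},
		{Name: "cluster-name-cp-0"},
		{Name: "cluster-name-worker"},
	}
	fmt.Println("buggy:", findBuggy(sgs, "bastion").Name) // cluster-name-worker
	fmt.Println("fixed:", findFixed(sgs, "bastion").Name) // cluster-name-bastion
}
```

With the buggy lookup, asking for "bastion" yields "cluster-name-worker", matching the misbehavior described in the PR; copying the value before taking its address fixes it.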
Force-pushed from 3c5d8fd to f97d86e
/ok-to-test

/retest
Thanks for fixing, @ederst! Ideally we'd have a linter to pick up on this, but it's surprisingly difficult to create a linter rule that doesn't have a lot of false positives (and I've tried a few times!). I think this is fine to merge without a test - I agree it's relatively hard to test. BTW, another idiom for this is to do something like this:
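The snippet referenced in the comment above is not shown, so this is an assumption about the idiom meant: taking the element's address via its slice index, which is stable across iterations, instead of the address of a reused loop variable (hypothetical names throughout):

```go
package main

import (
	"fmt"
	"strings"
)

// ServerGroup stands in for the OpenStack server group type (hypothetical).
type ServerGroup struct{ Name string }

// findByIndex returns a pointer into the slice itself. &sgs[i] refers
// to element i of the backing array, so it does not change as the loop
// advances.
func findByIndex(sgs []ServerGroup, suffix string) *ServerGroup {
	for i := range sgs {
		if strings.HasSuffix(sgs[i].Name, suffix) {
			return &sgs[i] // pointer to the element, not to a loop variable
		}
	}
	return nil
}

func main() {
	sgs := []ServerGroup{
		{Name: "cluster-name-bastion"},
		{Name: "cluster-name-worker"},
	}
	fmt.Println(findByIndex(sgs, "bastion").Name) // cluster-name-bastion
}
```

Either copying the value or indexing the slice avoids the aliasing; they are effectively equivalent here, which matches the reviewer's remark below.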
You're effectively doing the same thing, so your change looks good. Ideally @zetaab would lgtm, but I'll approve & lgtm and mark for hold. @zetaab (or other OpenStack folk): if you get to this in the next few days, please cancel the hold; otherwise I'll just cancel it.

/approve
looks good, thanks!
/hold cancel
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: justinsb, zetaab. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
…61-origin-release-1.23 Automated cherry pick of #13461: Pick the right OS server group when creating cloud groups
…61-origin-release-1.22 Automated cherry pick of #13461: Pick the right OS server group when creating cloud groups
PS: I wanted to write tests for this, but to be honest I do not know how. I'd appreciate help in that regard, or just ignore it and merge it ;). Thanks!