Use node name for service election and lease holder name instead of hostname #811
Conversation
Signed-off-by: Danil Uzlov <36223296+d-uzlov@users.noreply.github.com>
@@ -29,6 +29,7 @@ func findSelf(c *packngo.Client, projectID string) *packngo.Device {
	// Go through devices
	dev, _, _ := c.Devices.List(projectID, &packngo.ListOptions{})
	for _, d := range dev {
		// TODO do we need to replace os.Hostname with config.NodeName here?
		me, _ := os.Hostname()
Maybe here we also want to use the node name as reported by k8s, but since I don't have a BGP setup to test it, I left it as is to avoid breaking things.
if id == address.Hostname {
	log.Debugf("[%s] found local endpoint - address: %s, hostname: %s", ep.label, address.IP, address.Hostname)
	localEndpoints = append(localEndpoints, address.IP)
	continue
Here and in watch_endpointslices.go I removed the short-hostname-to-node-name matching, because k8s already provides the correct value. I check address.NodeName first, because the documentation states "This can be used to determine endpoints local to a node". address.Hostname doesn't seem relevant to me, but I left it as a second check, just in case it matters in some setups. Maybe it should also be removed.
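The "NodeName first, Hostname second" matching described above can be sketched like this. The struct mirrors the relevant fields of the Kubernetes `EndpointAddress` type (where `NodeName` is actually a pointer), and the logic is a simplified illustration rather than the exact kube-vip code:

```go
package main

import "fmt"

// endpointAddress mirrors the fields of corev1.EndpointAddress that
// matter here; NodeName is a *string in the real API, simplified to a
// plain string for this sketch.
type endpointAddress struct {
	IP       string
	NodeName string
	Hostname string
}

// isLocal reports whether an endpoint address belongs to the node with
// the given id: NodeName is checked first, with Hostname kept only as
// a secondary check when NodeName is absent.
func isLocal(addr endpointAddress, id string) bool {
	if addr.NodeName != "" {
		return addr.NodeName == id
	}
	return addr.Hostname == id
}

func main() {
	addrs := []endpointAddress{
		{IP: "10.0.0.1", NodeName: "worker-1"},
		{IP: "10.0.0.2", Hostname: "worker-2"},
	}
	var local []string
	for _, a := range addrs {
		if isLocal(a, "worker-1") {
			local = append(local, a.IP)
		}
	}
	fmt.Println(local) // prints [10.0.0.1]
}
```

Note that when `NodeName` is set it wins outright; `Hostname` is never consulted, which is the exact-match behavior this PR moves to.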
Unfortunately this is going to break all kube-vip installations and upgrades moving forward. I think falling back to hostname detection when a node name isn't manually specified should probably remain.
I would suggest keeping the old behavior and skipping it if your new environment variable is mounted. Additionally, when the code runs in control-plane mode the host name is needed, as there is no node object.
This would be great, I think we'll just need to ensure that default/expected functionality remains :)
When making these changes I thought of using kube-vip to generate new deployment files. This way there will be no issues during the upgrade. I'll push an update commit for this.
What do you mean by this?
When kube-vip is managing the HA VIP for the control plane (when being brought up by …
Well, kube-vip is already using …
@thebsdbox
Yes please, that way we can be sure that it's working as expected 😀
@thebsdbox I made sure that the existing tests run fine for me locally, and tried to add tests for node names. Ideally we would want to create nodes with modified names, but kind doesn't seem to allow it, so I tried to at least test something.
For e2e tests I modified the main manifest and the etcd manifest to use the new node name env, and then added a new e2e test that uses a manifest without this env, to check that old deployments will still work. For service tests I figured out I can modify the cluster a bit before running kube-vip.
I also wanted to update the documentation, but it turns out it's in a separate repo, so I guess maybe later, after this PR is merged.
@thebsdbox we need to make sure that new features also get properly documented; I think we already have some drift between the code and the docs.
100%. My plan to separate the repos was meant to make things easier; sadly it made this worse. Perhaps updating the templates in a corresponding website PR would make sense.
All tests are passing. I'll have one more look over, but is this ready to merge?
For me this PR is ready to merge; I don't have anything more to change.
Addresses this issue: kube-vip assumes nodename == hostname for service election, which easily breaks (#810).
After this change kube-vip should be able to work with any node names. I tried to remove all notions of hostnames from the code.
This PR adds a vip_nodename env and a nodeName command line argument. kube-vip will fall back to the hostname if it is not provided.
By default the node name is injected via the k8s downward API into all yaml deployments generated by kube-vip.
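The downward-API injection mentioned above would look roughly like this in a generated manifest. A minimal sketch, assuming the env var is named `vip_nodename` as described in this PR; `fieldRef`/`spec.nodeName` are the standard Kubernetes downward-API fields:

```yaml
# Container env fragment: expose the node's name to kube-vip.
env:
  - name: vip_nodename
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```

Because the value comes from `spec.nodeName` on the pod, it matches the Node object's name exactly, regardless of what the OS hostname is.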
This PR also removes the logic for short name and FQDN matching, because it is now handled by k8s. kube-vip now only matches the exact node name from the config against the exact node name from the endpoint/endpointslice.
This is quite a simple change, but it affects a lot of things in the project.
I tested this change locally via this image: docker.io/daniluzlov/k8s-snippets:kube-vip-0.7.2-nodename2.
I tested both static pods for apiserver HA, and the service loadbalancer with and without service election. However, I can only test with ARP announcements; I don't have the setup to test BGP.