kubeadm init and kubeadm join auto-detect external and internal IPs incorrectly #1987
hi, i've googled this guide which covers the same problem: this is also a duplicate of: both suggest using --node-ip. in your case it finds the external address.
unless you have NAT exposing the machine ports externally, this is a non-issue. i'm going to assume the workers joined the cluster on the external address, so this means the master machine allows traffic on the api-server port at least. you might want to do a security evaluation of your network setup on these machines before running the cluster in production; googling for related guides should give plenty of results.
the Node "ExternalIP" address type is managed by cloud providers. i cannot find where this is documented, so it might be tribal knowledge, and i do agree that it should be documented somewhere. hope this helps. closing as not a kubeadm bug or FR.
@neolit123: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi neolit123,

Thanks for looking into this - I genuinely appreciate you getting back to me so quickly, especially during the holidays, and in such detail. I appreciate that you must have to filter through a lot of crud on here from things that are not issues, and I fully appreciate that GitHub Issues is not a support forum. In fact, I had already googled this extensively, and had already worked around the issue by specifying --node-ip.

Here's why I still believe this is, if not a bug, at least "user-unfriendly behaviour", worthy of consideration as a feature request or documentation improvement. If this issue belongs elsewhere I'm happy to file it there - please let me know the best place to put it. Time permitting, I'd even be willing to contribute code or revised documentation, should such a contribution be welcomed.

The environment I'm using is not in any way special - it's a Joyent Triton Private Cloud environment, and I'm deploying the nodes using Terraform + cloud-config to bootstrap, with official Canonical Ubuntu Certified 18.04 images, following the official Kubernetes getting-started documentation here: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

The primary interface (net0) has a public 185.x.x.x address, and the secondary interface (net1) has an RFC1918 address (10.28.0.0/24). The default gateway is via net0 on the public IP. This is not, I would assume, unusual - I have always placed the external IP on the first interface and the internal on the secondary.

First, there is no mention in the requirements section that interface ordering is important, and that the external IP must, by convention, be on the secondary interface for Kubernetes to select the internal/external IPs correctly.

Second, there is no way to specify the node IP to "kubeadm init", despite there being a way to specify, for example, --control-plane-endpoint, --pod-network-cidr or --service-cidr. This requires a post-init kludge, of the kind I have now implemented in my cloud-config code.
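A sketch of that kind of kludge - this assumes the kubeadm deb packaging on Ubuntu 18.04, whose kubelet systemd drop-in sources /etc/default/kubelet, and uses an example internal address:

```bash
# force kubelet to advertise the internal address; 10.28.0.10 is an
# example - substitute the node's actual RFC1918 address
echo 'KUBELET_EXTRA_ARGS=--node-ip=10.28.0.10' > /etc/default/kubelet
systemctl restart kubelet
```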
Third, after doing this, ExternalIP remains blank, and there's seemingly no way to tell kubelet what it actually is. Even if this is of no consequence, it may seem concerning to new users - since my nodes do have both internal and external IPs, does this mean I forfeit some feature or ability later? Should I google for a solution? If this is expected, perhaps it should be officially documented.
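For context, this is the kind of status I mean - node name, age and addresses below are illustrative (and columns trimmed), not my real output:

```
$ kubectl get nodes -o wide
NAME      STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP
master0   Ready    master   12m   v1.17.0   185.x.x.x     <none>
```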
It seems entirely reasonable, and hopefully I'm not the only one who thinks so, that any product aiming for user-friendliness should attempt, out of the box, to calculate sane defaults based on sane logic. It would also seem reasonable that, if on a public cloud InternalIP and ExternalIP are both populated with the correct addresses, a private cloud installation should behave the same. The fact that I can't get ExternalIP to display the correct value leaves me feeling short-changed and unfairly cheated. This is currently a baptism of fire for new users, and seemingly unnecessary given some simple adjustments:
I hope this is useful, and I apologise if I am coming across as obtuse; it's not the intention. I imagine most new users simply figure out you use "--node-ip" and leave it at that, but this did add several hours of head-scratching and frustration to an otherwise very pleasant setup experience, and I hope you can see that I am by no means an inexperienced user.
I re-created the master node with the interfaces swapped in the Terraform code (and removed the --node-ip), and it made no difference:
The external IP still shows up as the InternalIP, for unknown reasons:
And net1 now does come before net0 in the ip addr output, for unknown reasons:
I suppose "by convention" doesn't work in all environments.
kubeadm, as a user-facing deployer, often has to filter bugs and documentation issues in other k8s components - in this case, kubelet and kubectl.
the appropriate issue tracker for kubelet and kubectl is kubernetes/kubernetes. but you are outlining separate issues:
not on the CLI at least. we are not adding more flags to kubeadm at this point. search for InitConfiguration and JoinConfiguration in the kubeadm config reference - nodeRegistration.kubeletExtraArgs lets you pass node-ip to the kubelet (a sketch follows after this comment).
as i've mentioned EXTERNAL-IP in the case of Node objects is filled by cloud providers.
seems that i was wrong - the logic for that is documented in source code comments: in any case, this is something that has to be documented in the user-facing kubelet docs - e.g. here:
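For reference, a minimal sketch of that config-file route - the v1beta2 API version and the 10.28.0.10 address are illustrative assumptions, not values from this thread:

```bash
# write a kubeadm config that pins both the kubelet's node IP and the
# api-server advertise address to the internal interface, then init
cat <<EOF > /tmp/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.28.0.10   # example internal IP
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 10.28.0.10          # example internal IP
EOF
kubeadm init --config /tmp/kubeadm-config.yaml
```

A JoinConfiguration can carry the same nodeRegistration.kubeletExtraArgs stanza for worker nodes.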
Thanks for following up, Lubomir, especially during the festive period and in such detail - it's much appreciated. I will try and find some time to follow up on your suggestions regarding documentation improvements. The InitConfiguration and JoinConfiguration work perfectly, which is a neat/clean solution. However, after reading the logic here, it became apparent that there's a simpler workaround: the local hostname should resolve to the internal IP address instead of the external one. On my nodes, it was resolving to the external one, and indeed, adding that mapping in /etc/hosts prior to the kubeadm init/join command solved the issue. I achieved this with this command:
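In sketch form, where 10.28.0.10 stands in for the node's actual internal address:

```bash
# make the local hostname resolve to the internal IP before kubeadm
# init/join runs, so kubelet picks that address
echo "10.28.0.10 $(hostname)" >> /etc/hosts
```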
I'm writing up a blog post about this and will append the link to this issue once done. Thanks again!
i was assuming that this would work for you. i did not find the precedence documented on the k8s.io website, so if you are willing, you can also contribute documentation changes. the website repository is kubernetes/website.
Thanks - I'll try and find some time to do so; I'm keen to contribute to the project in some form. Here's a post about it to help new users hitting the same thing: https://medium.com/@aleverycity/kubeadm-init-join-and-externalip-vs-internalip-519519ddff89
@alaslums Many thanks for such a useful article - it resolved my slow cluster performance problem! It's not at all obvious that Kubernetes uses external IPs by default instead of internal ones! With the default settings, all nodes communicate over the external network interface instead of the LAN, and can hit external bandwidth caps such as the 100 Mbit limits some data centers impose. As a result, before rebuilding the cluster, I had very poor SNI performance because traffic went over the external network rather than the internal LAN!
Relatively new user. On a fresh bare metal installation (v1.17.0), I end up with this:
The machines have a single public IP on net0, and a single internal IP on net1.
Expected behaviour: the public IP ends up listed as the external IP, and the internal IP ends up listed as the internal IP, based on whether the range is within RFC1918, or on which subnet the default route matches against (a sketch of that check follows after this comment).
Workaround: Can't find one - there doesn't seem to be a way to configure this? --node-ip seems insufficiently granular.
As a new user, I am unsure what the consequences of this are. Is it safe to continue with the external IPs listed as internal IPs? Is it safe to run without external IPs listed? What will happen? Not intuitive :-/ Perhaps I'm misunderstanding something fundamental here.
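A sketch of the default-route heuristic mentioned above, assuming iproute2 and an arbitrary probe address:

```bash
# print the source address the kernel would pick for the default route;
# 8.8.8.8 is only a routing-table probe, no packets are actually sent
ip -4 route get 8.8.8.8 | awk '{for (i = 1; i <= NF; i++) if ($i == "src") print $(i + 1)}'
```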