
kubeadm should make the --node-ip option available #203

Closed
nkratzke opened this issue Mar 10, 2017 · 30 comments

@nkratzke commented Mar 10, 2017

FEATURE REQUEST

If kubeadm is used to deploy a K8s cluster, it seems that cloud-provider-internal IP addresses are used by default. However, it would be really helpful (for cross-cloud deployment use cases) to provide an option to set the --node-ip option of the kubelet (see https://kubernetes.io/docs/admin/kubelet/).

So, a kubeadm init call on a node with <public_master_ip> could look like this:

kubeadm init --token=<token> --api-advertise-addresses=<public_master_ip> --node-ip=<public_master_ip>

And a kubeadm join on a node with <public_worker_ip> would look like this:

kubeadm join --token=<token> --node-ip=<public_worker_ip>

Having this, kubeadm could easily be used for cross-cloud-provider deployments. If there are other options I am not aware of, I would like to hear about them, but my search did not turn up a solution (using kubeadm).
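As a quick check of which addresses the nodes have actually registered, recent kubectl releases show them in the wide output:

kubectl get nodes -o wide       # INTERNAL-IP / EXTERNAL-IP columns per node
kubectl describe node <name>    # the Addresses section lists the same detail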

@mongrelion commented May 5, 2017

I'm currently having issues setting up a Kubernetes cluster on DigitalOcean because of this. By default, kubelet will bind to and advertise the IP of the default-gateway interface, which in these cases is the public IP facing Internet traffic. Then, when getting to the point of setting up a pod networking add-on (like Weave), all hell breaks loose because the master's advertise IP is the internal network's IP address but the worker nodes are trying to expose the public one :/

The solution is to update the unit file dropped under /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to add --node-ip=<private_worker_ip>, then reload the systemd units and restart kubelet to make it work.
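A minimal sketch of that edit, assuming KUBELET_EXTRA_ARGS is not already set in the drop-in, and with a placeholder address:

# append an Environment line that kubelet's unit picks up via $KUBELET_EXTRA_ARGS
echo 'Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.2"' \
  | sudo tee -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl daemon-reload    # re-read the edited unit file
sudo systemctl restart kubelet  # restart kubelet with the new flag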

If kubeadm can do this by default by the passing of the option like @nkratzke suggested that would be great!

@mongrelion commented May 8, 2017

I just wanted to confirm that adding the --node-ip=<private-worker-ip> option to the settings in the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file won't fix this issue for the setup that I explained before. Even though the nodes are listening on this interface, Kubernetes keeps on using the default gateway to communicate between nodes, which in this case is the public IP address.

@seralogar commented May 26, 2017

I'm having the same problem. I tried your method and it didn't work for me either. Did you manage to make it work?

@mongrelion commented Jun 9, 2017

@agsergi I was trying to set up a k8s cluster on DigitalOcean with the Private Networking option enabled, which led me to this issue. Disabling that feature did it for me, but I'm not sure if you're in the same boat.

I guess this issue will still be present if you have more than two NICs attached to the machine.

@evocage commented Jun 23, 2017

Guys,
I have two interfaces on my VM: one is for NAT and the second one is for a host-only adapter. By default, kubeadm was taking the default interface's IP (NAT in my case). If you want to use another interface, then use:
$ sudo kubeadm init --apiserver-advertise-address=<host_only_adapter_IP (in my case)>

And it worked for me.

@discordianfish commented Aug 2, 2017

@luxas Why is this tagged as kind/support? Is there any way to use kubeadm with a node IP that isn't the source of the default route?

@jianlianggao commented Aug 10, 2017

@evocage there is no --apiserver-advertise-address=<host_only_adapter_IP> option for my kubeadm. How did you get that option?

Many thanks.

@tuarrep commented Aug 29, 2017

I ran into the same issue on Scaleway.

When I initialized the master I passed --apiserver-advertise-address=<private_net_IP>, but when I want to add a node (kubeadm join --token=<token> <master_private_IP>:6443), the kube-proxy and weave-net pods won't start (error syncing pod).

But when I attach a public IP to my node and reboot it, everything goes well 🤔

Any idea?

@J0s3f commented Sep 26, 2017

Same problem when trying to set up Kubernetes on a VM host with a single IP where only some ports can be forwarded to the VMs. Any workaround to get it to work?

@Karunamon commented Oct 4, 2017

Just wanted to +1. I'm trying to run across a set of DigitalOcean VMs with a private IP on all of them, yet the public-facing address keeps working its way into the cluster somehow.

@jamiehannaford (Member) commented Oct 10, 2017

I managed to get private IPs working by running this on master:

kubeadm init --apiserver-advertise-address=<private-master-ip>

Then adding --node-ip=<private-node-ip> to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, reloading the systemd daemon, restarting kubelet, and running:

kubeadm join --token <token> <private-master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>

@mongrelion What type of master<->node communication is still using public interfaces? I wasn't able to replicate this, so I'd be interested to know if Kubernetes is behaving unexpectedly.

@Karunamon commented Oct 10, 2017

That did it! The magic combination was --apiserver-advertise-address, to ensure the master starts in the right place, plus --node-ip in the kubelet config.

Per this request though, having that --node-ip option directly in kubeadm so the config files are initialized correctly would be helpful for clueless newbies like me trying to spin up clusters :)

@luxas (Member) commented Oct 20, 2017

Thank you @jamiehannaford for that summary. Do we think we should document this more visibly?

@jamiehannaford (Member) commented Oct 20, 2017

@luxas Yeah, I think having this use case explicitly documented would be useful.

@jamiehannaford self-assigned this Oct 20, 2017

@fabriziopandini (Contributor) commented Nov 6, 2017

@jamiehannaford If you also want to add this to the document reshuffle for v1.9, please send me the paragraph to add to kubernetes/website#6103

@jamiehannaford (Member) commented Nov 6, 2017

@fabriziopandini Sure! Done

@ieugen commented Nov 7, 2017

Hi,

I wish to give some feedback as well. I'm trying to use kubeadm to build a secure single-node cluster.
I would like all Kubernetes services to bind to localhost; however, that does not work.

I'm using this command and the cluster is created:

kubeadm init \
	--pod-network-cidr=10.244.0.0/16 \
	--apiserver-advertise-address=127.0.0.1 \
	--apiserver-cert-extra-sans=127.0.0.1,staging.my-server.net

However, /etc/kubernetes/admin.conf and friends contain the public IP address of the master.

  server: https://75.xx.yy.zz:6443

I will try the --node-ip approach, but I would appreciate it if you could help me find a solution for this.

My use case:

I have a beefy machine that I wish to use as a staging environment, and maybe as a production environment for small projects where I don't care about HA. I can SSH in and use kubectl to control the cluster.
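For that workflow, one possible sketch (user and hostname are placeholders; this relies on 127.0.0.1 being among the cert SANs, as in the init command above):

# forward local port 6443 to the API server listening on the master's loopback
ssh -L 6443:127.0.0.1:6443 user@staging.my-server.net

# in another shell, point kubectl at the tunnel with a copy of admin.conf
kubectl --kubeconfig admin.conf --server=https://127.0.0.1:6443 get nodes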

Thanks,

@lloeki commented Jan 23, 2018

I had exactly the same issue as @Mosho1 over here and got to the bottom of it.

I use DO and CoreOS, but this really is related to neither and could happen on other providers and distros. It is also unrelated to DO's private networking being enabled or disabled: I reproduced the issue in both cases.

What happens is that kubelet, as set up by kubeadm, looks at the interfaces and decides to bring its own private subnet to the mix, regardless of the assigned IPs or interfaces available, and wants to do so on the interface it considers "main" (the first one? the WAN one? I don't know). So it picks the WAN one (eth0), sees (or ignores) a public IP, and decides to add a second, private subnet (like 10.19.0.0/16 in my case), probably under the belief that all the nodes' eth0 are on the same link. This address is not visible via ifconfig but readily so via ip addr, and a route is set up as well, but there's zero chance this will fly over the DO network that connects the nodes' eth0.

EDIT: Thanks to @klausenbusk, it seems kubelet picks the anchor IP up under the assumption that it could be useful when it's not. See details below.

The solution is indeed to tell kubelet which IP to use. It can be the public one, or the private one if you use the optional private network.

Here's how I made use of --node-ip. Watch out: this assumes KUBELET_EXTRA_ARGS hasn't already been set in the unit file.

$ DROPLET_IP_ADDRESS=$(ip addr show dev eth0 | awk 'match($0,/inet (([0-9]|\.)+).* scope global eth0$/,a) { print a[1]; exit }')
$ echo $DROPLET_IP_ADDRESS  # check this, just in case
$ echo "Environment=\"KUBELET_EXTRA_ARGS=--node-ip=$DROPLET_IP_ADDRESS\"" >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
$ systemctl daemon-reload
$ systemctl restart kubelet

@jamiehannaford (Member) commented Jan 23, 2018

@lloeki Thanks for the write-up. Would you mind updating the docs, possibly here: https://github.com/kubernetes/website/blob/master/docs/setup/independent/troubleshooting-kubeadm.md

@klausenbusk commented Jan 23, 2018

> So it picks the WAN one (eth0), sees (or ignores) a public IP, and decides to add a second, private subnet (like 10.19.0.0/16 in my case), probably under the belief that all the nodes' eth0 are on the same link.

Are you sure about that? It could just be the anchor IP (compare with curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address).

@lloeki commented Jan 23, 2018

@klausenbusk you're absolutely correct; that was entertaining speculation on my part, sorry! The following is from the master node, now using --node-ip.

So it seems kubelet picks that one up under the assumption that it could be useful when it's not?

$ curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
10.19.0.39
$ ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet yyy.yyy.yyy.yyy/20 brd yyy.yyy.yyy.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.19.0.39/16 brd 10.19.255.255 scope global eth0
       valid_lft forever preferred_lft forever

@lloeki commented Jan 23, 2018

> Would you mind updating the docs

@jamiehannaford Sounds like I can do that :)

lloeki added a commit to lloeki/website that referenced this issue Jan 24, 2018

Add a section about routing errors
A common source of befuddlement when using local hypervisors or cloud
providers with peculiar interface setups, IP addressing, or network
policies, for which kubelet cannot guess the right IP to use.

`awk` is being used so that the example works in CoreOS Container Linux
too.

Requested in kubernetes/kubeadm#203.

@timothysc added the triaged label Jan 31, 2018

lloeki added a commit to lloeki/website that referenced this issue Feb 7, 2018

Add a section about routing errors

k8s-ci-robot added a commit to kubernetes/website that referenced this issue Mar 3, 2018

Add a section about routing errors (#7078)

tehut added a commit to tehut/website that referenced this issue Mar 8, 2018

Add a section about routing errors (kubernetes#7078)

@timothysc removed the triaged label Apr 7, 2018

@timothysc (Member) commented Apr 7, 2018

/assign @liztio

@neolit123 (Member) commented May 10, 2018

I had a look at this, and I think the consensus, even though users request that the --node-ip argument be added to kubeadm, is that modifying the kubelet config and using the parameters as @jamiehannaford suggested here is the advised approach:
#203 (comment)

(or perhaps appending to $KUBELET_EXTRA_ARGS before restarting the kubelet)

Given the decision to move away from adding extra command-line arguments to kubeadm, I think it might be safe to close this issue... unless there are plans to enable this with kubeadm MasterConfig options (somehow?? ...as we rely on the user editing the kubelet config and restarting manually on changes).

edit: or perhaps with the dynamic kubelet config, if that's possible?

All the documentation changes suggested above seem to be merged.

@timstclair @liztio

@timothysc removed this from the v1.11 milestone May 14, 2018

@timothysc (Member) commented May 14, 2018

I'm ok with closing this one.

@stepin commented Jul 23, 2018

Just note that in Kubernetes 1.11, setting KUBELET_EXTRA_ARGS in /etc/systemd/system/kubelet.service.d/20-custom.conf doesn't work anymore: it should be set in /etc/sysconfig/kubelet (which uses a slightly different file syntax).
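For illustration, the sysconfig file takes plain environment-file syntax rather than a systemd Environment="..." stanza (the IP is a placeholder; on deb-based systems the equivalent path is /etc/default/kubelet, as noted below):

# /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--node-ip=10.0.0.2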

@jazoom commented Aug 11, 2018

@stepin I just set up a 1.11 cluster using kubeadm and got this afterwards: cat: /etc/sysconfig/kubelet: No such file or directory

/etc/systemd/system/kubelet.service.d/20-custom.conf also doesn't exist, so I'm not sure what you were doing there.

If what you said is true, it appears the game of config hot potato continues.

I was able to find yet another location for kubelet config: /etc/default/kubelet

For future travellers (for the next week at least), this seems to work:

PRIVATE_IP=10.99.0.0
echo "KUBELET_EXTRA_ARGS=--node-ip=$PRIVATE_IP" > /etc/default/kubelet
systemctl daemon-reload
systemctl restart kubelet

Obviously you'll need to change the IP to whatever yours is on that particular node.

Disclaimer: I'm just checking out Kubernetes, so I can't guarantee this isn't doing something terrible. Though /etc/systemd/system/kubelet.service.d/10-kubeadm.conf does point towards /etc/default/kubelet, so I guess this is the right thing to do.

@geerlingguy commented Sep 7, 2018

@jazoom Thanks for your comment; it finally led me to read the systemd unit file more closely. I thought I was going crazy: I could bring up the same config in 1.10 and everything worked, then bring up the same config in 1.11 and the custom --node-ip I was setting was not applied at all. Switching to adding the extra args in /etc/default/kubelet fixed the issue for me.

@jazoom commented Sep 7, 2018

@geerlingguy You're welcome.

At least it wasn't a case of "it works every second time I bring up a 1.11 cluster". Those irreproducible issues will really make you go crazy.

@jayakody commented Mar 30, 2019

Just ran into this with kubeadm 1.13. Fixed it using the following:

1. Add --node-ip to /var/lib/kubelet/kubeadm-flags.env:

[root@Node-18121 ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --node-ip=10.10.10.1

2. Restart kubelet:

systemctl daemon-reload && systemctl restart kubelet