worker node fails to join the multi-master HA cluster #178
I am having similar issues. I had a similar problem with k3s running on VirtualBox nodes, and setting the interface to eth1 instead of the default eth0 fixed it. Is there a way in the k0sctl configuration to override auto-detection and tell it which interface to use? I couldn't find anything.

INFO ==> Running phase: Gather host facts
INFO [ssh] 192.100.0.41:22: discovered eth0 as private interface
INFO [ssh] 192.100.0.41:22: discovered 10.0.2.15 as private address

As you can see, VirtualBox (Vagrant) sets up both a NAT interface and a bridged interface when you configure a public network:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 84714sec preferred_lft 84714sec
inet6 fe80::a00:27ff:fe8d:c04d/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.100.0.41/24 brd 192.100.0.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe35:dd15/64 scope link
valid_lft forever preferred_lft forever
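If I remember correctly, k0sctl's host spec lets you override the detection with `privateInterface` and `privateAddress`. A minimal sketch, assuming those fields are available in your k0sctl version (worth double-checking):

```yaml
# Sketch: point k0sctl at the bridged interface (eth1) instead of the
# NAT interface (eth0) it auto-detects on Vagrant/VirtualBox boxes.
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-vagrant
spec:
  hosts:
  - role: controller
    privateInterface: eth1          # assumed override field
    privateAddress: 192.100.0.41    # assumed override field
    ssh:
      address: 192.100.0.41
      user: vagrant
      keyPath: ~/.vagrant.d/insecure_private_key
```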
@justmeandopensource Could you check that the token file has the proper address set on the worker nodes? IIRC it's at
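For what it's worth, a k0s join token is a base64-encoded, gzip-compressed kubeconfig, so you can decode it (for example with `base64 -d < token | gunzip`) and inspect which address it points at. The decoded content has roughly this shape; the values below are illustrative, not taken from this cluster:

```yaml
# Rough shape of a decoded k0s worker join token. If "server" carries
# a NAT-only address such as 10.0.2.15, the worker tries to join via
# an address it cannot actually reach.
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://10.0.2.15:6443
    certificate-authority-data: <base64 CA data>
  name: k0s
```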
@darktempla not sure what you mean by this:

All of the components that listen for external traffic (e.g. kube-api) do actually listen on all interfaces (0.0.0.0).
@jnummelin - You're correct, there is a

I have created the following gist in case others would like to follow my example of testing k0s on a 2-node (1 master, 1 worker) Vagrant/VirtualBox setup: https://gist.github.com/darktempla/439d04a5da67748e99ca7c4fd9e87994
Based on the last comments, I believe this is solved.
Unfortunately it still does not work. Here is the setup:

k0sctl-multi.yaml

Vagrantfile

haproxy-config

/home/urdeath/.k0sctl/cache/k0sctl.log
@viktormohl - Where are you running the k0sctl command from? If it's from your VM host (the physical machine), then it not working seems reasonable: you have configured a private Vagrant network, and the "private" subnet you are referencing will not be reachable from your local machine. Vagrant provides SSH access to the machines through port mapping on non-standard SSH ports, for example 2222 (host) -> 22 (guest), which is what `vagrant ssh` goes through.

However, I would just go with a public network; it is an easier configuration, since the VMs and your host are basically on the same network. Here is my gist with this configuration: https://gist.github.com/darktempla/439d04a5da67748e99ca7c4fd9e87994

Your other option, if the network must stay private, is to SSH into one of the VMs and run k0sctl from inside the private network; I suspect that should work as expected. Hopefully this information helps give you some ideas to try.
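To illustrate the difference, here is a minimal Vagrantfile sketch of the bridged ("public") network approach; the box name and IP are placeholders:

```ruby
# A bridged "public_network" puts the guest on the same LAN as the
# host, so k0sctl running on the host can reach it directly over SSH.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu2004"                    # placeholder box
  config.vm.network "public_network", ip: "192.168.1.41"  # bridged, host-reachable
end
```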
OK, it seems I have something similar. I have the following nodes:

The host OS and all these nodes are on the same network, pingable, with all ports open. And this config:

So nothing really special. But with

But everything works just fine without

But the section

can be added later and it works just fine:

So that looks like a bug; I think the cluster should be created correctly with the initial config alone. My haproxy config is exactly the same as in the official documentation:
This haproxy config is working just fine:
Glad you've figured it out. Seems like the backend ports in your first example weren't quite right. The docs specify them correctly.
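For reference, the load balancer layout in the k0s HA docs pairs each frontend with a backend on the same port: 6443 (kube-api), 8132 (konnectivity), and 9443 (controller join API). A sketch of that shape, with placeholder controller IPs:

```
# Each backend must target the same port its frontend binds.
frontend kubeAPI
    bind :6443
    mode tcp
    default_backend kubeAPI_backend

backend kubeAPI_backend
    mode tcp
    server controller1 172.16.16.101:6443
    server controller2 172.16.16.102:6443
    server controller3 172.16.16.103:6443

# ...the same frontend/backend pattern is repeated for :8132 and :9443
```

A mismatched backend port (e.g. a :6443 frontend pointing at backends on some other port) can produce exactly the failure mode seen in this thread: the controllers come up, but workers cannot join through the load balancer.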
True. Copy-paste is my main enemy. |
Hi,
I am using VirtualBox virtual machines for this cluster. This is a multi-master HA setup. All 3 control planes seem to be configured fine, but the worker node fails to join the cluster. This only happens when I use an HA setup with an HAProxy external load balancer.
Note: On a single-master multi-node setup (1 master, 2 workers), everything works fine.
Here is my k0sctl.yaml:
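Not the original file, but a minimal sketch of the shape such a multi-controller k0sctl.yaml takes; the controller IPs, SSH details, and the HAProxy external address below are placeholders:

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-ha-cluster
spec:
  hosts:
  - role: controller
    ssh:
      address: 172.16.16.101   # placeholder controller IP
      user: root
      keyPath: ~/.ssh/id_rsa
  - role: controller
    ssh:
      address: 172.16.16.102
      user: root
      keyPath: ~/.ssh/id_rsa
  - role: controller
    ssh:
      address: 172.16.16.103
      user: root
      keyPath: ~/.ssh/id_rsa
  - role: worker
    ssh:
      address: 172.16.16.104   # the worker that fails to join
      user: root
      keyPath: ~/.ssh/id_rsa
  k0s:
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: ClusterConfig
      spec:
        api:
          externalAddress: 172.16.16.100   # placeholder HAProxy address
          sans:
          - 172.16.16.100
```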
Output of the k0sctl apply command:

Log entries from the k0sworker service on the failing VM (172.16.16.104):

More errors from the k0sworker service:
Thanks,
Venkat