unable to connect agent to master #1523
+1, ran into the same issue. We have a k3s server running on a GCE instance. We attempted to connect a remote node to the server using its public IP (35.228.x.x). k3s-agent created a load balancer from 127.0.0.1:41933 to 35.228.x.x:6443, but when trying to connect to the proxy it uses the instance's private IP (10.50.x.x), to which the remote node has no connectivity.
Ran into the same issue. This is how I solved it.
Create the cluster master node:
Get the token on the master node:
Join the workers to the cluster:
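A minimal sketch of those three steps, assuming the standard k3s install script and a master reachable at a public IP (the placeholders below are illustrative, not the commenter's exact commands):

```shell
# 1. Create the cluster master node, advertising its external IP
curl -sfL https://get.k3s.io | sh -s - server --node-external-ip <master_public_ip>

# 2. Get the join token on the master node
sudo cat /var/lib/rancher/k3s/server/node-token

# 3. Join a worker to the cluster using that token
curl -sfL https://get.k3s.io | K3S_URL=https://<master_public_ip>:6443 \
  K3S_TOKEN=<token> sh -
```

The key detail, per the rest of this thread, is advertising an address the workers can actually reach rather than whatever interface the default route points at.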
Yup, configuring the master node's external IP worked for me.
I am facing the same issue.
On the agent node:
@md2119 don't use Use
From a referenced commit message:

> Welcome to k3s! Instructions might be out of date. The instructions provided work great if you're installing k3s locally or on a cloud instance. They fail hard when you're installing it on VBox with Vagrant, because the CNI, `flannel`, assumes that all network comms will go out from the default interface, which is the NAT bridge on VBox VMs. The end result is that no nodes ever join the cluster and you have no idea why unless you run `journalctl` to peek into what's going on. Kubernetes continues to be a powerful pain in the ass.
> Sources:
> - https://github.com/michaelc0n/k3s/blob/master/Vagrantfile
> - k3s-io/k3s#1523
Same issue. It just picks the wrong interface. Not very encouraging that this issue is still open almost a year later :-/
@tcurdt how would you suggest we resolve this issue? It is a support request, not a defect in the software. Nodes need to be properly configured to support the environment they are deployed in. Absent any cloud-provider-specific integrations, this includes telling them what their public IP address is, if it differs from the IP assigned to the interface. I provided an example of how to do this in the post right above yours. Are you just here to me-too, or would you care to provide any information on how your environment is configured, what errors you've encountered, and what you've tried so far?
IMO it is a defect in the install script. It should detect that there is more than one option, report that, and not just use the first interface. On top of that, I consider this also a "defect" in the documentation. That said, I am happy to provide more details and to help resolve this. In fact it is super easy to reproduce, as this is a Vagrant+VirtualBox setup. Here is a snippet of the most relevant part:
As for the above example: it sure helped with the joining. I don't see any node metrics in Lens yet, but I need to check whether this could be due to a different reason.
The install script doesn't detect much of anything with regard to network configuration; it's already more complicated than we would like. Kubernetes and Flannel attempt to guess the best address by looking at what interface is associated with the default route. There are flags to override interface selection, as well as to override both the internal and external IPs if you need to do so. You can provide those flags to the install script and it will pass them through to the systemd unit. In short, there is always some point of complexity after which you are going to have to tell the system what you want. We're glad to help you navigate that, if you show up looking for help instead of just to complain.
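As a sketch of what passing those overrides through the install script can look like, using the real k3s flags `--node-ip`, `--node-external-ip`, and `--flannel-iface` (the interface name `enp0s8` and all addresses here are placeholders for your environment):

```shell
# Server: advertise the externally reachable IP and pin flannel to the
# host-only/bridged interface instead of the default-route (NAT) one.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
  --node-ip 192.168.50.10 \
  --node-external-ip 35.228.0.10 \
  --flannel-iface enp0s8" sh -

# Agent: point at the server's reachable address and apply the same
# interface/IP overrides on its own side.
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.50.10:6443 \
  K3S_TOKEN="<token>" \
  INSTALL_K3S_EXEC="--node-ip 192.168.50.11 --flannel-iface enp0s8" sh -
```

The install script writes `INSTALL_K3S_EXEC` into the generated systemd unit, so these flags persist across service restarts.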
Not sure that check is the straw that breaks the camel's back, but then it's even more important to point this out in the documentation.
It would be great to mention these details here. Would you agree?
I am also happy to help. I can even offer a Vagrantfile to reproduce this. Not sure that counts as "complaining".
Hello, I ran into a similar issue. Note, ens33 is the first address to show up. Further, I have a default route via ens33. I have a Raspberry Pi worker node connected to the local switch via ethernet (eth0 with a static 192.168.100.2/24 addr). It also has Wi-Fi access to the Internet.
In a naive attempt, on the k3s server daemon I set the fields:
On the worker node I set K3S_URL=https://192.168.100.1:6443
On the worker node, looking at:
Then, on the k3s server I set:
However, on the worker node I still see:
My guess is that the k3s master is advertising/binding to the wrong address!
A little update. I got the following in:
After a while:
Where is k3s picking up the 192.168.80.129 address from?
Anyone who got Raspberry Pis and straight up jumped into installing k3s and is having this issue: please learn from me, a long-time Linux user who didn't run sudo apt update and sudo apt upgrade before proceeding.
If you're running a traefik reverse proxy as your external load balancer in an HA config, this is what did the trick for me:

add the loadbalancer server port label:

add an entrypoint:

```yaml
entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"
  k3s:
    address: ":6443"
```

add a TCP router and service:

```yaml
tcp:
  services:
    k3s:
      loadBalancer:
        servers:
          - address: "10.240.0.11:6443"
          - address: "10.240.0.12:6443"
  routers:
    k3s:
      entryPoints:
        - "k3s"
      rule: "HostSNI(`*`)"
      tls:
        passthrough: true
      service: k3s
```
I still can't get agents to connect to servers. Servers happily connect to each other, but agents don't. I tried it with:
Edit: To clarify, when I run:
@lkj4 what do the logs on your agents say?
@brandond this tip was good. I think agents don't have:
Ok, I got the following a gazillion times...
@woniupapa can you provide any relevant information, such as logs from the agent? Note that you cannot run kubectl from the agent, as it does not host the Kubernetes control-plane and does not have a copy of the k3s admin kubeconfig.
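Gathering the agent-side logs asked for here usually looks like this, assuming the default systemd units created by the install script:

```shell
# On the agent node: follow the k3s agent service logs
journalctl -u k3s-agent -f

# On the server node the unit is simply "k3s"
journalctl -u k3s -f
```

Connection-refused or certificate errors in this output are typically what pinpoint the wrong advertised address discussed throughout this thread.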
The above comment fixed the problem! I will close this issue.
I've got the same problem with Ubuntu 21.10 Server on RPi.
@nbbn you're probably best off opening a new issue describing what specifically you're having problems with; this one has become a bit of a dumping ground for folks with unrelated and/or poorly described issues.
Version:
k3s version v1.17.3+k3s1 (5b17a17)
Describe the bug
unable to join workers to the cluster
To Reproduce
install k3s w/ default options on nodeA
install k3s agent on nodeB using
sudo /usr/local/bin/k3s agent -s https://{my_server_ip}:6443 -t <token from "/var/lib/rancher/k3s/server/node-token" on master node> --with-node-id 1
Expected behavior
node B joins the cluster
Actual behavior
the node is not added; it cannot access the local proxy to the master API
Additional context
No firewalls are on the system or between the two nodes (virtualized nodes, same subnet, directly exposed to LAN)