Distributed hybrid and wireguard-native: wrong interface internal IP? #7355
Complementary information:
Another example of the output of
Thanks a lot!
The wireguard flannel backend only handles cluster networking: traffic between pods and services on different nodes. It does not solve the problem of allowing pods to reach nodes across isolated networks. Things like Prometheus or metrics-server that attempt to scrape nodes directly will not work, since traffic leaving the cluster toward a node IP is not handled by the CNI. I would take a look at #7353 - it addresses many of the challenges you're running into.
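To make that distinction concrete, here is a rough shell sketch (the service name hello-world matches the deployment mentioned later in this thread; the node IP placeholder and pod names are hypothetical): pod-to-service traffic rides the WireGuard mesh, while pod-to-node traffic does not.

```shell
# Pod-to-service traffic is encapsulated by flannel's wireguard-native backend,
# so this should succeed even when nodes sit on isolated networks:
kubectl run probe --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://hello-world.default.svc.cluster.local

# Traffic from a pod to a node's own IP (e.g. scraping the kubelet on :10250)
# leaves the cluster network and is NOT handled by the CNI, so across isolated
# networks this connection attempt will time out:
kubectl run probe2 --rm -it --restart=Never --image=busybox -- \
  nc -zv -w 3 <node-internal-ip> 10250
```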
Can I assume from your reply that my
It looks good at first glance. You didn't mention any problems with pod-to-pod or pod-to-service connectivity, so I'm assuming the wireguard-native flannel backend is otherwise working properly?
Yes, that seems to work without any problem: I can access a hello-world service that brings me to a pod on my remote node (debian-test) as well as a pod on my master node (k3s-master). Thank you very much for your valuable information and the quick reply!
I can confirm that your config is totally correct. Your node IP remains the private IP. If you execute
Thanks a lot. For other people having the same problem: what I needed to specify on the master node and each worker node was the --node-ip parameter. Forcing the WireGuard VPN IP on each node solved my problems.
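For reference, a minimal sketch of the fix described above. The flags --flannel-backend, --node-ip, --server, and --token are real k3s options; the WireGuard addresses 10.222.0.1 (server) and 10.222.0.2 (agent) and the token placeholder are hypothetical and must be replaced with your own values.

```shell
# Server (master): advertise the WireGuard VPN IP as the node's internal IP.
curl -sfL https://get.k3s.io | sh -s - server \
  --flannel-backend=wireguard-native \
  --node-ip=10.222.0.1

# Agent (worker): likewise force the VPN IP, and reach the server over the tunnel.
curl -sfL https://get.k3s.io | sh -s - agent \
  --server=https://10.222.0.1:6443 \
  --token=<token> \
  --node-ip=10.222.0.2
```

After restarting k3s with these flags, the INTERNAL-IP column of kubectl get nodes -o wide should show the VPN addresses instead of the private LAN addresses.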
Wonderful to hear we were able to help! Closing for now since there's not a clear bug/request here. |
Environmental Info:
K3s Version:
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
K3s-master configuration:
K3s-node1 configuration:
Debian-test configuration:
Describe the bug:
--flannel-backend=wireguard-native should create a mesh VPN network between nodes and use that network for internal communication. kubectl get nodes -o wide should show the WireGuard IP. wg show on every node seems to confirm that the master can communicate with the workers, and the workers can ping the master, using the VPN mesh network.

Steps To Reproduce:
hello-world, to confirm that I can reach a pod on debian-test using a service or ingress.

Expected behavior:
kubectl get nodes -o wide on master

Actual behavior:
kubectl get nodes -o wide on master

Additional context / logs:
Thanks a lot!