Fixed wireguard MTU and added windows iface func #1567

Merged
merged 1 commit into flannel-io:master from wireguard-mtu on May 26, 2022

Conversation

rbrtbnfgl
Contributor

Description

  • Fixed the MTU of the flannel wireguard interfaces to match that of the external interface (see the sketch below).
  • Added GetInterfaceBySpecificIPRouting on Windows.
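
As a rough illustration of the first point, here is a minimal sketch, not the PR's actual code, of setting a wireguard link's MTU to the external interface's MTU with the vishvananda/netlink package that flannel uses on Linux; the helper name is hypothetical:

```go
package sketch

import (
	"fmt"
	"net"

	"github.com/vishvananda/netlink"
)

// setWireguardMTU is a hypothetical helper, not the PR's actual code:
// it copies the external interface's MTU onto the wireguard link, which
// is the behavior this PR introduces for flannel-wg.
func setWireguardMTU(wgName string, extIface *net.Interface) error {
	link, err := netlink.LinkByName(wgName)
	if err != nil {
		return fmt.Errorf("failed to look up %s: %w", wgName, err)
	}
	return netlink.LinkSetMTU(link, extIface.MTU)
}
```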

Todos

  • Tests
  • Documentation
  • Release note

Release Note

None required

Signed-off-by: rbrtbnfgl <roberto.bonafiglia@gmail.com>
rbrtbnfgl merged commit c9d01ce into flannel-io:master on May 26, 2022
rbrtbnfgl deleted the wireguard-mtu branch on May 26, 2022 08:24
@sclem

sclem commented Jul 22, 2022

I believe these MTU changes are incorrect. The wireguard MTU needs to account for the per-packet encapsulation overhead relative to the host interface. This change makes the flannel-wg interface set its MTU to 1500 (the default is 1420). I am using k3s with --flannel-backend=wireguard-native, and the latest update (v1.23.8+k3s2) broke network connectivity between my nodes.
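
For reference, the 1420 default mentioned above follows directly from WireGuard's worst-case per-packet overhead; a small worked example (the 80-byte breakdown is WireGuard's own allowance, assumed here rather than taken from flannel's code):

```go
package main

import "fmt"

func main() {
	// WireGuard's worst-case per-packet overhead on top of the host MTU:
	// 40-byte IPv6 header + 8-byte UDP header + 32 bytes of WireGuard
	// data-message framing (type, receiver index, counter, auth tag).
	const encapOverhead = 40 + 8 + 32 // 80 bytes

	hostMTU := 1500
	fmt.Println(hostMTU - encapOverhead) // 1420, the upstream default
}
```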

@rbrtbnfgl
Contributor Author

rbrtbnfgl commented Jul 22, 2022

It shouldn't break anything, because the MTU of the pod's NIC is correctly computed with the overhead here: https://github.com/flannel-io/flannel/blob/master/backend/wireguard/wireguard_network.go#L72. The pods therefore generate packets of the right size even if the MTU of flannel-wg is higher.
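
A hedged sketch of the distinction being made here; the type, field, and overhead constant are illustrative assumptions, and the linked line is the authoritative version:

```go
package sketch

// network stands in for the wireguard backend's network object; the real
// code lives in backend/wireguard/wireguard_network.go.
type network struct {
	extIfaceMTU int // MTU of the external interface
}

// MTU returns the value advertised to pod interfaces: the external MTU
// minus the encapsulation overhead, so packets built by pods still fit
// inside a wireguard-encapsulated frame on the external interface, even
// when the flannel-wg link itself is configured with a larger MTU.
func (n *network) MTU() int {
	const encapOverhead = 80 // assumed, as in the example above
	return n.extIfaceMTU - encapOverhead
}
```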

@sclem

sclem commented Jul 22, 2022

Here is some more information:

I have a VPS agent node and a master node at home. Both have gateway interfaces with an MTU of 1500. If I run an iperf test over the cni0 interface while using wireguard-native with the flannel-wg MTU at 1500, I get terrible speeds between nodes (0-50 kbps, sometimes zero). Dropping the MTU to 1420 restores near-native speeds. I am not an expert, but I do not think the wireguard interface MTU can be the same as the host MTU.

@rbrtbnfgl
Contributor Author

rbrtbnfgl commented Jul 22, 2022

OK, so you are running iperf on the machine itself and not from a pod. The MTU is set that way because the interface was designed to transport pod traffic, and with the previous setting it didn't work when the external MTU was greater than 1500. I can add the fixed overhead to flannel-wg as well.
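
A minimal sketch of what adding the fixed overhead to flannel-wg could look like, again with the netlink package, a hypothetical helper name, and the assumed 80-byte overhead; the actual change is whatever landed in the follow-up PR, not this snippet:

```go
package sketch

import (
	"fmt"
	"net"

	"github.com/vishvananda/netlink"
)

// setWireguardMTUWithOverhead subtracts the assumed 80-byte encapsulation
// overhead instead of copying the external MTU verbatim, so encapsulated
// packets never exceed what the host interface can carry.
func setWireguardMTUWithOverhead(wgName string, extIface *net.Interface) error {
	const encapOverhead = 80
	link, err := netlink.LinkByName(wgName)
	if err != nil {
		return fmt.Errorf("failed to look up %s: %w", wgName, err)
	}
	return netlink.LinkSetMTU(link, extIface.MTU-encapOverhead)
}
```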

@sclem

sclem commented Jul 22, 2022

Yes, I think that will do it! For more context, here is the behavior I was specifically seeing over wireguard-native with an MTU of 1500:

I created a librespeed deployment and a metallb LoadBalancer (LAN IP) service on my cluster, with a nodeSelector pinning it to the cloud VPS node. This way I can run a GUI speed test between my cloud node and my private LAN, over the wireguard-native tunnel. I could download, but my upload speed dropped to zero. That's what prompted me to track this down.

@rbrtbnfgl
Contributor Author

#1620 should fix it
