Out-of-tree MPTCP uses only 8 interfaces out of 16 #406
Mainline kernel's MPTCP config:
Hello, I see that you are using the Fullmesh PM. This PM has a hard limit: https://github.com/multipath-tcp/mptcp/blob/mptcp_v0.95/net/mptcp/mptcp_fullmesh.c#L23 Is your goal to use more than 8 addresses per connection? We already talked about that in the past and it was hard for us to find a realistic use case for so many subflows :-) You can check the addresses picked by the PM by looking at
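Roughly speaking, the ceiling comes from the PM tracking usable addresses in fixed-width bitmasks. Here is a minimal userspace toy (illustrative only, not the kernel code; the names and exact layout are assumptions) showing why an 8-bit mask tops out at 8 addresses:

```c
#include <stdint.h>
#include <stdio.h>

#define PM_MAX_ADDR 8            /* analogous to the PM's hard limit */

static uint8_t loc_bits;         /* one bit per tracked local address */

/* Return the slot assigned to a new address, or -1 if the mask is full. */
static int pm_add_addr(void)
{
	for (int i = 0; i < PM_MAX_ADDR; i++) {
		if (!(loc_bits & (1u << i))) {
			loc_bits |= 1u << i;
			return i;
		}
	}
	return -1;               /* all 8 slots taken: the address is ignored */
}

int main(void)
{
	for (int n = 1; n <= 16; n++) {
		int id = pm_add_addr();
		printf("address %2d -> %s\n", n,
		       id >= 0 ? "tracked" : "dropped (bitmask full)");
	}
	return 0;
}
```

With 16 interfaces configured, everything past the eighth slot is simply never tracked, which lines up with only 8 interfaces carrying traffic.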
Yup.
Yeah, I admit my use-case won't be the primary example of MPTCP.
Yup, it matches it.
Thanks for the pointer. The throughput of SSH increased linearly, now reaching 54.0 MiB/s. I can see why the limit of 8 was put in place.
I can understand that 8 is a reasonable limitation. For those who are interested though, I'll leave the commit here: arter97/x86-kernel@443fcdf. Thanks for the help!
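For anyone following along, the general shape of such a change (a sketch only, with approximated names; this is not the actual diff from that commit) is to widen the bitmask and the address-array bound together:

```c
#include <stdint.h>

/* Sketch only: the names and layout below are assumptions for illustration,
 * not copied from arter97/x86-kernel@443fcdf or the kernel sources. */
#define PM_MAX_ADDR 16                    /* previously 8 */

struct pm_addr_set {
	uint16_t bits;                    /* previously uint8_t: one bit per address */
	struct {
		uint32_t addr;            /* IPv4 address */
		uint8_t  id;              /* address id announced to the peer */
	} local[PM_MAX_ADDR];             /* array bound matches the mask width */
};
```

The point is that every place storing one bit per address has to grow in lockstep; otherwise the extra addresses are silently ignored, which is exactly the behaviour reported here.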
Thank you for having tried and shared the modified code! It can help others :) By chance, could you share your use case? Maintaining more than 8 addresses, with possibly 8x8 subflows, that's a lot :-)
Hey, sorry for the late reply, got caught up with work recently. I don't think I can share the details of the company's internal networking infrastructure, but to make an analogy: we're in the odd position of being able to get as many IP addresses from the ISP as we want, but each one is limited to < 50 Mbps. We know for a fact that the total switching capacity well exceeds the combined throughput of all those addresses, so we deployed an MPTCP setup that relays traffic to a SOCKS5 proxy server on an external, unthrottled machine to get faster Internet access. We're currently using WireGuard with MPTCP, microsocks and redsocks2 for the entire setup.
I see why you need to use more addresses, thank you for the explanation, an interesting use-case! And nice to see it works well with all these proxies! Can we force WireGuard to use TCP? Or I guess MPTCP is in a tunnel managed by WireGuard.
Yeah, MPTCP is living inside the WireGuard tunnels. I haven't conducted an experiment yet to see which is better: "multiple WireGuard interfaces with MPTCP and an unencrypted microsocks proxy" or "unencrypted interfaces with MPTCP and an encrypted SOCKS5 proxy (e.g., ssh or shadowsocks)". I opted for WireGuard as it naturally gets parallelized across multiple CPU cores, but who knows, maybe the latter can outperform it ¯\_(ツ)_/¯ I should experiment with that sooner or later.
Just leaving an update on our use-case here :) We settled on WireGuard + MPTCP + shadowsocks-rust (without encryption: plain), and it has been rock solid for months now. If we don't use WireGuard, something goes wrong with shadowsocks-rust and TCP connections randomly hang, which I don't believe is due to either MPTCP or shadowsocks-rust itself.
Thank you for sharing this, it's always useful from our development point of view to know how MPTCP is used :)
@arter97 what version of the kernel and mptcp do you use in your setup? |
@starkovv I use a custom kernel based on v5.4 with the mptcp_trunk branch merged. The notable change is arter97/x86-kernel@443fcdf, as mentioned in the above comment.
This is more or less the setup I have at home. I can get as many 100 Mbps links as I want from the ISP, so I plan to use 10 subflows to get a 1 Gbps connection. I use WireGuard to carry all the non-TCP traffic over the most stable link (especially helpful for encrypting DNS traffic and for delay-sensitive use cases). The reason I use a little-known Chinese protocol is that my home router cannot handle high throughput with encryption. And where I live, I'm pretty sure the ISP uses their firewall to track SOCKS traffic, so I believe using a lesser-known protocol like vless keeps me under the radar.
Possibly related to #128, but the description and comments don't quite seem to match what I'm seeing.
We recently had the opportunity to upgrade the server environment from 8 Ethernet ports to 16, but MPTCP doesn’t scale beyond 8 interfaces.
As the server has real users/clients, it's quite hard to conduct experiments on it, so I created 2 VMs to replicate the issue. The same issue happens on the VMs as well.
VM 1 has 17 virtio NICs (eth0-16), each throttled to 30 Mbps.
VM 2 has 1 virtio NIC (eth0), unthrottled.
VM 1:
VM 2:
VM 1 initiates an MPTCP connection via SSH to VM 2:
For some reason, MPTCP uses eth0,1,10,11,12,13,14,15 but nothing else (checked via ifconfig's TX packet counters).
The issue happens on both mptcp_v0.95 (Linux v4.19) and mptcp_trunk (Linux v5.4). Linux v5.10's MPTCP v1 uses only 1 interface (eth0) and the performance is capped at 3.41 MiB/s.
Here are the relevant kernel configs:
Here are the logs after turning on mptcp_debug:
VM 1:
VM 2:
Here’s libvirt definition for both VMs, in case you guys want to try this setup:
VM 1: https://pastebin.com/VeWCLmac
VM 2: https://pastebin.com/NXXmz9tj
Thanks in advance :)