MTU settings allow much better performance #2

Closed
lorenz opened this issue Feb 1, 2016 · 2 comments

@lorenz

commented Feb 1, 2016

When I artificially bump up the MTU on the interface to close to the maximum (values above 60000 seem to lower throughput again), I get ~2% more throughput (950 Mbps) than normal iperf wire speed (935 Mbps), at about 10% single-core load on an AES-NI-capable CPU with encryption enabled. This was tested over a Gigabit LAN (MTU 1500, so no jumbo frames or anything special). I also tested it over a 1 Gbps fiber internet connection between two servers in the same city. The results are pretty much the same: around 2% better than wire speed at a bit more than 10% single-core load, though other workloads on that server might account for the additional load. Tests were run on hardware with (3/4)770(K) CPUs and Intel NICs.
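For reference, the bump itself is just a plain MTU change on the VPN's TUN interface; a minimal example, assuming the interface is called vpncloud0 (substitute your actual device name):

```sh
# Raise the MTU of the VPN interface far above the physical 1500;
# the kernel fragments the resulting large UDP packets on the wire.
ip link set dev vpncloud0 mtu 60000
```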

I believe this is because it saves a lot of context switches between the kernel and userspace, from about 90'000/s to around 2'000/s. It also lowers the overhead in vpncloud itself, since it has to process fewer packets. The Linux kernel is probably very efficient at fragmenting the resulting large packets, or maybe the Intel NIC hardware-accelerates this.
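As a rough sanity check on those numbers (assuming one packet per syscall, using the figures from my test above):

$$\frac{950 \times 10^6\ \text{bit/s}}{1500\ \text{B} \times 8} \approx 79{,}000\ \text{packets/s} \qquad \text{vs.} \qquad \frac{950 \times 10^6}{60{,}000 \times 8} \approx 2{,}000\ \text{packets/s}$$

which matches the order of magnitude of the observed drop from ~90'000/s to ~2'000/s.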

Do you see any downside to doing that or is that a good idea?

@dswd

Owner

commented Feb 2, 2016

Setting the MTU higher than that of the physical medium causes IP to fragment each inner packet into several outer packets and reassemble them on the receiving side. The advantage is that VpnCloud has to process fewer packets and therefore uses less CPU (most of it will be memcpy) and incurs fewer context switches. It also saves some bytes on the wire, since fewer bytes are spent on VpnCloud and UDP headers and only a small additional IP fragmentation header is added. I think in total that is the 2% you are seeing.
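To put a rough number on the header savings (assumptions: IPv4 with a 20 B header, an 8 B UDP header, and writing $h$ for the VpnCloud header size, which I leave symbolic here): without the MTU bump, every 1500 B outer packet carries $28 + h$ bytes of encapsulation headers; with it, each fragment only carries its own 20 B IP header, while the UDP and VpnCloud headers are paid once per large inner packet. So the saving is roughly

$$\Delta \approx \frac{(28 + h) - 20}{1500} = \frac{8 + h}{1500}$$

per outer packet, which for a header size in the tens of bytes lands in the 1-2% range, consistent with 935 to 950 Mbps.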
The downside is that if one of the outer packets gets lost on the wire, the whole inner packet is lost. If your MTU is 40 times higher than normal, 40 outer packets will be sent for one inner packet, so your packet loss rate will be about 40 times higher than normal (e.g. roughly 4% instead of 0.1%). It will also negatively affect your delay distribution.
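Assuming independent fragment losses at a per-fragment rate $p$, an inner packet split into $n$ fragments only survives if all $n$ arrive:

$$P_{\text{inner lost}} = 1 - (1 - p)^n \approx n\,p \quad \text{for small } p,$$

e.g. $1 - 0.999^{40} \approx 3.9\%$ for $p = 0.1\%$ and $n = 40$.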
If the delay distribution and the loss ratio matter to you, this is a bad idea. If your line is pretty good and you just care about throughput, I think it makes sense to increase the MTU.

@lorenz

Author

commented Feb 2, 2016

Thanks for your reply. I don't really have any packet loss, as everything in between is a 10 Gbps backbone with two 1 Gbps access ports, so this should be fine for my use case. Thanks for your great work on VpnCloud!
