MTU settings allow much better performance #2

Closed
lorenz opened this issue Feb 1, 2016 · 4 comments

@lorenz lorenz commented Feb 1, 2016

When I artificially bump up the MTU on the interface close to the maximum (though going above 60000 seems to lower throughput), I get ~2% more throughput (950 Mbps) than plain iperf wirespeed (935 Mbps), at 10% single-core load on an AES-NI-capable CPU with encryption enabled. This was tested over a Gigabit LAN (MTU 1500, so no jumbo frames or anything special). I also tested it over a 1 Gbps fiber internet connection between two servers in the same city. The results are pretty much the same: around 2% better than wirespeed at a bit more than 10% single-core load, though there is other stuff running on that server which might cause the additional load. Tests were run on hardware with (3/4)770(K) CPUs and Intel NICs.
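
For reference, bumping the interface MTU boils down to something like the sketch below (using iproute2 from Python; the interface name `vpncloud0` is just a placeholder):

```python
# Sketch: raise the MTU of the tunnel interface via iproute2 (needs root).
# The interface name "vpncloud0" and the value 60000 are placeholders.
import subprocess

def set_mtu(interface: str = "vpncloud0", mtu: int = 60000) -> None:
    """Run `ip link set dev <interface> mtu <mtu>`."""
    subprocess.run(["ip", "link", "set", "dev", interface, "mtu", str(mtu)],
                   check=True)

if __name__ == "__main__":
    set_mtu()
```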

I believe this is because it saves a lot of context switches between the kernel and userspace, from around 90,000/s to around 2,000/s. This obviously also lowers the overhead in vpncloud because it needs to process fewer packets. The Linux kernel is probably really efficient at fragmenting the resulting large packets, or maybe Intel hardware-accelerates this.
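
A rough back-of-the-envelope check of those packet rates (payload-only math at the measured goodput, ignoring header overhead; each packet can mean more than one context switch):

```python
# Rough packet rates at ~935 Mbps goodput for two tunnel MTUs
# (payload-only math, ignoring header overhead).
GOODPUT_BPS = 935e6

for mtu in (1500, 60000):
    pkts = GOODPUT_BPS / (mtu * 8)
    print(f"MTU {mtu:>5}: ~{pkts:,.0f} packets/s through the TUN device")

# MTU  1500: ~77,917 packets/s
# MTU 60000: ~1,948 packets/s
```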

Do you see any downside to doing that or is that a good idea?

@dswd dswd (Owner) commented Feb 2, 2016

Setting the MTU higher than the physical medium allows causes IP to fragment each inner packet into several outer packets and reassemble them at the receiving side. This has the advantage that VpnCloud has to process fewer packets and therefore uses less CPU (most of it will be memcpy) and incurs fewer context switches. It will also save some bytes on the wire, as fewer bytes are used for VpnCloud headers and UDP headers and only a small additional IP fragmentation header is added. I think in total that is the 2% you are seeing.
The downside is that if one of the outer packets gets lost on the wire, the whole inner packet is lost. If your MTU is 40 times higher than normal, 40 outer packets will be sent for one inner packet, so your packet loss rate will be about 40 times higher than normal (e.g. roughly 4% instead of 0.1%). It will also influence your delay distribution negatively.
If the delay distribution and the loss ratio matter to you, this is a bad idea. If your line is pretty good and you just care about throughput, I think it makes sense to increase the MTU.
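
For concreteness, a quick sketch of that loss amplification, assuming independent losses on the outer packets:

```python
# Effective inner-packet loss when each inner packet is split into n outer
# packets, assuming independent losses at rate p per outer packet.
def inner_loss(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(f"{inner_loss(0.001, 1):.2%}")   # 0.10% without fragmentation
print(f"{inner_loss(0.001, 40):.2%}")  # 3.92%, i.e. roughly 40x worse
```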

@lorenz lorenz (Author) commented Feb 2, 2016

Thanks for your reply. I don't really have any packet loss, as everything in between is a 10 Gbps backbone with two 1 Gbps access ports, so this should be fine for my use case. Thanks for your great work on VpnCloud!

@romanrm romanrm commented Aug 6, 2020

> For 10bps interfaces, the default MTU is already 9000, so VpnCloud should run at a MTU of 8800.

Firstly, there's a typo: it should be 10 Gbps, not 10 bps (hopefully).

Secondly, why does the overhead double at 9000 MTU? If you needed 100 bytes at 1500 to fit your headers, why does that become 200 bytes at 9000?

@dswd dswd (Owner) commented Aug 6, 2020

@romanrm I fixed that typo, thanks for reporting. And you are absolutely right: an MTU of 8900 is also fine.
BTW: VpnCloud 2, which I am working on right now, will automatically set the perfect MTU on the interface for you.
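
To illustrate the overhead point: the per-packet header overhead is roughly constant, so the usable tunnel MTU is simply the link MTU minus that constant. A sketch assuming ~100 bytes of overhead (the exact figure depends on the configuration):

```python
# The header overhead (IP + UDP + VpnCloud) is roughly constant per packet,
# so the tunnel MTU is just the link MTU minus that constant.
# ~100 bytes is an assumption based on the numbers in this thread.
OVERHEAD = 100

for link_mtu in (1500, 9000):
    print(f"link MTU {link_mtu}: tunnel MTU ~{link_mtu - OVERHEAD}")

# link MTU 1500: tunnel MTU ~1400
# link MTU 9000: tunnel MTU ~8900
```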
