MTU settings allow much better performance #2
When I artificially bump up the MTU on the interface close to the maximum (going above 60000 seems to lower throughput again), I get ~2% more throughput (950 Mbps) than normal iperf wirespeed (935 Mbps) at ~10% single-core load on an AES-NI-capable CPU with encryption enabled. This was tested over a Gigabit LAN (MTU 1500, so no jumbo frames or anything special). I also tested it over a 1 Gbps fiber internet connection with two servers in the same city. The results are pretty much the same: around 2% better than wirespeed at a bit more than 10% single-core load, but there is other stuff running on those servers which might cause the additional load. Tests were run on hardware with 3770/4770(K) CPUs and Intel NICs.
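For reference, the setup described above amounts to something like this (a sketch; the interface name `vpncloud0` and the peer address are illustrative, not taken from an actual config):

```shell
# Bump the MTU on the tunnel interface (name is illustrative);
# values above ~60000 seemed to lower throughput again.
ip link set dev vpncloud0 mtu 60000

# Then measure throughput across the tunnel, e.g. with iperf3
# against a peer's tunnel address (address is illustrative):
iperf3 -c 10.0.0.2
```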
I believe this is because it saves a lot of context switches between kernel and userspace, from roughly 90,000/s down to around 2,000/s. It also lowers the overhead inside vpncloud itself, since it has to process fewer packets. The Linux kernel is apparently very efficient at fragmenting the resulting large packets, or maybe Intel NICs hardware-accelerate this.
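As a back-of-the-envelope check (just the arithmetic behind the numbers above, assuming one kernel/userspace round trip per tunnel packet), the packet rates at ~935 Mbps line up with the observed syscall rates:

```python
# Rough packets-per-second estimate for a given inner MTU,
# assuming one packet per kernel/userspace round trip.
RATE_BPS = 935e6  # observed iperf throughput

def packets_per_second(mtu: int) -> float:
    return RATE_BPS / 8 / mtu

for mtu in (1500, 60000):
    # MTU 1500 gives ~78,000 packets/s, MTU 60000 gives ~1,900 packets/s,
    # roughly matching the 90,000/s -> 2,000/s context-switch figures.
    print(f"MTU {mtu:5}: ~{packets_per_second(mtu):,.0f} packets/s")
```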
Do you see any downside to doing that or is that a good idea?
Setting the MTU higher than the physical medium allows causes IP to fragment each inner packet into several outer packets and reassemble them on the receiving side. This has the advantage that VpnCloud has to process fewer packets and therefore uses less CPU (most of the work will be memcpy) and incurs fewer context switches. It also saves some bytes on the wire, as fewer bytes are spent on VpnCloud and UDP headers and only a small additional IP fragmentation header is added per fragment. I think that, in total, accounts for the 2% you are seeing.
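To illustrate the header arithmetic (a sketch; the VpnCloud header size used here is an assumed placeholder, not the real protocol constant):

```python
import math

# Header sizes in bytes. VPN_HDR is an assumed placeholder value,
# not taken from the actual VpnCloud protocol.
IP_HDR, UDP_HDR, VPN_HDR = 20, 8, 64
LINK_MTU = 1500                    # physical Ethernet MTU
FRAG_PAYLOAD = LINK_MTU - IP_HDR   # usable bytes per IP fragment

def overhead_small_mtu(total: int, inner: int = 1400) -> int:
    """Small inner MTU: every packet carries IP+UDP+VpnCloud headers."""
    pkts = math.ceil(total / inner)
    return pkts * (IP_HDR + UDP_HDR + VPN_HDR)

def overhead_large_mtu(total: int) -> int:
    """Large inner MTU: one UDP+VpnCloud header set per large packet,
    then the kernel fragments it, adding only an IP header per fragment."""
    ip_payload = total + VPN_HDR + UDP_HDR
    frags = math.ceil(ip_payload / FRAG_PAYLOAD)
    return frags * IP_HDR + UDP_HDR + VPN_HDR

payload = 60000
print(overhead_small_mtu(payload), "header bytes with a small inner MTU")
print(overhead_large_mtu(payload), "header bytes with a large inner MTU")
```

With these assumed numbers the large-MTU case spends a few times fewer bytes on headers for the same payload; the exact saving depends on the actual VpnCloud header size.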