Using raw sockets and kernel bypassing to improve performance #188
Comments
Tinc already supports raw sockets: set […]. As for UDP, sendmmsg() and recvmmsg() will reduce system call overhead by processing many packets in one go. However, the tun device does not expose any equivalent functionality to user space, so packets to and from a tun interface still carry significant system call overhead. The easiest way to improve performance is to make tinc multi-threaded, but that's easier said than done.
What about bridging tinc to the host with something like Open vSwitch, or macvtap?
I'm not sure that is going to work. The problem is that a raw socket captures packets coming into the device and sends them back out of the device, without those packets ever being processed any further by the kernel. In contrast, a tun device captures anything coming from the kernel and sends packets back to the kernel. But if you are interested, just go ahead and try it out. If it works and is usable, then it might be worth spending time implementing PACKET_MMAP.
After some thought and experiment, I think the following should work (Linux only), as it makes tinc behave like a VM running side by side with the host:
From my test, veth interfaces are quite fast, and transferring data across network namespaces does not require a user/kernel context switch. What do you think?
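The namespace-plus-veth idea above could look roughly like this (a sketch, not a tested configuration; the names `tincns`, `veth-host`, `veth-tinc`, and the 10.0.0.0/24 addresses are placeholders, and the commands require root):

```shell
# Create a namespace for tinc and a veth pair linking it to the host.
ip netns add tincns
ip link add veth-host type veth peer name veth-tinc
ip link set veth-tinc netns tincns

# Address and bring up both ends of the pair.
ip addr add 10.0.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec tincns ip addr add 10.0.0.2/24 dev veth-tinc
ip netns exec tincns ip link set veth-tinc up
ip netns exec tincns ip link set lo up

# Run tinc inside the namespace so host traffic reaches it over the veth pair:
# ip netns exec tincns tincd -n <netname> -D
```

The point of the sketch is that packets crossing the veth pair stay inside the kernel, so the host-to-tinc hop avoids an extra user/kernel round trip.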
I tried this, but it doesn't work unfortunately.
Did you have a look at KNI in DPDK? Olivier
#110 appears to have a possibly workable solution to this. Raw sockets (and hence sendmmsg & writev) are not the solution; they are on the wrong "side" of the tun/tap, unfortunately.
I'm closing this issue, as raw sockets are not a way to improve tinc. However, syscall overhead is quite significant in tinc, so if there are other ways to address it, another ticket should be opened. There's also #275, which tries to address this.
There is a lot of kernel-bypass work in the Linux network stack, such as PACKET_MMAP, DPDK, and the recent eBPF/XDP framework. I'm wondering whether tinc could take advantage of these tricks, or even use raw sockets instead of the conventional UDP + tun/tap interface, to reduce kernel overhead.
WireGuard has drawn a lot of attention recently, but I think it is not much superior to tinc other than that it sits in the kernel, while handling only a small part of tinc's job. Also, raw sockets are not Linux-specific, so if tinc could use raw sockets, it could be a state-of-the-art, high-performance, cross-platform solution.