Any results to share? #13
Comments
I don't have any results to share. Any results will depend on how you have tuned the kernel. Also note that for the TCP and UDP benchmarks I have not added support for the Linux […]

The origin of this tool was that I was a heavy user of Erlang before they added support for CFFI, and the only way to integrate C/C++ libs was to use TCP or Unix domain sockets. At that time, 10 years ago, on Solaris TCP and Unix domain sockets had the same latency for IPC, but on Linux TCP sockets were much slower (5-10x, 3 vs 10 µs).

Today I would recommend using kernel bypass for inter-server communication and SHM for inter-process communication. My two lock-free queues can be trivially modified for use as IPC:
https://github.com/rigtorp/SPSCQueue
https://github.com/rigtorp/MPMCQueue
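To illustrate the SHM-for-IPC suggestion, here is a minimal sketch of a single-producer/single-consumer ring buffer placed in a POSIX shared memory segment. This is not the rigtorp::SPSCQueue API, just the general idea under simplifying assumptions; the segment name "/spsc_demo", the capacity, and the spin-wait loops are made up for the example.

```cpp
// Sketch: fixed-size SPSC ring buffer in POSIX shared memory.
#include <atomic>
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

struct ShmQueue {
  static constexpr size_t kCapacity = 1024;   // must be a power of two
  std::atomic<size_t> head{0};                // advanced by the consumer
  std::atomic<size_t> tail{0};                // advanced by the producer
  int slots[kCapacity];

  bool push(int v) {
    size_t t = tail.load(std::memory_order_relaxed);
    if (t - head.load(std::memory_order_acquire) == kCapacity) return false;  // full
    slots[t & (kCapacity - 1)] = v;
    tail.store(t + 1, std::memory_order_release);
    return true;
  }

  bool pop(int &v) {
    size_t h = head.load(std::memory_order_relaxed);
    if (h == tail.load(std::memory_order_acquire)) return false;  // empty
    v = slots[h & (kCapacity - 1)];
    head.store(h + 1, std::memory_order_release);
    return true;
  }
};

int main(int argc, char **argv) {
  // Run once with "producer" as the first argument, once without it.
  bool producer = argc > 1 && std::strcmp(argv[1], "producer") == 0;

  int fd = shm_open("/spsc_demo", O_CREAT | O_RDWR, 0600);
  if (fd < 0) { perror("shm_open"); return 1; }
  if (ftruncate(fd, sizeof(ShmQueue)) != 0) { perror("ftruncate"); return 1; }

  void *mem = mmap(nullptr, sizeof(ShmQueue), PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
  if (mem == MAP_FAILED) { perror("mmap"); return 1; }

  // NOTE: a real tool would construct the queue exactly once; this sketch
  // relies on a freshly created segment being zero-filled.
  auto *q = static_cast<ShmQueue *>(mem);

  if (producer) {
    for (int i = 0; i < 10; ++i)
      while (!q->push(i)) {}            // spin until there is room
  } else {
    for (int i = 0; i < 10; ++i) {
      int v;
      while (!q->pop(v)) {}             // spin until data arrives
      std::printf("got %d\n", v);
    }
  }
  munmap(mem, sizeof(ShmQueue));
  close(fd);
  return 0;
}
```

Both processes map the same segment, so a push in the producer becomes visible to the consumer through the acquire/release pair on head and tail, without any syscall on the hot path.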
How do you use kernel bypass for inter-server communication, OpenOnload? I've actually done some experiments with SHM-based message queues:
OpenOnload does accelerate IPC, but I'm not using it.
What kernel bypass method do you recommend for inter-server communication such as TCP, which would go through the NIC?
TCPDirect |
Do you have any benchmark results to share?
And why not use shared memory for IPC? It should have the lowest latency.