Any results to share? #13

Closed
MengRao opened this issue Sep 25, 2018 · 5 comments

Comments


MengRao commented Sep 25, 2018

Do you have any benchmark results to share?
Also, why not use shared memory for IPC? It should have the lowest latency.


rigtorp commented Oct 10, 2018

I don't have any results to share. Any results will depend on how you have tuned the kernel. Also note that for the TCP and UDP benchmarks I have not added support for the Linux SO_BUSY_POLL socket option.
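For reference, a minimal sketch of what enabling busy polling on a socket might look like, assuming Linux headers new enough to define SO_BUSY_POLL (kernel 3.11+); the function and parameter names are illustrative, and setting values above the busy_read sysctl may require CAP_NET_ADMIN:

```cpp
// Sketch: ask the kernel to busy-poll the device queue on blocking reads
// instead of sleeping, for roughly busy_poll_usec microseconds.
#include <sys/socket.h>
#include <cstdio>

bool enable_busy_poll(int fd, int busy_poll_usec) {
  if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &busy_poll_usec,
                 sizeof(busy_poll_usec)) != 0) {
    perror("setsockopt(SO_BUSY_POLL)");
    return false;
  }
  return true;
}
```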

The origin of this tool: I was a heavy user of Erlang before they added support for CFFI, and the only way to integrate C/C++ libraries was over TCP or Unix domain sockets. At that time, about 10 years ago, TCP and Unix domain sockets had the same IPC latency on Solaris, but on Linux TCP sockets were much slower (5-10x; roughly 3 µs vs 10 µs).

Today I would recommend kernel bypass for inter-server communication and shared memory (SHM) for inter-process communication. My two lock-free queues can be trivially modified for use as IPC: https://github.com/rigtorp/SPSCQueue and https://github.com/rigtorp/MPMCQueue
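As an illustration of the general approach (not the actual SPSCQueue code), here is a sketch that places a simplified fixed-size lock-free SPSC ring buffer in a POSIX shared-memory segment so two processes can exchange messages without the kernel in the hot path; names like `ShmSpscQueue` and `open_queue` are made up for this example:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <fcntl.h>
#include <new>
#include <sys/mman.h>
#include <unistd.h>

struct ShmSpscQueue {
  static constexpr size_t kCapacity = 1024;   // power of two
  alignas(64) std::atomic<size_t> head{0};    // consumer index
  alignas(64) std::atomic<size_t> tail{0};    // producer index
  uint64_t slots[kCapacity];

  bool try_push(uint64_t v) {
    size_t t = tail.load(std::memory_order_relaxed);
    if (t - head.load(std::memory_order_acquire) == kCapacity) return false;  // full
    slots[t % kCapacity] = v;
    tail.store(t + 1, std::memory_order_release);
    return true;
  }
  bool try_pop(uint64_t &v) {
    size_t h = head.load(std::memory_order_relaxed);
    if (h == tail.load(std::memory_order_acquire)) return false;  // empty
    v = slots[h % kCapacity];
    head.store(h + 1, std::memory_order_release);
    return true;
  }
};

// Map (and, in the creating process, construct) the queue in a named
// shared-memory segment. The non-creating process attaches to the object
// the creator already constructed.
ShmSpscQueue *open_queue(const char *name, bool create) {
  int fd = shm_open(name, create ? O_CREAT | O_RDWR : O_RDWR, 0600);
  if (fd < 0) return nullptr;
  if (create && ftruncate(fd, sizeof(ShmSpscQueue)) != 0) { close(fd); return nullptr; }
  void *p = mmap(nullptr, sizeof(ShmSpscQueue), PROT_READ | PROT_WRITE,
                 MAP_SHARED, fd, 0);
  close(fd);
  if (p == MAP_FAILED) return nullptr;
  return create ? new (p) ShmSpscQueue() : static_cast<ShmSpscQueue *>(p);
}
```

The producer process calls `open_queue("/ipc-demo", true)` and `try_push`, the consumer calls `open_queue("/ipc-demo", false)` and `try_pop`; this only works because the indices are lock-free atomics living inside the shared mapping itself.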


MengRao commented Oct 10, 2018


How do you use kernel bypass for inter-server communication? OpenOnload?

I've actually done some experiments with SHM-based message queues:
https://github.com/MengRao/SPSC_Queue
https://github.com/MengRao/MPSC_Queue
https://github.com/MengRao/PubSubQueue


rigtorp commented Oct 11, 2018

OpenOnload does accelerate IPC, but I'm not using it.


MengRao commented Oct 11, 2018

What kernel bypass method do you recommend for inter-server communication such as TCP, which has to go through the NIC?
As far as I know, OpenOnload + a Solarflare NIC is an option that's not hard to use.


rigtorp commented Oct 11, 2018

TCPDirect
