NanoMQ: ultra low latency messaging kernel
Latest commit 7ba9e67, Sep 23, 2014, by Erik Rigtorp


NanoMQ is an ultra low latency messaging kernel. It enables messaging between processes in much the same way as POSIX message queues, but at sub-microsecond latencies. NanoMQ uses efficient wait-free ring buffers arranged in a complete graph: each node can send messages to any other node. Receiving nodes need to exclusively own a CPU core or HyperThread; the ultra low latency is achieved by avoiding context switches.


Just run make. Requires recent GCC. Tests require Google Test.


On my Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz I get an average RTT of 250 ns and a throughput of 13M msg/s for a two-node setup with 100 byte messages. It would be interesting to see measurements on multi-CPU systems, and how the latency depends on which cache level the cores share.

Use case

In high frequency trading (HFT) systems you want to separate feed handlers and order management systems (OMS) from strategy code in order to increase fault tolerance and support live deployment of bug fixes or new strategies. NanoMQ allows you to split these parts of a trading system into separate processes while keeping communication latencies to a fraction of a microsecond.


Git repository:


Free use of this software is granted under the terms of the GNU General Public License (GPL). For details see the file COPYING included with the NanoMQ distribution.
