rbczmq - binding for the high level ZeroMQ C API

© 2011 Lourens Naudé (methodmissing), James Tucker (raggi), with API guidance from the czmq project.

About ZeroMQ

In a nutshell, ZeroMQ is a hybrid networking library / concurrency framework. I quote the ØMQ Guide:

“ØMQ (ZeroMQ, 0MQ, zmq) looks like an embeddable networking library but acts like a concurrency framework. It gives you sockets that carry whole messages across various transports like in-process, inter-process, TCP, and multicast. You can connect sockets N-to-N with patterns like fanout, pub-sub, task distribution, and request-reply. It's fast enough to be the fabric for clustered products. Its asynchronous I/O model gives you scalable multicore applications, built as asynchronous message-processing tasks. It has a score of language APIs and runs on most operating systems. ØMQ is from iMatix and is LGPL open source.”

Another ZeroMQ extension?

This extension bundles both ZeroMQ (libzmq) and CZMQ (libczmq) and as such has no third party dependencies other than a Ruby distribution and a C compiler. My goals for this project were:

  • Access to a powerful messaging technology without having to install a bunch of dependencies

  • A stable and mostly version agnostic (2.x and 3.x series) API

  • Leverage and build upon a mature and maintained client (CZMQ)

  • Target Ruby distributions with a stable and comprehensive C API (MRI and Rubinius; JRuby support is a work in progress)

  • Support for running sockets in Threads - both green and native threads should be supported and preempt properly with edge-triggered multiplexing from libzmq.

  • Integrate with the Garbage Collector in a predictable way. CZMQ and the ZeroMQ framework are very fast and can allocate an enormous number of objects in no time when using the Frame, Message and String wrappers. Resources such as socket connections should be cleaned up when objects are finalized as well.

  • Expose Message envelopes and Frames to developers as well to allow for higher level protocols and constructs.

  • Enforce well known best practices, such as restricting socket interactions to the thread the socket was created in (see the sketch below).
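
A minimal sketch of that last point, using only calls that appear in the usage examples below: each worker creates its own PULL socket inside the thread that will use it, instead of sharing one socket across threads. The require line and the inproc endpoint name are illustrative assumptions, not part of the library's documented examples.

require 'zmq' # rbczmq's entry point (assumed)

ctx = ZMQ::Context.new

# The PUSH socket is created and used on the main thread only.
push = ctx.bind(:PUSH, "inproc://thread-ownership.example")

workers = []
3.times do
  workers << Thread.new do
    # Each worker creates its own PULL socket in the thread that uses it;
    # sockets are never shared between threads.
    pull = ctx.connect(:PULL, "inproc://thread-ownership.example")
    pull.recv
  end
end

sleep 0.5 # give the inproc connections time to establish
3.times{|i| push.send("job-#{i}") }
workers.each{|t| t.join }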


Performance

In most cases ZeroMQ can achieve higher throughput than raw TCP thanks to its message batching technique. Please have a look through the Performance section of the ZeroMQ FAQ for further implementation details.

Some notes about these benchmarks:

  • Messages go through the full network stack on localhost (TCP/IP transport)

  • The sender and receiver endpoints are Ruby processes which coerce transferred data to native String objects on both ends.

  • There's thus a definite method dispatch cost in addition to intermittent pauses from the Garbage Collector

  • It's still plenty fast for most soft real-time applications and we're able to push in excess of 1 gigabit/s with 1024 byte payloads. A language with automatic memory management cannot easily meet hard real-time guarantees anyway.

TCP/IP loopback, 100k messages, 100 byte payloads

Lourenss-MacBook-Air:rbczmq lourens$ MSG_COUNT=100000 MSG_SIZE=100 ruby perf/pair.rb
Local pids: 2042
Remote pid: 2043
Sent 100000 messages in 0.3933s ...
[2043] Memory used before: 1128kb
[2043] Memory used after: 3012kb
[2042] Memory used before: 1120kb
====== [2042] transfer stats ======
message encoding: string
message size: 100 [B]
message count: 100000
mean throughput: 227978 [msg/s]
mean throughput: 182.383 [Mb/s]
[2042] Memory used after: 22432kb
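
As a rough sanity check on how the two throughput figures relate, megabits per second is simply messages per second multiplied by the payload size in bits; this is plain Ruby arithmetic, nothing library-specific:

msgs_per_sec  = 227_978                          # mean throughput reported above [msg/s]
payload_bytes = 100                              # MSG_SIZE for this run [B]
msgs_per_sec * payload_bytes * 8 / 1_000_000.0   # => 182.3824, matching the 182.383 [Mb/s] reported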

TCP/IP loopback, 100k messages, 1024 byte payloads

Lourenss-MacBook-Air:rbczmq lourens$ MSG_COUNT=100000 MSG_SIZE=1024 ruby perf/pair.rb
Local pids: 2027
Remote pid: 2028
Sent 100000 messages in 0.641198s ...
[2028] Memory used before: 1120kb
[2028] Memory used after: 12776kb
[2027] Memory used before: 1144kb
====== [2027] transfer stats ======
message encoding: string
message size: 1024 [B]
message count: 100000
mean throughput: 160756 [msg/s]
mean throughput: 1316.919 [Mb/s]
[2027] Memory used after: 189004kb

TCP/IP loopback, 100k messages, 2048 byte payloads

Lourenss-MacBook-Air:rbczmq lourens$ MSG_COUNT=100000 MSG_SIZE=2048 ruby perf/pair.rb
Local pids: 2034
Remote pid: 2035
Sent 100000 messages in 0.94703s ...
[2035] Memory used before: 1140kb
[2035] Memory used after: 7212kb
[2034] Memory used before: 1128kb
====== [2034] transfer stats ======
message encoding: string
message size: 2048 [B]
message count: 100000
mean throughput: 123506 [msg/s]
mean throughput: 2023.528 [Mb/s]
[2034] Memory used after: 277712kb

Have a play around with the performance runner and the other socket pairs as well.
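
For orientation, here is a stripped-down sketch of what such a runner can look like, built only from the socket calls shown in the examples below. The real perf/pair.rb script forks a separate receiver process and also reports memory usage, so treat this single-process version as an approximation; the require line and the TCP port are assumptions.

require 'zmq' # rbczmq's entry point (assumed)

msg_count = (ENV['MSG_COUNT'] || 100_000).to_i
msg_size  = (ENV['MSG_SIZE']  || 100).to_i
payload   = 'x' * msg_size

ctx = ZMQ::Context.new
receiver = ctx.bind(:PAIR, "tcp://127.0.0.1:5555")

sender_thread = Thread.new do
  # The sending socket is created inside the thread that uses it.
  sender = ctx.connect(:PAIR, "tcp://127.0.0.1:5555")
  msg_count.times{ sender.send(payload) }
end

start = Time.now
msg_count.times{ receiver.recv }
elapsed = Time.now - start
sender_thread.join

msgs_per_sec  = msg_count / elapsed
mbits_per_sec = msgs_per_sec * msg_size * 8 / 1_000_000
puts "message size: #{msg_size} [B]"
puts "message count: #{msg_count}"
puts "mean throughput: %d [msg/s]" % msgs_per_sec
puts "mean throughput: %.3f [Mb/s]" % mbits_per_sec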


Usage

As a first step I'd highly recommend you read (and reread) the zguide, as understanding the supported messaging patterns and topologies is fundamental to getting the most from this binding. Here are a few basic examples. Please refer to the documentation and test cases for detailed usage information.

Basic send / receive, in process transport

ctx = ZMQ::Context.new
rep = ctx.socket(:PAIR)
port = rep.bind("inproc://send.receive")
req = ctx.socket(:PAIR)
req.connect("inproc://send.receive")
req.send("ping") # true
rep.recv # "ping"


Fair-queued work distribution to a set of worker threads

ctx = ZMQ::Context.new
push = ctx.bind(:PUSH, "inproc://push-pull-distribution.test")
threads = []
5.times do
  threads << Thread.new do
    pull = ctx.connect(:PULL, "inproc://push-pull-distribution.test")
    pull.recv
  end
end

sleep 0.5 # avoid "slow joiner" syndrome
messages = %w(a b c d e f)
messages.each do |m|
  push.send m
end

threads.each{|t| t.join }
threads.all?{|t| messages.include?(t.value) } # true


Async request / reply routing

ctx = ZMQ::Context.new
router = ctx.bind(:ROUTER, "inproc://routing-flow.test")
dealer = ctx.socket(:DEALER)
dealer.identity = "xyz"
dealer.connect("inproc://routing-flow.test")

router.sendm("xyz") # address the dealer by its identity, then send the payload
router.send("request")
dealer.recv # "request"

dealer.send("reply")
router.recv # "xyz"
router.recv # "reply"


Send / receive frames

ctx = ZMQ::Context.new
rep = ctx.socket(:PAIR)
rep.bind("inproc://frames.test") # endpoint name is illustrative
req = ctx.socket(:PAIR)
req.connect("inproc://frames.test")
ping = ZMQ::Frame("ping")
req.send_frame(ping) # true
rep.recv_frame # ZMQ::Frame("ping")
rep.send_frame(ZMQ::Frame("pong")) # true
req.recv_frame # ZMQ::Frame("pong")
rep.send_frame(ZMQ::Frame("pong")) # true
req.recv_frame_nonblock # nil
sleep 0.3
req.recv_frame_nonblock # ZMQ::Frame("pong")


Send / receive messages

ctx = ZMQ::Context.new
rep = ctx.socket(:PAIR)
rep.bind("inproc://messages.test") # endpoint name is illustrative
req = ctx.socket(:PAIR)
req.connect("inproc://messages.test")

msg = ZMQ::Message.new
msg.push ZMQ::Frame("header")
msg.push ZMQ::Frame("body")

req.send_message(msg) # nil

recvd_msg = rep.recv_message
recvd_msg.class # ZMQ::Message
recvd_msg.pop # ZMQ::Frame("header")
recvd_msg.pop # ZMQ::Frame("body")




Requirements

  • A POSIX compliant OS, known to work well on Linux, BSD variants and Mac OS X

  • Ruby MRI 1.8, 1.9 or Rubinius (JRuby C API support forthcoming)

  • A C compiler


Rubygems installation

gem install rbczmq

Building from source

git clone

Running tests

rake test

OS X notes:

If you are installing the package on a new Mac, ensure you have libtool and autoconf installed. You can get them with the Homebrew packaging system:

brew install libtool autoconf automake


TODO

  • ZMQ::Message#save && ZMQ::Message.load

  • Optimize zloop handler callbacks (perftools)

  • OS X leaks utility

  • Handle GC issue with timers in loop callbacks

  • czmq send methods aren't non-blocking by default

  • Revisit the ZMQ::Loop API

  • RDoc fails on mixed C and Ruby source files that document the same constants

  • GC guards to prevent recycling objects being sent / received.

  • Sockets can bind && connect to multiple endpoints - account for that

  • Watch out for further cases where REQ / REP pairs could raise EFSM

  • Do not clobber local scope from macros (James's commit in master)

  • Incorporate examples into CI as well

  • Zero-copy semantics for frames

Contact, feedback and bugs

This project is still a work in progress and I'm looking for guidance on API design, use cases and any outlier experiences. Please log bugs and suggestions on the project's issue tracker.
