ZK is an application programmer's interface to the Apache ZooKeeper server. It is based on the zookeeper gem which is a multi-Ruby low-level driver. Currently MRI 1.8.7, 1.9.2, 1.9.3, REE, and JRuby are supported. Rubinius 2.0.testing is supported-ish (it's expected to work, but upstream is unstable, so YMMV).
ZK is licensed under the MIT license.
See the RELEASES file for information on what changed between versions.
This library is heavily used in a production deployment and is actively developed and maintained.
Development is sponsored by Snapfish and has been generously released to the Open Source community by HPDC, L.P.
ZooKeeper is a multi-purpose tool that is designed to allow you to write code that coordinates many nodes in a cluster. It can be used as a directory service, a configuration database, and can provide cross-cluster locking, leader election, and group membership (to name a few). It presents to the user what looks like a distributed file system, with a few important differences: every node can have children and data, and there is a 1MB limit on data size for any given node. ZooKeeper provides atomic semantics and a simple API for manipulating data in the hierarchy.
One of the most useful aspects of ZooKeeper is the ability to set "watches" on nodes. This allows one to be notified when a node has been deleted, created, changed, or has had its list of child znodes modified. The asynchronous nature of these watches enables you to write code that can react to changes in your environment without polling and busy-waiting.
Znodes can be ephemeral, which means that when the connection that created them goes away, they're automatically cleaned up, and all the clients that were watching them are notified of the deletion. This is an incredibly useful mechanism for providing presence in a cluster ("which of my thingamabobers are up?"). If you've ever run across a stale pid file or lock, you can imagine how useful this feature can be.
Znodes can also be created as sequence nodes, which means that beneath a given path, a node can be created with a given prefix and assigned a unique integer. This, along with the ephemeral property, provides the basis for most of the coordination classes such as groups and locks.
ZooKeeper is easy to deploy in a Highly Available configuration, and the clients natively understand the clustering and how to resume a session transparently when one of the cluster nodes goes away.
The zookeeper gem provides a low-level, cross-platform library for interfacing with ZooKeeper. While it is full featured, it only handles the basic operations that the driver provides. ZK implements the majority of the recipes in the ZooKeeper documentation, plus a number of other conveniences for a production environment. ZK aims to be to ZooKeeper what Sequel or ActiveRecord is to the MySQL or Postgres drivers (not that ZK is attempting to provide an object persistence system, but rather a higher-level API that users can develop applications with). Among other things, ZK provides:
- a robust lock implementation (both shared and exclusive locks)
- a leader election implementation with both "leader" and "observer" roles
- a higher-level interface to the ZooKeeper callback/watcher mechanism than the zookeeper gem provides
- a simple threadpool implementation
- a bounded, dynamically-growable (threadsafe) client pool implementation
- a recursive Find class (like the Find module in ruby-core)
- unix-like rm_rf and mkdir_p methods
- an extension for the Mongoid ORM to provide advisory locks on mongodb records
In addition to all of that, I would like to think that the public API ZK::Client provides is more convenient to use for the common (synchronous) case. For use with EventMachine there is zk-eventmachine, which provides a convenient API for writing evented code that uses the ZooKeeper server.
- All users of resque or other libraries that depend on `fork()` are encouraged to upgrade immediately. This version of ZK features the `zookeeper-1.1.0` gem with a completely rewritten backend that provides true fork safety. The rules still apply (you must call `#reopen` on your client as soon as possible in the child process), but you can be assured a much more stable experience.
- Added a new `:ignore` option for convenience when you don't care if an operation fails. In the case of a failure, the method will return nil instead of raising an exception. This option works for most of the basic methods; `stat` will ignore the option (because it doesn't care about the state of a node).
```ruby
# so instead of having to do:

begin
  zk.delete('/some/path')
rescue ZK::Exceptions::NoNode
end

# you can do

zk.delete('/some/path', :ignore => :no_node)
```
- MASSIVE fork/parent/child test around event delivery and much greater stability expected for linux (with the zookeeper-1.0.3 gem). Again, please see the documentation on the wiki about proper fork procedure.
- fix a bug where a forked client would not have its 'outstanding watches' cleared, so some events would never be delivered
Phusion Passenger and Unicorn users are encouraged to upgrade!
`fork()`: ZK should now work reliably after a `fork()` if you call `reopen()` ASAP in the child process (before continuing any ZK work). Additionally, your event-handler blocks (set up with `zk.register`) will still work in the child. You will have to make calls like `zk.stat(path, :watch => true)` to tell ZooKeeper to notify you of events (as the child will have a new session), but everything should work.
See the fork-handling documentation on the wiki.
You are STRONGLY ENCOURAGED to go and look at the CHANGELOG from the zookeeper 1.0.0 release.
NOTICE: This release uses the 1.0 release of the zookeeper gem, which has had a MAJOR REFACTORING of its namespaces. Included in that zookeeper release is a compatibility layer that should ease the transition, but any references to the `Zookeeper*` hierarchy should be changed.
Refactoring related to the zookeeper gem; we use all the new names internally now.
Create a new Subscription class that will be used as the basis for all subscription-type things.
Add new Locker features!
- `LockerBase#assert!` - will raise an exception if the lock is not held. This check is not only for local in-memory "are we locked?" state, but will check the connection state and re-run the algorithmic tests that determine if a given Locker implementation actually has the lock.
- `LockerBase#acquirable?` - an advisory method that checks if any condition would prevent the receiver from acquiring the lock.
- Deprecation of the `unlock!` methods. These may change to be exception-raising in a future release, so document and refactor so that `unlock` is the way to go.
- Fixed a race condition in `event_catcher_spec.rb` that would cause 100% CPU usage and a hang.
ZK strives to be a complete, correct, and convenient way of interacting with ZooKeeper. There are a few things to be aware of:
In versions <= 0.9 there is only one event dispatch thread. It is very important that you don't block the event delivery thread. In 1.0, there is one delivery thread by default, but you can adjust the level of concurrency, allowing more control and convenience for building your event-driven app.
ZK uses threads. You will have to use synchronization primitives if you want to avoid getting hurt. There are use cases that do not require you to think about this, but as soon as you want to register for events, you're using multiple threads.
It is very important that you not ignore connection state events if you're using watches.
ACLS: HOW DO THEY WORK?! ACL support is mainly faith-based now. I have not had a need for ACLs, and the authors of the upstream twitter/zookeeper code also don't seem to have much experience with them/use for them (purely my opinion, no offense intended). If you are using ACLs and you find bugs or have suggestions, I would much appreciate feedback or examples of how they should work so that support and tests can be added.
ZK::Client supports asynchronous calls of all basic methods (get, set, delete, etc.); however, these versions are kind of inconvenient to use. For a fully evented stack, try zk-eventmachine, which is designed to be compatible and convenient to use in event-driven code.
- papertrail: Hosted log management service
- redis_failover: Redis client/server failover management system
- DCell: Distributed ruby objects, built on top of the super cool Celluloid framework.
- I'm usually hanging out in IRC on freenode.net in the BRAND NEW #zk-gem channel.
- if you really want to, you can also reach me via twitter @slyphon