2.0.0 (BigCouch 0.4.0)
Major new changes since version 1.3 include:
- Shard allocation based on "zones". A node can be tagged as belonging to a particular zone, and mem3 will do its best to allocate shard copies in distinct zones.
- Support for the _replicator DB. Mem3 coordinates the local node's efforts to run its share of the replication jobs specified in the _replicator database. Nodes in the cluster mediate a replication if the document specifying that replication is stored on a shard whose primary copy is hosted by that node.
- Caching layers have been updated to look for binary keys instead of atoms. This matches the new API in CouchDB 1.1.0, but it means that mem3 2.0 is incompatible with CouchDB 1.0.x / BigCouch 0.3.x.
- The covering set of shards for a stale=ok view read (mem3:ushards/1) is now chosen evenly from all the nodes hosting shard copies.
- The error handling surrounding updates to the shard_db has been improved. Body-level conflicts (as opposed to races to apply the same update) are detected and reported.
- Nodes attempt to connect to their peers on startup instead of waiting for the first request that requires them to do so.
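The zone-aware placement above can be sketched roughly as follows. This is a minimal illustration in Python, not mem3's actual Erlang implementation; the names `Node` and `place_copies` are hypothetical, and it assumes only that copies are spread round-robin across zones so that distinct zones are exhausted before any zone receives a second copy.

```python
# Hypothetical sketch of zone-aware shard placement; names and
# data shapes are illustrative, not mem3's real API.
from collections import namedtuple
from itertools import cycle

Node = namedtuple("Node", ["name", "zone"])

def place_copies(nodes, n_copies):
    """Pick n_copies nodes for one shard, preferring distinct zones."""
    if n_copies > len(nodes):
        raise ValueError("more copies requested than nodes available")
    by_zone = {}
    for node in nodes:
        by_zone.setdefault(node.zone, []).append(node)
    placement = []
    # Round-robin across zones: each zone gives up one node per pass,
    # so copies land in distinct zones while unused zones remain.
    zones = cycle(sorted(by_zone))
    while len(placement) < n_copies:
        zone = next(zones)
        if by_zone[zone]:
            placement.append(by_zone[zone].pop(0))
    return placement
```

With three zones and three requested copies, each copy lands in a different zone; only when copies outnumber zones does a zone host a second copy.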
1.1.1 (BigCouch 0.3.0)
- Fix a bug handling documents that hash to a boundary between shards
- Add a ushards/1 function to return exactly one copy (and always the same copy, modulo node failures) of each shard in a DB.
- Automatically replicate data between shard copies (BigCouch #1).
- Enable tagging of database shards with a common suffix.
- Fix sync triggers based on nodeup/nodedown events.
- Synchronize _users database between cluster nodes.
- New streamlined replicator used for all mem3-triggered internal replications.
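The boundary bug fixed above comes from partitioning a hash ring into contiguous per-shard ranges: a key that hashes exactly onto a range edge must belong to exactly one shard. A minimal sketch, assuming half-open ranges over a 2^31 key space and CRC32 as the hash (both assumptions for illustration; mem3's actual hash function and ring size may differ):

```python
# Illustrative sketch only: hash a doc id into one of q contiguous,
# half-open shard ranges. RING_SIZE and the CRC32 hash are assumptions,
# not necessarily mem3's real choices.
import zlib

RING_SIZE = 2 ** 31  # assumed key space for this sketch

def shard_ranges(q):
    """Split [0, RING_SIZE) into q half-open ranges [begin, end)."""
    step = RING_SIZE // q
    return [(i * step, RING_SIZE if i == q - 1 else (i + 1) * step)
            for i in range(q)]

def shard_for(doc_id, q):
    """Return the index of the single shard range containing doc_id's hash."""
    key = zlib.crc32(doc_id.encode()) % RING_SIZE
    for i, (begin, end) in enumerate(shard_ranges(q)):
        # Half-open comparison: a key equal to a boundary value falls
        # into exactly one range, never zero or two.
        if begin <= key < end:
            return i
```

Because each range is half-open and adjacent ranges share their edge value, every key, including one landing precisely on a boundary, maps to exactly one shard.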
126.96.36.199 (BigCouch 0.2.0)
- Use more efficient timers
- Additional tests, removed some logging
188.8.131.52 (BigCouch 0.1.0)
- Initial release