Commits on Apr 15, 2011
  1. Project rename and move pidq => pooler

    committed Apr 14, 2011
Commits on Apr 1, 2011
  1. Fix some tests

    committed Apr 1, 2011
  2. Fix behavior when a consumer crashes.

    Also started adding some stats, which might provide a cleaner way to
    do some of the testing: take a pid, inspect the stats, induce a
    crash, and then inspect the stats again.
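
    The stats-driven test flow described above can be illustrated with a
    toy pool (plain Python, not pooler's actual Erlang API; the member
    representation and stat names are invented for illustration):

```python
class Pool:
    """Toy pool for illustrating the test flow: snapshot stats,
    induce a consumer crash, snapshot stats again."""

    def __init__(self, size):
        self.free = list(range(size))  # members not checked out
        self.in_use = set()            # members held by consumers
        self.crashed = 0               # consumer crashes observed

    def stats(self):
        return {"free": len(self.free),
                "in_use": len(self.in_use),
                "crashed": self.crashed}

    def take(self):
        # Check a member out to a consumer.
        member = self.free.pop()
        self.in_use.add(member)
        return member

    def consumer_crashed(self, member):
        # A crashed consumer's member is reclaimed into the free list,
        # and the crash counter is bumped.
        self.in_use.discard(member)
        self.free.append(member)
        self.crashed += 1

pool = Pool(size=2)
before = pool.stats()        # {'free': 2, 'in_use': 0, 'crashed': 0}
m = pool.take()
pool.consumer_crashed(m)
after = pool.stats()         # {'free': 2, 'in_use': 0, 'crashed': 1}
```

    A test can then assert on the difference between `before` and
    `after` instead of poking at the pool's internals directly.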
Commits on Oct 18, 2010
  1. Notes on how we would like to use Riak

    Hi Sean,
    Good chatting with you on IRC yesterday.  The upcoming secondary index
    feature you mentioned sounds like it would eliminate one of the
    remaining points of friction we encounter when trying to model our
    data in a Riak-efficient fashion.  Per your request, here are some
    notes on how we would like to use Riak.
    We are evaluating Riak for use as a backing store for Chef server as
    well as the store for the authorization service that is part of the
    Opscode Platform.  In both cases, the majority of operations can be
    modeled as simple key/value lookup.  However, both uses also require
    modeling collections that support add, delete, and list operations.
    Rather than describing the complete schema, I'll describe the use case
    around nodes in Chef as I think that captures the sort of thing we
    need in a few places...
    == Nodes in Chef ==
    Nodes are stored by unique name and have a typical value size of
    30 KB.  Suppose an organization has ~10K nodes and there are 50K orgs.
    The operations we need are as follows:
    - Concurrently add new nodes to an org.
    - Concurrently delete nodes.
    - List all nodes in an org.  For large node counts, we will want to
      paginate in some fashion to display the list in a web UI.  Nodes do
      not change orgs.
    - List all nodes in an org with a specified environment attribute.
      The environment attribute is part of the node data, but can be
      changed; a node edit can move a node from env1 to env2.
    Current assumption is that we would store nodes in an OrgName-Nodes
    bucket by id.
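
    Under that assumption, the collection operations could be sketched
    against a generic key/value layer (a plain dict standing in for the
    store; this is an illustration of the access patterns, not Riak's
    client API):

```python
# (bucket, key) -> value; stands in for a key/value store.
kv = {}

def node_bucket(org):
    # The "OrgName-Nodes" bucket-by-id layout described in the notes.
    return org + "-Nodes"

def add_node(org, node_id, node):
    kv[(node_bucket(org), node_id)] = node

def delete_node(org, node_id):
    kv.pop((node_bucket(org), node_id), None)

def list_nodes(org, offset=0, limit=50):
    # Paginated listing for the web UI.
    names = sorted(key for (bucket, key) in kv
                   if bucket == node_bucket(org))
    return names[offset:offset + limit]

def list_nodes_in_env(org, env):
    # Filtering on the mutable environment attribute is the operation
    # a secondary index would serve without a full scan.
    return sorted(key for (bucket, key), node in kv.items()
                  if bucket == node_bucket(org)
                  and node.get("environment") == env)

add_node("acme", "web1", {"environment": "prod"})
add_node("acme", "web2", {"environment": "dev"})
```

    The sketch makes the friction visible: add and delete are single-key
    writes, but both list operations require either a key scan or an
    index maintained elsewhere.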
    It is also worth mentioning that we provide users with the ability to
    search for nodes by arbitrary node attribute (nodes are loosely
    structured JSON documents).  We currently use Solr to index node data.
    For search, we need wild-carding ('?', '*') and Boolean query at
    minimum.  A while back I did a bit of experimenting with Riak Search,
    but quickly hit some rough edges in query parsing (the wrong
    analyzer, and difficulty with certain special characters that appear
    in almost all of our queries).
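
    For concreteness, the query shapes this refers to look roughly like
    the following (Lucene-style syntax as consumed by Solr; the field
    names are invented for illustration):

```python
# Hypothetical node-search queries showing the minimum features named
# above: '?' and '*' wildcards plus Boolean operators.
queries = [
    "role:web*",                  # trailing '*' wildcard
    "fqdn:app?.example.com",      # single-character '?' wildcard
    "role:web* AND NOT env:dev",  # Boolean combination
]
```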
    Does that give enough context?  If not, let me know how I can
    elaborate and we'll go from there.
    Best Wishes,
    + seth
    Seth Falcon | Senior Software Design Engineer | Opscode | @sfalcon
    committed Oct 18, 2010
Commits on Sep 13, 2010
Commits on Sep 9, 2010
  1. first test passing

    committed Sep 6, 2010
Commits on Sep 6, 2010
  1. Add html export of readme

    committed Sep 5, 2010
  2. initial

    committed Sep 5, 2010