Commits on Apr 2, 2011
  1. WIP on supervisor-based pools.

    pidq_sup is top-level and should eventually supervise the pidq
    gen_server as well as the pidq_pool_sup supervisor.
    
    You dynamically create pools by calling
    supervisor:start_child(pidq_pool_sup, [A1]) with A1 containing details
    of how to start children of that pool.
    
    pidq_pooled_worker_sup is a simple_one_for_one supervisor for the pooled
    workers.
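
    A rough sketch of the intended shape (module contents and the worker
    spec below are illustrative, not necessarily what will land):

        %% pidq_pooled_worker_sup: simple_one_for_one supervisor for the
        %% pooled workers of a single pool.
        -module(pidq_pooled_worker_sup).
        -behaviour(supervisor).
        -export([start_link/1, init/1]).

        start_link(Config) ->
            supervisor:start_link(?MODULE, Config).

        init({Mod, Fun, Args}) ->
            Worker = {Mod, {Mod, Fun, Args}, temporary, brutal_kill,
                      worker, [Mod]},
            {ok, {{simple_one_for_one, 1, 1}, [Worker]}}.

    Creating a pool would then look roughly like:

        %% Assumes pidq_pool_sup is itself simple_one_for_one, so the extra
        %% args are appended to its child start call.  A1 describes how to
        %% start that pool's workers.
        A1 = {my_worker, start_link, []},
        {ok, _PoolSup} = supervisor:start_child(pidq_pool_sup, [A1]).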
    seth committed Apr 2, 2011
Commits on Apr 1, 2011
  1. Remove generated readme html

    seth committed Apr 1, 2011
  2. update rebar

    seth committed Apr 1, 2011
  3. Fix some tests

    seth committed Apr 1, 2011
  4. Fix behavior when a consumer crashes.

    Also started adding some stats, which may provide a cleaner way to do
    some of the testing: take a pid and inspect the stats, induce a crash,
    then inspect the stats again.
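
    A sketch of how such a stats-driven test might look (pidq:take_pid/0
    and pidq:pool_stats/0 are assumed names here, not a settled API):

        -module(pidq_crash_test).
        -include_lib("eunit/include/eunit.hrl").

        consumer_crash_test() ->
            Before = pidq:pool_stats(),
            %% a consumer takes a pid and then crashes on demand
            Consumer = spawn(fun() ->
                                     _P = pidq:take_pid(),
                                     receive crash -> exit(induced_crash) end
                             end),
            Consumer ! crash,
            timer:sleep(100),  %% give the pool time to notice the 'DOWN'
            After = pidq:pool_stats(),
            ?assert(Before =/= After).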
    
    Hi Sean,
    
    Good chatting with you on IRC yesterday.  The upcoming secondary index
    feature you mentioned sounds like it would eliminate one of the
    remaining points of friction we encounter when trying to model our
    data in a Riak-efficient fashion.  Per your request, here are some
    notes on how we would like to use Riak.
    
    We are evaluating Riak for use as a backing store for Chef server as
    well as the store for the authorization service that is part of the
    Opscode Platform.  In both cases, the majority of operations can be
    modeled as simple key/value lookup.  However, both uses also require
    modeling collections that support add, delete, and list operations.
    
    Rather than describing the complete schema, I'll describe the use case
    around nodes in Chef as I think that captures the sort of thing we
    need in a few places...
    
    == Nodes in Chef ==
    
    Nodes are stored by unique name and have a typical value size of
    30 KB.  Suppose an organization has ~10K nodes and there are 50K orgs.
    The operations we need are as follows:
    
    - Concurrently add new nodes to an org.
    
    - Concurrently delete nodes.
    
    - List all nodes in an org.  For large node counts, we will want to
      paginate in some fashion to display the list in a web UI.  Nodes do
      not change orgs.
    
    - List all nodes in an org with a specified environment attribute.
      The environment attribute is part of the node data, but can be
      changed; a node edit can move a node from env1 to env2.
    
    Current assumption is that we would store nodes in an OrgName-Nodes
    bucket by id.
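
    For concreteness, the plain key/value side might look roughly like
    the following with the riakc Erlang client (bucket and key names are
    made up for illustration; listing keys this way is the naive option):

        {ok, Riak} = riakc_pb_socket:start_link("127.0.0.1", 8087),
        Bucket = <<"my_org-nodes">>,
        NodeId = <<"node-001">>,
        %% add or update a node document (JSON body elided)
        Obj = riakc_obj:new(Bucket, NodeId,
                            <<"{\"environment\":\"env1\"}">>,
                            "application/json"),
        ok = riakc_pb_socket:put(Riak, Obj),
        %% fetch and delete by key
        {ok, _Fetched} = riakc_pb_socket:get(Riak, Bucket, NodeId),
        ok = riakc_pb_socket:delete(Riak, Bucket, NodeId),
        %% naive "list all nodes in an org"; expensive at this scale, which
        %% is why the listing and environment queries above are the
        %% interesting part
        {ok, _Keys} = riakc_pb_socket:list_keys(Riak, Bucket).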
    
    It is also worth mentioning that we provide users with the ability to
    search for nodes by arbitrary node attribute (nodes are loosely
    structured JSON documents).  We currently use Solr to index node data.
    For search, we need wild-carding ('?', '*') and Boolean queries at a
    minimum.  A while back I did a bit of experimenting with Riak Search,
    but quickly got stuck on some rough edges in query parsing (the wrong
    analyzer and difficulty with certain special characters that happen to
    be in almost all of our queries).
    
    Does that give enough context?  If not, let me know how I can
    elaborate and we'll go from there.
    
    Best Wishes,
    
    + seth
    
    --
    Seth Falcon | Senior Software Design Engineer | Opscode | @sfalcon
    seth committed Oct 18, 2010
Commits on Sep 13, 2010
Commits on Sep 9, 2010
  1. first test passing

    seth committed Sep 7, 2010
Commits on Sep 6, 2010
  1. Add html export of readme

    seth committed Sep 6, 2010
  2. initial

    seth committed Sep 6, 2010