branch: az1047-raise-g…
Commits on Dec 30, 2011
  1. @jtuple
Commits on Dec 20, 2011
  1. @jaredmorrow
Commits on Dec 19, 2011
  1. @rzezeski
Commits on Dec 16, 2011
  1. @rzezeski

    Convert new_claim to act as pass-thru to claim module

    rzezeski authored
    Think about removing this module in a future major version, with a warning
    in the release notes for anyone referencing this module.
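    A pass-thru of this shape might look roughly like the sketch below; the
    delegated function names are assumptions, not a copy of the actual
    riak_core_new_claim source.

        %% Sketch only: riak_core_new_claim delegating to riak_core_claim.
        %% The v2 function names are assumed here.
        -module(riak_core_new_claim).
        -export([new_wants_claim/2, new_choose_claim/2]).

        new_wants_claim(Ring, Node) ->
            riak_core_claim:wants_claim_v2(Ring, Node).

        new_choose_claim(Ring, Node) ->
            riak_core_claim:choose_claim_v2(Ring, Node).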
  2. @rzezeski

    Revert "Remove new_claim module, everything was moved into claim module"

    rzezeski authored
    This reverts commit b6409ca.
    
    Since we've already introduced this module name, there may be people using
    it already, and we don't want to break their systems.
  3. @rzezeski
  4. @rzezeski

    Default to v2 claim, update QC tests, fix bug in select_indices

    rzezeski authored
    1. The new default claim is now set to v2.
    
    2. The semantics of wants_claim changed, so I had to update the wants_claim
       test.  Essentially, the old wants_claim was simply an indicator of whether
       the ring is imbalanced at all and would return `{yes,0}` if it is.  The new
       wants_claim is more true to its name in that it returns `{yes,N}`, meaning
       the node would like to claim `N` partitions.
    
    3. Based on the unique-nodes property, there was an edge case in the situation
       where there are 16 partitions and 15 nodes.  I'm not sure whether this edge
       case would appear in other situations.  Anyway, the way select_indices was
       written, when the 15th node went to claim, it would determine that there was
       no safe partition it could claim and would then perform a rebalance
       (diagonalize).  However, a rebalance doesn't make any guarantee about keeping
       the target_n invariant on wrap-around, so you would end up with the last and
       first partitions being owned by the same node.  The problem was that
       select_indices assumed that the first owner could give up its partition
       `First = (LastNth =:= Nth)`, but that wouldn't hold true, and then no other
       partition could be claimed because they would all be within target_n of the
       LastNth/FirstNth.  My change is to pass an explicit flag in the accumulator
       that represents whether or not the node has claimed anything yet (see the
       sketch below).  This makes the (possibly incorrect) assumption that the node
       never currently owns anything when `select_indices` is called.  I was able to
       get a 500K-iteration run of the QC property to pass, but I do wonder whether
       things could be different in production.  After talking with Joe, he seemed
       to think the change was safe.
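    A rough sketch of the flag idea, in isolation from the real select_indices
    (helper and variable names here are illustrative, and the spacing check
    ignores wrap-around for brevity):

        %% Sketch only: carry an explicit "has this node claimed anything yet?"
        %% flag in the fold accumulator instead of inferring the first claim
        %% from LastNth =:= Nth, which breaks on wrap-around.
        select_indices_sketch(Candidates, TargetN) ->
            {_, Claimed} =
                lists:foldl(
                  fun({Nth, Idx}, {HasClaimed, Acc}) ->
                          case (not HasClaimed) orelse far_enough(Nth, Acc, TargetN) of
                              true  -> {true, [{Nth, Idx} | Acc]};
                              false -> {HasClaimed, Acc}
                          end
                  end,
                  {false, []},
                  Candidates),
            [Idx || {_, Idx} <- lists:reverse(Claimed)].

        %% Simplified spacing check: every index already claimed in this pass
        %% must be at least TargetN positions away from the candidate.
        far_enough(Nth, Claimed, TargetN) ->
            lists:all(fun({N, _}) -> abs(Nth - N) >= TargetN end, Claimed).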
  5. @rzezeski
  6. @rzezeski

    Add 1 & 2 arity claim APIs

    rzezeski authored
    The claim APIs currently require both 1 & 2 arity functions
    because of the two different ways legacy gossip and new gossip
    call claim.
    
    The reason both default and v1 are exported is that the default will soon
    be v2, and you still need a way to allow the user to set the claim
    algorithm back to v1.
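    For anyone who does need to pin the algorithm, the usual riak_core mechanism
    is the application environment.  A hedged example follows; the
    wants_claim_fun/choose_claim_fun keys and the v1 function names are
    assumptions to verify against the riak_core version in use.

        %% app.config sketch: pin the claim algorithm to v1 via
        %% {Module, Function} pairs in the riak_core application environment.
        [
         {riak_core, [
             {wants_claim_fun,  {riak_core_claim, wants_claim_v1}},
             {choose_claim_fun, {riak_core_claim, choose_claim_v1}}
         ]}
        ].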
  7. @rzezeski

    Rename current claim algo to v1

    rzezeski authored
    The new claim algorithm in riak_core_new_claim is going to replace the
    current default.  Rename the current algorithm to v1; the new one will
    later be added as v2.
  8. @rzezeski

    Comment out spiraltime QC

    rzezeski authored
    This test always results in a timeout for me.
    For now just don't run it.
Commits on Dec 15, 2011
  1. @rzezeski
  2. @rzezeski

    Set default handoff_concurrency to 1

    rzezeski authored
    We've found that under extreme load our default handoff concurrency, paired
    with the fact that no incoming throttling is currently done, can cause the
    node to become overloaded and latency to spike.  Rather than ship a
    potentially harmful default, Riak should err on the side of safety.
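    Operators who know their load profile can raise the value back up per node;
    a minimal app.config sketch, using the handoff_concurrency riak_core
    application variable whose default this commit changes:

        %% app.config sketch: allow up to 4 concurrent outbound handoffs
        %% instead of the new default of 1.
        [
         {riak_core, [
             {handoff_concurrency, 4}
         ]}
        ].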
  3. @rustyio

    Merge pull request #121 from basho/AZ1011-synchronous-vnode-startup

    rustyio authored
    Synchronous VNode Startup (AZ1011)
  4. @rustyio
  5. @rustyio
  6. @rustyio
  7. @rustyio
  8. @rustyio

    Update riak_core:register/N to trigger a ring_update and start vnodes; move wait_for_app/N into riak_core, add wait_for_service/N, add logging to both.

    rustyio authored
    
    AZ1011
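    A hedged sketch of how an application's start/2 callback might use these
    calls after this change; myapp, myapp_sup, and myapp_vnode are placeholder
    names, and the exact arities and return values should be checked against
    the riak_core version in use.

        %% Sketch only: riak_core-based application startup.
        -module(myapp_app).
        -behaviour(application).
        -export([start/2, stop/1]).

        start(_Type, _Args) ->
            {ok, Pid} = myapp_sup:start_link(),
            %% Per this commit, registering the vnode module also triggers a
            %% ring_update and starts the vnodes.
            riak_core:register(myapp, [{vnode_module, myapp_vnode}]),
            %% Block until the node watcher reports the service as up.
            riak_core:wait_for_service(myapp),
            {ok, Pid}.

        stop(_State) ->
            ok.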
Commits on Dec 14, 2011
  1. @rustyio

    Change startup so that the application and vnodes start before the service is declared as 'up'.

    rustyio authored
    
    AZ1011
Commits on Dec 9, 2011
  1. @jonmeredith
Commits on Nov 21, 2011
  1. @rzezeski
Commits on Nov 17, 2011
  1. @jaredmorrow

    Roll version 1.0.2

    jaredmorrow authored
Commits on Nov 12, 2011
  1. @jtuple @rzezeski

    Add ability to throttle gossip rate

    jtuple authored rzezeski committed
Commits on Nov 4, 2011
  1. @jonmeredith
  2. @jonmeredith

    Replace incorrect {noreply, State} with continue(State) in riak_core_vnode.

    jonmeredith authored
    
    Corrects the handle_info return when the vnode is forwarding and
    has deleted the state.
    
    Fixes: bz://1227
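    For context, riak_core_vnode is a gen_fsm, so handle_info/3 must return a
    gen_fsm-shaped tuple.  The sketch below shows only the shape of the fix;
    the real clause, message pattern, and continue/1 body differ (for example,
    the real helper also carries an inactivity timeout).

        %% Sketch only: a gen_server-style {noreply, State} is not a valid
        %% gen_fsm handle_info/3 return; continue/1 builds the proper tuple.
        handle_info(_Info, _StateName, State) ->
            continue(State).           %% was: {noreply, State}

        continue(State) ->
            {next_state, active, State}.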
Commits on Nov 2, 2011
  1. @rustyio
  2. @jaredmorrow
Commits on Oct 28, 2011
  1. @rustyio

    Merge pull request #104 from basho/AZ895-add-siblings-and-objsize-stats

    rustyio authored
    Allow the caller to specify Min/Max/Bins as a parameter to mean_and_nines/N.
  2. @rustyio

    Add a rounding mode to histogram percentile calculations, allow the caller to choose whether we round up or down.

    rustyio authored
    
    AZ895
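    As a generic illustration of the up-versus-down choice when a requested
    percentile falls between histogram bins (not the actual riak_core
    histogram code):

        %% Sketch only: pick the bin index for a percentile, rounding the
        %% fractional position down or up as requested by the caller.
        percentile_index(Pct, NumBins, down) ->
            max(1, trunc(Pct * NumBins));
        percentile_index(Pct, NumBins, up) ->
            Down = trunc(Pct * NumBins),
            case Pct * NumBins == Down of
                true  -> max(1, Down);
                false -> min(NumBins, Down + 1)
            end.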
  3. @rustyio
Commits on Oct 25, 2011
  1. @rzezeski
  2. @rzezeski

    Fix a couple of specs

    rzezeski authored
    Fix two specs with incorrect return types.  These specs are giving me problems
    upstream while trying to spec out riak_search.
Commits on Oct 20, 2011
  1. @jaredmorrow

    Rolling version to 1.0.1

    jaredmorrow authored
Commits on Oct 11, 2011
  1. @jtuple

    Merge pull request #102 from basho/bz1242-stalled-handoff

    jtuple authored
    Fix BZ1242: Stalled handoff when ring is fixed-up