branch: 13311-improve-…
Commits on Nov 20, 2012
  1. @kocolosk

    Preserve original implementation in fabric_rpc

    kocolosk authored
    Coordinators on nodes running the old release will be the only ones
    hitting fabric_rpc for views, and those coordinators will be expecting
    the original API.
    
    BugzID: 13311
  2. @kocolosk

    Use fabric_rpc2 endpoints

    kocolosk authored
    BugzID: 13311
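The two commits above split the view RPC surface across an upgrade: fabric_rpc keeps its original API for coordinators still on the old release, while upgraded coordinators address fabric_rpc2. A minimal sketch of the coordinator side, assuming the 4-arity fabric_util:submit_jobs from the May 13 commit below; the map_view endpoint and its argument list are illustrative, not the real fabric_rpc2 signature:

    %% Sketch only: an upgraded coordinator targets the fabric_rpc2 module,
    %% leaving fabric_rpc untouched for old-release coordinators.
    -module(fabric_rpc2_usage_sketch).
    -export([go/4]).

    go(DbName, DDoc, ViewName, Args) ->
        Shards = mem3:shards(DbName),
        %% map_view and the argument list are illustrative
        fabric_util:submit_jobs(Shards, fabric_rpc2, map_view,
                                [DDoc, ViewName, Args]).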
Commits on Jun 14, 2012
  1. @davisp

    Use rexi:stream/1 for view backpressure

    davisp authored
    This uses the new rexi:stream/1 API to allow rexi workers to stream
    results back to the coordinator process. This is intended to reduce the
    sensitivity of views to RTT between nodes involved in a view response.
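A minimal sketch of the worker side of this change. The fold-callback shape is an assumption; the point is that each row goes through rexi:stream/1, which blocks once too many rows are unacknowledged by the coordinator, and that blocking is the backpressure:

    -module(view_stream_sketch).
    -export([view_fold/2]).

    %% Assumed callback shape; rexi:stream/1 and rexi:reply/1 are the real
    %% rexi calls being relied on here.
    view_fold({row, Row}, Acc) ->
        %% blocks until the coordinator has kept up with earlier rows
        rexi:stream({view_row, Row}),
        {ok, Acc};
    view_fold(complete, Acc) ->
        rexi:reply(complete),
        {ok, Acc}.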
Commits on Jun 5, 2012
  1. @kocolosk

    Add upgrade instructions

    kocolosk authored
    BugzID: 13177
  2. @kocolosk

    Use mem3:live_shards/2

    kocolosk authored
  3. @kocolosk
  4. @kocolosk
Commits on Jun 1, 2012
  1. @kocolosk

    Merge branch '1.6.x'

    kocolosk authored
    Conflicts:
    	rebar.config
    	src/fabric_db_doc_count.erl
    	src/fabric_db_info.erl
    	src/fabric_group_info.erl
    	src/fabric_rpc.erl
    	src/fabric_util.erl
    	src/fabric_view_changes.erl
    
    BugzID: 13177
Commits on May 24, 2012
  1. @kocolosk

    Merge pull request #48 from cloudant/13586-chunked-encoded-data

    kocolosk authored
    Handle nulls that occur in embedded ids
    
    BugzID: 13586
Commits on May 18, 2012
  1. @rnewson

    Merge pull request #49 from cloudant/dreyfus

    rnewson authored
    Dreyfus
Commits on May 14, 2012
  1. Handle nulls that occur in embedded ids

    Bob Dionne authored
    fabric:docid calls erlang:error if an id is null, so it seems like
    we should not even bother to spawn a process to call open in this
    case, since null is a common value produced by JS errors (see the
    sketch after this list).

    BugzID: 13586
  2. @kocolosk

    Merge pull request #46 from cloudant/13125-bubble-not-found-errors

    kocolosk authored
    Bubble db not found errors
    
    BugzID: 13125
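A sketch of the null-id guard described in the "Handle nulls that occur in embedded ids" commit above. The module and the partitioning shown here are hypothetical, and fabric:open_doc/3 merely stands in for the real worker spawn:

    -module(null_id_sketch).
    -export([open_docs/2]).

    open_docs(DbName, Ids) ->
        %% answer null ids locally instead of spawning a worker that would
        %% only crash inside fabric:docid
        {Nulls, Real} = lists:partition(fun(Id) -> Id =:= null end, Ids),
        [{null, {error, not_found}} || _ <- Nulls]
            ++ [{Id, fabric:open_doc(DbName, Id, [])} || Id <- Real].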
Commits on May 13, 2012
  1. Allow caller to specify module for submit_jobs

    Robert Newson authored and committed
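Assumed shape of this change, sketched from the commit title: submit_jobs keeps its original 3-arity form defaulting to fabric_rpc, and a 4-arity form lets callers (fabric_rpc2, or the Dreyfus code merged above) name the RPC module. The #shard{} fields and the include path are taken on faith from mem3:

    -module(submit_jobs_sketch).
    -export([submit_jobs/3, submit_jobs/4]).
    -include_lib("mem3/include/mem3.hrl").  % assumed include path

    submit_jobs(Shards, EndPoint, ExtraArgs) ->
        %% existing callers keep getting fabric_rpc, as before
        submit_jobs(Shards, fabric_rpc, EndPoint, ExtraArgs).

    submit_jobs(Shards, Module, EndPoint, ExtraArgs) ->
        [S#shard{ref = rexi:cast(N, {Module, EndPoint, [Name | ExtraArgs]})}
         || #shard{node = N, name = Name} = S <- Shards].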
Commits on May 12, 2012
  1. @davisp

    Bubble db not found errors

    davisp authored
    When requesting the _changes feed of a deleted database with a continuous
    or longpoll style we would return a `400 Bad Request` error instead of
    the correct `404 Not Found`. This was because we were swallowing all
    errors when validating the `since` parameter. This patch just catches
    the exception and returns it appropriately.
    
    BugzId: 13125
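A sketch of the fix described above; the Decode fun stands in for the real since-parsing code, whose name is not shown in this log:

    -module(changes_since_sketch).
    -export([parse_since/2]).

    parse_since(Decode, Since) ->
        try
            Decode(Since)
        catch
            error:database_does_not_exist ->
                %% bubble the not_found so the HTTP layer answers 404
                throw({not_found, <<"Database does not exist.">>});
            _:_ ->
                %% anything else really is a malformed since value
                throw({bad_request, <<"Invalid value for `since`.">>})
        end.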
Commits on May 8, 2012
  1. @kocolosk

    Remove custom appup

    kocolosk authored
  2. @kocolosk

    Use only shards on live nodes in send_changes

    Bob Dionne authored, kocolosk committed
    BugzID: 13525
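A sketch of the shard selection described above, combined with the mem3:live_shards/2 call named in the Jun 5 commit; the endpoint and argument shape handed to submit_jobs are assumptions:

    -module(live_shards_sketch).
    -export([changes_workers/2]).

    changes_workers(DbName, PackedSeqs) ->
        %% only consider shard copies that live on currently connected nodes
        LiveShards = mem3:live_shards(DbName, [node() | nodes()]),
        fabric_util:submit_jobs(LiveShards, changes, [PackedSeqs]).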
Commits on May 7, 2012
  1. @kocolosk

    Merge pull request #45 from cloudant/13525-use-live-nodes-shard-replacement

    kocolosk authored
    Use only shards on live nodes in send_changes
    
    BugzID: 13525
Commits on May 2, 2012
  1. Use only shards on live nodes in send_changes

    Bob Dionne authored
    BugzID: 13525
Commits on May 1, 2012
  1. @kocolosk

    Merge pull request #42 from cloudant/13470-make-get_dbs-zone-aware_master

    kocolosk authored
    Merge pull request #39 from cloudant/13470-make-get_dbs-zone-aware
    
    BugzID: 13470
Commits on Apr 24, 2012
  1. @kocolosk
  2. @kocolosk
  3. @kocolosk
  4. @kocolosk
  5. @kocolosk
  6. @davisp

    Add timeout for known length attachment uploads

    davisp authored
    This is a similar error condition to the one fixed earlier today for
    chunked attachments. It seems to happen less often, but I did see it
    while debugging.
Commits on Apr 23, 2012
  1. @kocolosk
  2. @davisp

    Timeout chunked attachment uploads

    davisp authored
    It's possible that the chunked attachment writers would get lost waiting
    to receive a message that never arrived. This would end up leaving an
    orphaned process that would hold open database shard copies, which
    prevented them from being freed by delete_nicely.

    This patch just inserts a ten-minute time limit on waiting for the next
    message before the writer gives up and exits. It's important to note that
    this is just the length of time between messages carrying data for the
    attachment (which should be about 4K each) and not a time limit for the
    entire upload.
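A sketch of the idle timeout described above. The message shapes and the file handle are stand-ins; only the ten-minute between-messages limit comes from the commit:

    -module(att_writer_sketch).
    -export([write_loop/1]).

    -define(IDLE_TIMEOUT, 600000).  % ten minutes between attachment chunks

    write_loop(Fd) ->
        receive
            {chunk, Bin} ->
                ok = file:write(Fd, Bin),
                write_loop(Fd);
            done ->
                ok
        after ?IDLE_TIMEOUT ->
            %% give up instead of orphaning the writer and pinning shard
            %% files open past delete_nicely
            exit(timeout)
        end.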
Commits on Apr 19, 2012
  1. Merge pull request #39 from cloudant/13470-make-get_dbs-zone-aware

    Robert Newson authored
    13470 make get dbs zone aware
  2. @davisp

    Upgrade to new mem3 shards API

    davisp authored
    BugzId: 13414
Commits on Apr 18, 2012
  1. @rnewson

    Merge pull request #39 from cloudant/13470-make-get_dbs-zone-aware

    rnewson authored
    13470 make get dbs zone aware
  2. Describe the algorithm

    Robert Newson authored
Commits on Apr 17, 2012
  1. Make fabric_util:get_db zone aware

    Robert Newson authored
    Use the new mem3:group_by_proximity method to prefer local, then
    zone-local nodes over zone-remote nodes.
    
    BugzID: 13470
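A sketch of the preference order this commit describes, assuming mem3:group_by_proximity/1 partitions a shard list into {Local, SameZone, DifferentZone}; the real get_db would then try each candidate in turn:

    -module(zone_aware_sketch).
    -export([candidates/1]).

    candidates(DbName) ->
        {Local, SameZone, Remote} = mem3:group_by_proximity(mem3:shards(DbName)),
        %% prefer local copies, then zone-local copies, then zone-remote ones
        Local ++ SameZone ++ Remote.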
Commits on Apr 13, 2012
  1. @kocolosk

    Ignore 'complete' messages from suppressed workers

    kocolosk authored
    If we have a database where one copy of a partition contributes zero
    rows to a reduce view but another copy of the same partition contributes
    one or more rows, we can end up erroneously removing a worker which has
    already contributed a row to the response.  This puts the coordinator into
    an inconsistent state and can ultimately cause the response to hang.

    The fix is simple -- just guard processing of 'complete' messages by
    checking if the worker sending the message has already lost the race to
    another copy (see the sketch after this list).

    BugzID: 13461
  2. @kocolosk
  3. @kocolosk

    Ignore 'complete' messages from suppressed workers

    kocolosk authored
    If we have a database where one copy of a partition contributes zero
    rows to a reduce view but another copy of the same partition contributes
    one or more rows, we can end up erroneously removing a worker which has
    already contributed a row to the response.  This puts the coordinator into
    an inconsistent state and can ultimately cause the response to hang.
    
    The fix is simple -- just guard processing of 'complete' messages by
    checking if the worker sending the message has already lost the race to
    another copy.
    
    BugzID: 13461
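A sketch of the 'complete' guard described in the commits above, with a plain orddict standing in for the coordinator's fabric_dict of workers:

    -module(complete_guard_sketch).
    -export([handle_complete/2]).

    handle_complete(Worker, Counters) ->
        case orddict:is_key(Worker, Counters) of
            false ->
                %% this worker already lost the race to another copy of its
                %% partition, so its 'complete' must not touch the state
                {ok, Counters};
            true ->
                {ok, orddict:update_counter(Worker, 1, Counters)}
        end.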