Commits on Nov 20, 2012
  1. Preserve original implementation in fabric_rpc

    Coordinators on nodes running the old release will be the only ones
    hitting fabric_rpc for views, and those coordinators will be expecting
    the original API.
    
    BugzID: 13311
    kocolosk committed Nov 20, 2012
  2. Use fabric_rpc2 endpoints

    BugzID: 13311
    kocolosk committed Nov 20, 2012
Commits on Jun 14, 2012
  1. Use rexi:stream/1 for view backpressure

    This uses the new rexi:stream/1 API to allow rexi workers to stream
    results back to the coordinator process. This is intended to reduce the
    sensitivity of views to RTT between nodes involved in a view response.
    davisp committed Jun 12, 2012
Commits on Jun 5, 2012
  1. Add upgrade instructions

    BugzID: 13177
    kocolosk committed Jun 5, 2012
  2. Use mem3:live_shards/2

    kocolosk committed Jun 5, 2012
  3. Prepare for Options proplist instead of changing records

    BugzID: 13177
    kocolosk committed Jun 5, 2012
  4. Use #view_query_args.extra instead of adding a new field

    BugzID: 13177
    kocolosk committed Jun 5, 2012
Commits on Jun 1, 2012
  1. Merge branch '1.6.x'

    Conflicts:
    	rebar.config
    	src/fabric_db_doc_count.erl
    	src/fabric_db_info.erl
    	src/fabric_group_info.erl
    	src/fabric_rpc.erl
    	src/fabric_util.erl
    	src/fabric_view_changes.erl
    
    BugzID: 13177
    kocolosk committed Jun 1, 2012
Commits on May 24, 2012
  1. Merge pull request #48 from cloudant/13586-chunked-encoded-data

    Handle nulls that occur in embedded ids
    
    BugzID: 13586
    kocolosk committed May 24, 2012
Commits on May 18, 2012
  1. Merge pull request #49 from cloudant/dreyfus

    Dreyfus
    Robert Newson committed May 18, 2012
Commits on May 14, 2012
  1. Handle nulls that occur in embedded ids

    fabric:docid calls erlang:error if an id is null, so we should not
    bother to spawn a process to call open in this case, since null is
    a common value resulting from JS errors.
    
    BugzID: 13586
    Bob Dionne committed May 14, 2012
  2. Merge pull request #46 from cloudant/13125-bubble-not-found-errors

    Bubble db not found errors
    
    BugzID: 13125
    kocolosk committed May 14, 2012
Commits on May 13, 2012
  1. Allow caller to specify module for submit_jobs

    Robert Newson committed Mar 29, 2012
Commits on May 12, 2012
  1. Bubble db not found errors

    When requesting the _changes feed of a deleted database with a continuous
    or longpoll style we would return a `400 Bad Request` error instead of
    the correct `404 Not Found`. This was because we were swallowing all
    errors when validating the `since` parameter. This patch catches the
    exception and returns it appropriately.
    
    BugzID: 13125
    davisp committed May 12, 2012
Commits on May 8, 2012
  1. Remove custom appup

    kocolosk committed May 8, 2012
  2. Use only shards on live nodes in send_changes

    BugzID: 13525
    Bob Dionne committed with kocolosk May 2, 2012
Commits on May 7, 2012
  1. Merge pull request #45 from cloudant/13525-use-live-nodes-shard-replacement
    
    Use only shards on live nodes in send_changes
    
    BugzID: 13525
    kocolosk committed May 7, 2012
Commits on May 2, 2012
  1. Use only shards on live nodes in send_changes

    BugzID: 13525
    Bob Dionne committed May 2, 2012
Commits on May 1, 2012
  1. Merge pull request #42 from cloudant/13470-make-get_dbs-zone-aware_master
    
    Merge pull request #39 from cloudant/13470-make-get_dbs-zone-aware
    
    BugzID: 13470
    kocolosk committed May 1, 2012
Commits on Apr 24, 2012
  1. Merge pull request #41 from cloudant/13414-mem3-cache-lru

    BugzID: 13414
    kocolosk committed Apr 24, 2012
  2. Add timeout for known length attachment uploads

    Similar error condition as fixed earlier today for chunked attachments.
    This one seems to happen less often but I did see it while debugging.
    davisp committed Apr 24, 2012
Commits on Apr 23, 2012
  1. Timeout chunked attachment uploads

    It's possible that the chunked attachment writers would get lost waiting
    to receive a message that never arrived. This would end up leaving an
    orphaned process that would hold open database shard copies which
    prevented them from being freed by delete_nicely.
    
    This patch inserts a ten-minute time limit on waiting for the next
    message before it gives up and exits. It's important to note that this is
    just the length of time between messages carrying data for the
    attachment (which should be about 4K each) and not a time limit for the
    entire upload.
    davisp committed Apr 23, 2012
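
    The per-message timeout described above corresponds to Erlang's standard
    `receive ... after` construct; a minimal hypothetical sketch (the module,
    message, and exit-reason names are illustrative, not fabric's actual ones):

    ```erlang
    -module(attachment_receiver).
    -export([loop/1]).

    %% Ten minutes between attachment data messages, in milliseconds.
    -define(CHUNK_TIMEOUT, 600000).

    %% Wait for the next data chunk (roughly 4K each). If no message
    %% arrives within the timeout, exit so the process does not linger
    %% holding shard copies open.
    loop(Acc) ->
        receive
            {chunk, Data} ->
                loop([Data | Acc]);
            done ->
                lists:reverse(Acc)
        after ?CHUNK_TIMEOUT ->
            exit(attachment_request_timeout)
        end.
    ```

    The key point, matching the commit message, is that the clock resets on
    every received chunk, so the total upload time is unbounded as long as
    data keeps flowing.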
Commits on Apr 19, 2012
  1. Merge pull request #39 from cloudant/13470-make-get_dbs-zone-aware

    13470 make get dbs zone aware
    Robert Newson committed Apr 19, 2012
  2. Upgrade to new mem3 shards API

    BugzID: 13414
    davisp committed Mar 27, 2012
Commits on Apr 18, 2012
  1. Merge pull request #39 from cloudant/13470-make-get_dbs-zone-aware

    13470 make get dbs zone aware
    Robert Newson committed Apr 18, 2012
  2. Describe the algorithm

    Robert Newson committed Apr 18, 2012
Commits on Apr 17, 2012
  1. Make fabric_util:get_db zone aware

    Use the new mem3:group_by_proximity method to prefer local, then
    zone-local nodes over zone-remote nodes.
    
    BugzID: 13470
    Robert Newson committed Apr 17, 2012
Commits on Apr 13, 2012
  1. Ignore 'complete' messages from suppressed workers

    If we have a database where one copy of a partition contributes zero
    rows to a reduce view but another copy of the same partition contributes
    one or more rows we can end up erroneously removing a worker which has
    already contributed a row to a response.  This puts the coordinator into
    an inconsistent state and can ultimately cause the response to hang.
    
    The fix is simple -- just guard processing of 'complete' messages by
    checking if the worker sending the message has already lost the race to
    another copy.
    
    BugzID: 13461
    kocolosk committed Apr 13, 2012