Saving two docs via bulkDocs causes two replication requests #7095

Closed
garethbowen opened this Issue Feb 19, 2018 · 1 comment

garethbowen commented Feb 19, 2018

Issue

I'm creating two new documents and saving them using the bulkDocs API. In the background I have a continuous replication to the server. When the docs are saved, PouchDB makes two replication requests, one for each doc, when it should make a single request containing both docs.

  1. This is bad for performance: one request is cheaper than two, and CouchDB write performance is better when it receives both docs in a single request.
  2. This causes a bug in my code: the two documents hold a circular reference to each other, so if only one exists, even for a short period of time, the code doesn't operate correctly.
  3. While not breaking the spec, this is inconsistent with how CouchDB operates: when two docs are saved together, CouchDB responds to changes requests with both docs.

Info

  • Environment: browser
  • Platform: Chrome
  • Adapter: IndexedDB
  • Server: CouchDB 2.1

Reproduce

As above.

Possible fix

In the IDB runBatchedCursor implementation the batch size is passed as -1, which makes useGetAll false, so the code falls back to a cursor with an effective batch size of 1. If I change useGetAll to true, the code behaves as I'd expect. I propose changing the code to the following (or equivalent):

  var useGetAll = typeof objectStore.getAll === 'function' &&
    typeof objectStore.getAllKeys === 'function' &&
    (batchSize > 1 || batchSize === -1) && !descending;

NB: There is a comment above the block which states:

... batchSize is -1 (i.e. batchSize unlimited, not really clear the user wants a batched approach where the entire DB is read into memory, perhaps they are filtering on a per-doc basis)

I'm not really sure what this means or if the proposed change will have unintended consequences.
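To make the effect of the proposed change concrete, here is a minimal, self-contained model of the useGetAll decision. The objectStore feature checks are stubbed out as a single hasGetAll flag, and the function names are illustrative, not PouchDB's actual internals:

```javascript
// Illustrative model of the useGetAll decision in runBatchedCursor.
// hasGetAll stands in for the getAll()/getAllKeys() feature checks
// on the real objectStore.

// Current behaviour: a batchSize of -1 fails the batchSize > 1 check,
// so the code falls back to a cursor that reads one doc per batch.
function useGetAllCurrent(hasGetAll, batchSize, descending) {
  return hasGetAll && batchSize > 1 && !descending;
}

// Proposed behaviour: treat -1 ("unlimited") as eligible for getAll(),
// so the whole result set is fetched in a single batch.
function useGetAllProposed(hasGetAll, batchSize, descending) {
  return hasGetAll && (batchSize > 1 || batchSize === -1) && !descending;
}

console.log(useGetAllCurrent(true, -1, false));  // false: cursor path
console.log(useGetAllProposed(true, -1, false)); // true: single getAll() batch
```

With the proposed predicate, the descending case and explicit small batch sizes behave exactly as before; only the -1 case changes.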


daleharvey commented Feb 19, 2018

I vaguely remember us having this optimisation before. However, these optimisations are generally fairly hard to maintain; happy to take a look at a PR if you think you can get this working.

garethbowen added a commit that referenced this issue Feb 21, 2018

(#7095) - Reduce replication to server requests
This solves an issue where a bulkDocs request of two docs with a
continuous replication to the server causes the docs to be sent
in two replication requests rather than one.

Firstly this modifies the runBatchedCursor implementation to use
getAll() and getAllKeys() when a batchSize of -1 (or infinite) is
given. This means the onBatch callback will fire once instead of
once per doc.

Secondly this changes the idb changes implementation to call the
onChange handler in the same tick so all changes get added to the
pending batch before it's processed.
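The second change can be illustrated with a toy model of the pending-batch logic. This is a sketch of the scheduling idea only, not PouchDB's actual changes code: if each onChange fires in a later tick, the pending batch is flushed between them, producing one replication request per doc; firing both in the same tick lets both changes land in one batch.

```javascript
// Toy model: changes delivered per "tick", with the pending batch
// flushed (one replication request) at the end of each tick.
function replicate(changeBatches) {
  const requests = [];
  for (const tick of changeBatches) {
    const pending = [...tick];
    if (pending.length > 0) {
      requests.push(pending); // one replication request per flush
    }
  }
  return requests;
}

// Before the fix: each doc's onChange fires in its own tick,
// so two docs produce two requests.
console.log(replicate([['doc1'], ['doc2']]).length); // 2

// After the fix: both onChange calls happen in the same tick,
// so one request carries both docs.
console.log(replicate([['doc1', 'doc2']]).length);   // 1
```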

daleharvey added a commit that referenced this issue Feb 23, 2018

(#7095) - Reduce replication to server requests
* (#7095) - Reduce replication to server requests

* Fixed eslint errors

@daleharvey daleharvey closed this Feb 23, 2018
