
(#3015) Issues GET requests on pull replication in parallel for each batch #3174

Closed · wants to merge 1 commit

Conversation

daleharvey (Member)

Issues GET requests on pull replication in parallel for each batch.

In node.js, using the default settings (5 connections in the pool, 100 docs per batch) over an HTTP connection to a local PouchDB server, performance increased 8x over the previous implementation.

When replicating the 0.6 GB npm CouchDB database down over HTTPS, with CouchDB running locally, CouchDB took about 0.69 seconds per 100 records. Without this change PouchDB took around 30 seconds per 100 records. With this change, on default settings, that drops to 4 seconds per 100 records; raising the connection count to 15 and the batch size to 1000 brings it down to 0.9 seconds per 100 records, only about 30% slower than CouchDB. I strongly suspect that once the node.js connection harvesting bug is fixed (so we don't lose our existing connections between batches), and with a little more tweaking (we are clearly a bit more 'chatty' than CouchDB's replication client), we should be able to match CouchDB's performance.

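The strategy described above — issuing a batch's GET requests in parallel through a bounded connection pool instead of one at a time — can be sketched roughly like this (hypothetical names, not PouchDB's actual internals):

```javascript
// Fetch each id in `ids` via `fetchOne`, with at most `poolSize`
// requests in flight at once. Workers pull the next id off a shared
// cursor, so results come back in order but requests overlap.
async function fetchWithPool(ids, fetchOne, poolSize = 5) {
  const results = new Array(ids.length);
  let next = 0;
  async function worker() {
    while (next < ids.length) {
      const i = next++; // claim the next id (safe: JS is single-threaded)
      results[i] = await fetchOne(ids[i]);
    }
  }
  // Start up to poolSize workers draining the shared queue.
  const workers = Array.from(
    { length: Math.min(poolSize, ids.length) },
    worker
  );
  await Promise.all(workers);
  return results;
}
```

With the defaults mentioned in the description (a pool of 5 and 100 docs per batch), each batch completes roughly as fast as its slowest 5-wide wave of requests rather than the sum of 100 sequential round trips.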
@nolanlawson (Member)
+1, very pleased. Looks like this closes #2940 as well!

@daleharvey daleharvey closed this Dec 14, 2014
@yaronyg (Member) commented Dec 15, 2014

@daleharvey & @nolanlawson - YEAH!!!! Although this is going to make #3185 more urgent as anyone using node.js with this change is going to see a lot of connection failures due to how node 0.10 handles connections.

BTW, should I also follow this pattern? That is, submit my PR (with ugly history) to GitHub, let Travis do its thing, use the patch trick, but instead of committing the patch directly to master, submit the patch as a new PR? Just checking, because I'd really like to not do more damage. :)

@daleharvey (Member, Author)
@yaronyg if you push your stuff as a branch to the main repo, then it will run the full test suite.

You can commit with ugly history, then squash once you have a green, r+'d PR. The patch trick won't squash the commits, so be careful with it.
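The "commit with ugly history, then squash" advice can be illustrated in a throwaway repo; `git reset --soft` is one way to do the squash (an interactive rebase works too — this is a sketch, not necessarily the exact commands daleharvey had in mind):

```shell
set -e
# Throwaway repo with a base commit plus three "ugly" WIP commits.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name dev
echo base > f; git add f; git commit -qm "base"
for i in 1 2 3; do echo "wip $i" >> f; git commit -qam "wip $i"; done

# Squash the three WIP commits into one: move the branch pointer back
# while keeping the combined changes staged, then commit once.
git reset -q --soft HEAD~3
git commit -qm "feature: one clean commit"
```

After this the branch has two commits ("base" and the squashed feature commit) and the file contents are unchanged.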

@yaronyg (Member) commented Dec 15, 2014

Now I'm completely confused, but I figure it's my job to figure out Git, not yours. On my next commit I'll just have to fight through this.

@nolanlawson (Member)
Also we are not strict about seeing a green run in Travis for the exact commit that gets squashed/rebased/pushed; as long as it's green at some point, that's fine.

@daleharvey daleharvey deleted the 3015 branch October 3, 2016 13:28