
meta collection alternative #44

Closed · benan789 opened this issue Feb 16, 2018 · 14 comments
@benan789

Does the meta collection serve any purpose other than storing the routing info? If not, wouldn't it be better to just query Elasticsearch for the routing info?

@rwynn (Owner) commented Feb 16, 2018

When you customize routing you cannot do a get without the routing info. Do you need to support deletes in MongoDB propagating to ES? If not, then you don't need meta. Inserts and updates are fine because your JavaScript sets the routing. On a delete, all we have from MongoDB is the document _id, which doesn't give us the routing.
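For context, a minimal sketch of why the routing value is needed on delete, assuming the olivere/elastic Go client (the index, type, and function names here are illustrative, not monstache's actual code):

```go
// Sketch only: with custom routing, an ES delete must carry the same
// routing value the document was indexed with, or it can hash to the
// wrong shard and silently miss the document.
package main

import (
	"context"

	elastic "gopkg.in/olivere/elastic.v5"
)

func deleteWithRouting(client *elastic.Client, id, routing string) error {
	_, err := client.Delete().
		Index("myindex").
		Type("mytype").
		Id(id).
		Routing(routing). // the value the meta collection preserves
		Do(context.Background())
	return err
}
```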

@benan789 (Author)

What about an ids search? It's probably not as fast as a get, but it shouldn't be that much slower.
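A sketch of what that alternative might look like, again assuming the olivere/elastic client (names hypothetical): an ids query fans out to all shards, so it can recover a hit's routing without knowing it up front.

```go
// Sketch only: look a document up by _id with an ids query, which does
// not require the routing value, and read the routing off the hit.
package main

import (
	"context"

	elastic "gopkg.in/olivere/elastic.v5"
)

func lookupRouting(client *elastic.Client, id string) (string, error) {
	res, err := client.Search("myindex").
		Query(elastic.NewIdsQuery().Ids(id)).
		Do(context.Background())
	if err != nil || res.Hits == nil || len(res.Hits.Hits) == 0 {
		return "", err
	}
	// Each hit reports the routing it was indexed with.
	return res.Hits.Hits[0].Routing, nil
}
```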

@rwynn (Owner) commented Feb 17, 2018

Why do you think meta is an issue? Do you see errors?

@benan789 (Author)

Are the meta upserts to MongoDB bulked? Not sure if that's the bottleneck, but syncing is very slow. I have yet to fully sync a DB of 10 million docs without it breaking. I think the fix you did last night helped; it was able to sync 4 million, whereas before it could only do 2 million. It also takes up a lot of space in the DB.
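For illustration, a minimal sketch of what bulking those upserts could look like with the mgo driver monstache used at the time (collection and field names are hypothetical):

```go
// Sketch only: batch the meta upserts into one round trip with mgo's
// Bulk API instead of issuing one upsert per document.
package main

import (
	mgo "gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func flushMeta(col *mgo.Collection, routingByID map[string]string) error {
	bulk := col.Bulk()
	bulk.Unordered() // allow the server to apply writes in parallel
	for id, routing := range routingByID {
		// Upsert takes selector/update pairs, accumulated into one batch.
		bulk.Upsert(bson.M{"_id": id}, bson.M{"$set": bson.M{"routing": routing}})
	}
	_, err := bulk.Run()
	return err
}
```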

@rwynn (Owner) commented Feb 17, 2018

Got you. They definitely could be bulked. But is the indexing count going up slowly?

@benan789 (Author)

Yeah, I think it gets slower as the count goes higher. It's at 2.7 million right now and I started syncing about 6 hours ago.

@rwynn (Owner) commented Feb 17, 2018

Can you try with direct-read-limit set really high? Fewer queries. Read up on the direct-* options. Also, I noticed from your comment yesterday that the direct read query errored with a timeout. The query actually sorts the entire collection by _id, seeks to the offset, and then applies the limit. That is why I suggest a really high limit. The default is 5000, I think; that's still 2000 queries for 10 million docs, and as the offset gets higher the server has to seek past more documents, so it gets slower.
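Roughly what the offset-based direct reads look like, sketched with mgo (not monstache's exact code): every page pays to walk past the offset before returning anything, which is why later pages slow down.

```go
// Sketch only: skip/limit pagination. The server must walk past the
// first `offset` documents in _id order on every call, so the cost of
// each page grows with the offset.
package main

import mgo "gopkg.in/mgo.v2"

func readPage(col *mgo.Collection, offset, limit int) *mgo.Iter {
	return col.Find(nil).Sort("_id").Skip(offset).Limit(limit).Iter()
}
```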

@rwynn (Owner) commented Feb 17, 2018

Also, did you up the thread pool queue on the ES side?
https://rwynn.github.io/monstache-site/start/

```yaml
thread_pool:
  bulk:
    queue_size: 200
```

And consider setting the refresh interval to -1?
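A sketch of flipping the refresh interval off for the load and back on afterwards, assuming the olivere/elastic settings API (index name hypothetical):

```go
// Sketch only: disable index refresh during a bulk load by passing "-1";
// pass "1s" (or your normal interval) to restore it when the load ends.
package main

import (
	"context"

	elastic "gopkg.in/olivere/elastic.v5"
)

func setRefreshInterval(client *elastic.Client, interval string) error {
	_, err := client.IndexPutSettings("myindex").
		BodyJson(map[string]interface{}{
			"index": map[string]interface{}{"refresh_interval": interval},
		}).
		Do(context.Background())
	return err
}
```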

@benan789 (Author) commented Feb 17, 2018

Are you using skip? From the MongoDB docs:

> The cursor.skip() method is often expensive because it requires the server to walk from the beginning of the collection or index to get the offset or skip position before beginning to return results. As the offset (e.g. pageNumber above) increases, cursor.skip() will become slower and more CPU intensive. With larger collections, cursor.skip() may become IO bound.
>
> Consider using range-based pagination for these kinds of tasks. That is, query for a range of objects, using logic within the application to determine the pagination rather than the database itself. This approach features better index utilization, if you do not need to easily jump to a specific page.

A `$gt` query on `_id` should fix this.
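A sketch of that range-based approach with mgo (names hypothetical): keep the last `_id` seen and ask for everything greater, so each page starts with an index seek instead of a walk.

```go
// Sketch only: range-based pagination. Each call resumes from the last
// _id seen, so the index seeks directly to the next page.
package main

import (
	mgo "gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func readNextPage(col *mgo.Collection, lastID interface{}, limit int) *mgo.Iter {
	sel := bson.M{}
	if lastID != nil {
		sel["_id"] = bson.M{"$gt": lastID}
	}
	return col.Find(sel).Sort("_id").Limit(limit).Iter()
}
```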

@rwynn (Owner) commented Feb 17, 2018

Skip is used, yes. And $gt is a good idea. I wonder if $gt would work if someone used a strange _id like an object? The query would be something like `{ _id: { $gt: { x: 1 } } }`. I'd have to try it because it needs to work in the general case. I guess if we're sorting by _id it must already work for any value of _id.

@rwynn (Owner) commented Feb 17, 2018

I think using the range selector instead of skip is a huge performance gain! I’ll fix it and publish a new release on Monday. Thanks for your help!

@rwynn (Owner) commented Feb 17, 2018

@benan789 give it another try with the latest release when you get a chance. I'm seeing collections with millions of documents getting synced pretty quickly now.

@benan789 (Author)

Much better! Thank you for fixing it so fast!
