Can’t search: MDB_READERS_FULL #2648
Comments
Thanks @irevoire! Is it something we also had in previous versions (v0.27.0 or v0.28.0)?
Hey, I had an idea: maybe we could send a 429 Too Many Requests instead of an internal error. We could find more and better ideas by looking at the Retry-After guide by Mozilla. Another idea would be to let users set the LMDB max_readers value themselves; it defaults to 126 parallel readers.
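A minimal sketch of the 429 idea, assuming we can intercept the storage error before it becomes a 500. The `StoreError` type and `to_http` function are illustrative, not Meilisearch's real error plumbing:

```rust
// Hypothetical sketch: map a "readers full" storage error to an HTTP 429
// response with a Retry-After header, instead of an internal error 500.
#[derive(Debug)]
enum StoreError {
    ReadersFull,
    Other(String),
}

/// Returns (status_code, headers) for a given storage error.
fn to_http(err: &StoreError) -> (u16, Vec<(&'static str, String)>) {
    match err {
        // Tell the client to back off briefly and retry.
        StoreError::ReadersFull => (429, vec![("Retry-After", "1".to_string())]),
        StoreError::Other(_) => (500, vec![]),
    }
}

fn main() {
    let (status, headers) = to_http(&StoreError::ReadersFull);
    println!("{status} {headers:?}");
}
```

The `Retry-After: 1` value is a guess at a reasonable back-off; a real implementation might tune it to how fast reader slots are released.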
Maybe we could create a "search service" that holds ONE read transaction and makes every search go through it.
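The "search service" idea above can be sketched with a single thread that owns the one read handle and serves searches sent over a channel, so only one reader slot is ever used. Everything here is illustrative: a `Vec<String>` stands in for data visible through the read transaction, and `SearchRequest` is not a real Meilisearch type:

```rust
use std::sync::mpsc;
use std::thread;

// One request to the search service, with a channel to send the reply back.
struct SearchRequest {
    query: String,
    reply: mpsc::Sender<usize>, // number of hits, for the sketch
}

// Spawns the service thread; it alone holds the "read transaction".
fn spawn_search_service(index: Vec<String>) -> mpsc::Sender<SearchRequest> {
    let (tx, rx) = mpsc::channel::<SearchRequest>();
    thread::spawn(move || {
        // Imagine `index` is data seen through one long-lived read txn.
        for req in rx {
            let hits = index.iter().filter(|d| d.contains(&req.query)).count();
            let _ = req.reply.send(hits);
        }
    });
    tx
}

fn main() {
    let tx = spawn_search_service(vec!["hello world".into(), "hello".into()]);
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(SearchRequest { query: "hello".into(), reply: reply_tx }).unwrap();
    println!("{}", reply_rx.recv().unwrap()); // 2
}
```

This is exactly the synchronization cost discussed below: every search now crosses a channel into one thread.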
Why not, but I find it a bit more complex, as there is more synchronization to do between threads and workers; it also forces us to share the read transaction between threads.
Yeah, that's true, but it does fix the issue instead of letting the user handle it.
Do you think it would solve the issue to have one read transaction per thread instead of one for everyone?
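The per-thread variant could look like a thread-local that each worker opens once and reuses, bounding the number of reader slots by the size of the thread pool. This is a sketch under stated assumptions: a `Cell<bool>` stands in for the real transaction, and `with_read_txn` is a hypothetical helper, not heed's API:

```rust
use std::cell::Cell;
use std::thread;

thread_local! {
    // Stand-in for "this thread has its read transaction open".
    static TXN_OPENED: Cell<bool> = Cell::new(false);
}

/// Runs a search with this thread's read txn.
/// Returns true if this call "opened" the transaction (first use on the thread).
fn with_read_txn(search: impl FnOnce()) -> bool {
    let first_use = TXN_OPENED.with(|t| {
        let first = !t.get();
        t.set(true); // open on first use, then keep it for reuse
        first
    });
    search();
    first_use
}

fn main() {
    // Two searches on the same thread share one transaction.
    assert!(with_read_txn(|| {}));  // opens
    assert!(!with_read_txn(|| {})); // reuses
    // A different thread gets its own slot.
    assert!(thread::spawn(|| with_read_txn(|| {})).join().unwrap());
    println!("ok");
}
```

The trade-off raised in the next comment applies here too: a transaction kept open per thread is a long-lived reader.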
What I worry about is that it is not recommended to keep a read transaction open for too long: it can block write transactions (as there can only be 1 read transaction and 1 write transaction at the same time), and keeping a read transaction open for too long can force the database to grow in size. So I am worried about how you will make sure to drop the read transaction after 2 seconds when there are no reads and only a write going on. Will you use channels to send read-transaction requests to a function running in a separate thread that dynamically handles the 2-second timeout to drop the transaction? That's quite a complex system.
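The timeout scheme described here may be less complex than it sounds: a single loop on `recv_timeout` can both serve requests and drop the transaction after the idle window. This is a hedged sketch, with an `Option<&str>` standing in for the real read transaction:

```rust
use std::sync::mpsc;
use std::time::Duration;

/// Serves read requests; drops the "read txn" after `idle` with no traffic,
/// so a pending writer is not blocked forever. Returns how many were served.
fn run_reader_loop(rx: mpsc::Receiver<String>, idle: Duration) -> u32 {
    let mut txn: Option<&'static str> = None; // stand-in for a read txn
    let mut served = 0;
    loop {
        match rx.recv_timeout(idle) {
            Ok(_query) => {
                // (Re)open the read transaction lazily on the first request.
                txn.get_or_insert("open read txn");
                served += 1;
            }
            // No request within the idle window: drop the txn, free the slot.
            Err(mpsc::RecvTimeoutError::Timeout) => txn = None,
            // All senders are gone: shut down.
            Err(mpsc::RecvTimeoutError::Disconnected) => return served,
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send("hello".to_string()).unwrap();
    tx.send("world".to_string()).unwrap();
    drop(tx);
    println!("{}", run_reader_loop(rx, Duration::from_millis(50))); // 2
}
```

The 2-second value from the discussion would simply be `Duration::from_secs(2)` for `idle`.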
Hmm, I don't think we'll need such a complex system, but I'm not sure yet. Before choosing any solution, it would be good to make a POC of this one, since it's the only one that really solves the issue instead of making it reappear later.
Another solution could simply be to either:
But why is there a limit in the first place? Will it throw an error if we ask for too many readers?
So after reading a little bit more about the consequences of setting this parameter to a high value: we can, as it doesn't impact performance.
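To see why raising the limit answers the two questions above, here is a toy model of LMDB's reader table: a fixed number of slots, and an error (the analogue of `MDB_READERS_FULL`) when all are taken. The `ReaderTable` type is purely illustrative; in LMDB each slot costs a small fixed amount of memory, which is why a high `max_readers` is cheap:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

struct ReaderTable {
    used: AtomicU32,
    max_readers: u32,
}

impl ReaderTable {
    fn new(max_readers: u32) -> Self {
        Self { used: AtomicU32::new(0), max_readers }
    }

    /// Try to claim a reader slot; Err mimics MDB_READERS_FULL.
    fn begin_read(&self) -> Result<(), &'static str> {
        let prev = self.used.fetch_add(1, Ordering::SeqCst);
        if prev >= self.max_readers {
            self.used.fetch_sub(1, Ordering::SeqCst); // roll back the claim
            return Err("MDB_READERS_FULL");
        }
        Ok(())
    }

    /// Release a previously claimed slot.
    fn end_read(&self) {
        self.used.fetch_sub(1, Ordering::SeqCst);
    }
}

fn main() {
    let table = ReaderTable::new(2);
    assert!(table.begin_read().is_ok());
    assert!(table.begin_read().is_ok());
    assert_eq!(table.begin_read(), Err("MDB_READERS_FULL")); // third fails
    table.end_read(); // one slot freed
    assert!(table.begin_read().is_ok());
    println!("ok");
}
```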
Closing this in favor of #2786
Hi @Kerollmops and @curquiza 👋 I'm trying to understand what the implications are for the cloud team.
This one is the number of readers, i.e. the number of "queued" requests that Meilisearch can have. #2786 is closed, so now it's not 126 but 1024 😇 (released in v0.30.0)
Describe the bug
Hello, I was running a bunch of requests in parallel, and Meilisearch threw this error a few hundred times:
I would like to know how many connections we can make at the same time.
Also, this limit should be specified in the known limits documentation page.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
It executes the search.
Screenshots
If applicable, add screenshots to help explain your problem.
Meilisearch version: main branch
Additional context
Additional information that may be relevant to the issue.
I’m running macOS