Cache size does not have any effect #6275
Comments
The cache size parameter doesn't affect intermediate query computation sizes. But there should be some way to deal with that without the server dying. In my opinion this is a known defect of the product, and I'd like it to be fixed someday. But you have 49 GB to burn through! What sort of query are you running?
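For reference, the cache cap itself is set on the command line or in the instance config file; a sketch (sizes and paths are illustrative, the value is in megabytes):

```shell
# Cap the page cache at 2 GB when starting the server:
rethinkdb --cache-size 2048

# Or in the instance config file (path varies by install; illustrative):
# /etc/rethinkdb/instances.d/default.conf
#   cache-size=2048
```

As discussed in this thread, this caps only the page cache, not per-query working memory.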
I agree. But then, when the computation ends, why doesn't it free the cache memory used for the query? Also, I don't understand the X% cache used shown in the interface. Right now, no requests are running and RethinkDB is using 25 GB, but I can see 93% cache used with a
What was the actual query?
Here is one:
@Andarius I don't think you can group by a secondary index and a field like that. You have to group by either a secondary index or one or more fields/functions.
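For what it's worth, grouping by one or more plain fields behaves roughly like the plain-Python sketch below. The documents and field names are made up for illustration; this is not the driver API, just the per-field grouping semantics:

```python
from collections import defaultdict

# Hypothetical documents standing in for rows of the table in question.
docs = [
    {"country": "FR", "city": "Paris", "amount": 10},
    {"country": "FR", "city": "Lyon", "amount": 5},
    {"country": "US", "city": "NYC", "amount": 7},
]

def group_by_fields(rows, *fields):
    """Group rows by a tuple of field values, like grouping on 'a', 'b'."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[f] for f in fields)].append(row)
    return dict(groups)

# Group by one field: two groups, ("FR",) and ("US",).
by_country = group_by_fields(docs, "country")

# Group by two fields: one group per (country, city) pair.
by_country_city = group_by_fields(docs, "country", "city")
```

Mixing a secondary index with a field in one group call has no analogue here, which matches the restriction above.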
That would mean it is using 14.2 GB for the cache and 10.8 GB for other data. Does the memory usage go down if you query it with
Do you have a large number of tables? I believe the metadata for those can consume a lot of memory in some cases. It is also possible there is a memory leak somewhere.
No, it does not.
In total I have around 10 tables, but when I run the query it touches only one table, and it fills the cache.
Are you confusing cache usage and disk space? The GB number is disk space, I believe. I have a 4 GB RAM VM and 100 GB of disk space.
This exact thing happens to me a lot on my cluster serving a production environment. I have a sense that lowering my cache size by a certain amount would free up the headroom needed for table metadata and per-query memory, but I have no idea how to figure out what that amount is. And if I'm even slightly off and the instance swaps more than a little, everything comes to a halt.
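One way to reason about "that amount" is a back-of-the-envelope budget. Every figure below is an illustrative assumption, not a measurement from any of the machines in this thread:

```python
# Illustrative sizing sketch; every number here is a guess to be replaced
# with figures observed on your own host.
total_ram_mb = 32 * 1024            # machine RAM
os_and_services_mb = 2 * 1024       # OS, monitoring, other daemons
metadata_and_queries_mb = 8 * 1024  # guessed headroom for table metadata + per-query buffers

cache_size_mb = total_ram_mb - os_and_services_mb - metadata_and_queries_mb
print(cache_size_mb)  # 22528
```

The point of the exercise is that the cache cap must be chosen well below total RAM, since (as noted above) per-query memory and metadata live outside it.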
The original error in this thread (a segfault) is different from the error we see with RethinkDB on memory issues: we saw an "out of memory" call in RethinkDB that caused the process to halt. We added a --restart always to the Docker image to get around it. Adding RAM just made it fail less often (two days vs. a few hours). It doesn't seem like large DBs of hundreds of megabytes are handled well. If you believe swapping is an issue, you might look into Linux hugetlb/hugepage handling. That helped with a similar MySQL issue a few years back.
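If you want to follow up on the swapping/hugepage angle, the usual Linux knobs look like this. Root is required for the writes, and the values are starting points to experiment with, not tuned recommendations:

```shell
# Inspect current settings
cat /proc/sys/vm/swappiness
cat /sys/kernel/mm/transparent_hugepage/enabled

# Make the kernel much less eager to swap anonymous memory
sysctl -w vm.swappiness=1

# Disable transparent hugepages (a common tuning step for databases)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
```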
I'm running RethinkDB in Docker (alpine, 2.3.5) with
--cache-size 15000
set (one node here). However, on some heavy queries RethinkDB uses far more than the allowed memory (up to 64 GB), then runs out of memory and dies.
Here is the full stack trace of the error:
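Given the setup described above (Docker, one node, --cache-size 15000), one containment sketch pairs the --restart always workaround mentioned earlier with a hard container memory cap. The image tag, ports, and limits below are illustrative:

```shell
# Hard-cap the container so the OOM killer takes out the container, not the host;
# --restart always brings it back up automatically, as described above.
docker run -d --name rethinkdb \
  --restart always \
  --memory 20g \
  -p 8080:8080 -p 28015:28015 -p 29015:29015 \
  rethinkdb:2.3.5 \
  rethinkdb --bind all --cache-size 15000
```

This doesn't fix the underlying over-allocation, but it turns a host-wide outage into a single-container restart.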