Another server crash #38
For reference, the backtrace reduces to:
which is one of the most sensical backtraces I've seen yet; it seems our symbol files are good for something after all. This appears to be the same as issue #35, so I'm closing that one since it has less information. Thanks for including the version you're using, by the way; it makes this much easier. As for the error itself, I'm not sure what could cause an error on the worker socket, but I'll see what I can find. It seems that we should be robust enough to handle this with (at the worst) only failing a query.
Thanks, I wasn't sure if those were the same since the other one happened in different circumstances. This one crops up reproducibly (on this machine: a VirtualBox VM on my Mac) when I run certain queries, though the same queries work fine if I run them on an EC2 box with more or less the same data. Let me know if there's a good way to get more debugging info from this.
If there is a certain set of queries that can reproduce this, it would be helpful to know what they are, and we could try reproducing it here.
Actually, it seems that all (?) queries make it fail? Running
(in the admin UI) runs for about 15 seconds, and then the server dies.
Looks like something is borked about that table; operations on another table that I just made work fine.
Hmm, is the table still borked if you shut down and then restart the server? If so, would it be possible for you to share the relevant data files with us, or do they contain sensitive information?
Not sure what you mean -- the server crashes and I've been starting it back up each time. The data files aren't particularly sensitive, but the table is 1.5 GB, so it's a little impractical to transport.
We'll see if we can get a large file transfer going.
I'd add that rethinkdb data files tend to respond well to compression. I've seen about a factor-of-10 decrease in size, which could make this a bit more palatable.
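(For anyone wanting to sanity-check the expected savings before shipping a big data file, here's a quick generic sketch that measures a file's gzip compression ratio. It reads the whole file into memory, so it's fine for an estimate on a sample but not ideal for the full 1.5 GB file; nothing here is specific to rethinkdb's file format.)

```python
import gzip
import os

def compression_ratio(path):
    """Rough estimate: original size divided by gzip-compressed size.

    Reads the whole file into memory, so use it on a sample or a
    machine with enough RAM; this is just a back-of-the-envelope check.
    """
    with open(path, "rb") as f:
        data = f.read()
    compressed = gzip.compress(data)
    return len(data) / len(compressed)
```

At a factor of 10, the 1.5 GB table would come down to roughly 150 MB, which is much easier to move around.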
OK, got my hands on a data file for this; looking into it.
So I have downloaded and run your data files and been unable to get any sort of crash with them. This was at f533ac9 and was run on an Ubuntu machine rather than in VirtualBox. I'm going to look into 1.2.5 and see if I can reproduce it. If not, the next step is VirtualBox.
@spicyj: alright, I've had no luck with the specific binary from v1.2.5, which leads me to believe the VM is to blame. You mentioned in IRC that you wouldn't mind sending me the VM image; that would be incredibly helpful. I believe you have my email address.
Hmm, so I just got the VM and ran your query. For me it crashed because the rethinkdb process ran out of memory and was killed. This is easy enough to fix, and it seems to match your symptoms pretty well; could this be the problem? Running rethinkdb with only 512 MB of memory isn't really recommended, but it can be done by decreasing the cache size of the tables.
That could definitely be the case. How'd you determine that it ran out of memory?
If this is indeed the case, we should give a good error message in case of out-of-memory conditions (e.g. if malloc fails, crash with an intelligent error message). Of course even printing a good error message in those conditions may not be possible, in which case we should ideally detect that the condition might occur and print a message in advance. Tricky, but certainly doable, and it will save people a lot of grief.
I was under the impression that modern Linux only allows malloc and friends to fail when the address space is exhausted, and instead deploys the OOM killer when memory is exhausted. So that would be hard to gracefully error on. We could in theory periodically check how much memory we're using and warn if we're using too much.
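(A minimal sketch of that "periodically check our own memory usage" idea, reading `VmRSS` from `/proc/self/status`. Linux-only, and the 400 MiB threshold is purely an illustrative assumption, not anything rethinkdb actually does.)

```python
import re

def rss_kib(status_text):
    """Parse the resident set size (in KiB) out of /proc/<pid>/status text.

    Returns None if no VmRSS line is present (e.g. non-Linux systems).
    """
    m = re.search(r"^VmRSS:\s+(\d+)\s+kB", status_text, re.MULTILINE)
    return int(m.group(1)) if m else None

def warn_if_over_limit(limit_kib=400 * 1024):
    """Print a warning when this process's RSS exceeds limit_kib (Linux only)."""
    with open("/proc/self/status") as f:
        rss = rss_kib(f.read())
    if rss is not None and rss > limit_kib:
        print(f"warning: using {rss} KiB of memory, limit is {limit_kib} KiB")
```

A server would call something like `warn_if_over_limit` from a timer so the operator gets a heads-up before the OOM killer ever fires.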
Ah, I didn't know about that. But in any case, yes, I was proposing the latter. EDIT: actually, scratch that, I was proposing both, but given the OOM behavior, only the latter actually makes sense. (Although it might be possible to detect that you're being killed by the kernel and log it, though that seems unlikely to me.)
Yeah, I got hit with the OOM killer, so we can't do too much about that.
Well, of course every responsible sysadmin would turn overcommitting off on a server, so malloc would indeed fail.
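(For context: the kernel's policy lives in `/proc/sys/vm/overcommit_memory` -- 0 is heuristic overcommit, 1 always overcommits, and 2 disables overcommit so malloc can actually fail. A small Linux-only sketch for inspecting it; the helper name is just for illustration.)

```python
OVERCOMMIT_MODES = {
    0: "heuristic overcommit (the default)",
    1: "always overcommit",
    2: "never overcommit (malloc can fail instead of the OOM killer firing)",
}

def overcommit_mode(path="/proc/sys/vm/overcommit_memory"):
    """Return (mode, description) for the kernel's overcommit policy."""
    with open(path) as f:
        mode = int(f.read().strip())
    return mode, OVERCOMMIT_MODES.get(mode, "unknown mode")
```

Switching it off is a matter of `sysctl vm.overcommit_memory=2` (as root), though mode 2 interacts with `vm.overcommit_ratio`, so it needs to be tuned with some care.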
There was a patch designed to notify processes of low-memory conditions so they can take some action -- http://lwn.net/Articles/267013/. Unfortunately I can't find the relevant block device on Ubuntu 11.10, so either most distros disable it in the kernel, or the patch never went through. I'm not sure of a good way to detect when the OOM killer kicks in, but it'd be great if we could find one so we can warn the user.
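(Short of a kernel notification, one after-the-fact option is to scan the kernel log for the OOM killer's signature on the next startup and tell the user what happened. A sketch below; the exact log wording varies across kernel versions, so the regex targets the classic "Out of memory: Kill process ..." form and is an assumption, not a guarantee.)

```python
import re

# Matches the classic OOM-killer log line, e.g.
# "Out of memory: Kill process 1234 (rethinkdb) score 856 or sacrifice child"
OOM_RE = re.compile(r"Out of memory:.*?process (\d+) \(([^)]+)\)")

def find_oom_kills(log_text):
    """Return (pid, process_name) pairs for OOM-killer victims in kernel log text."""
    return [(int(pid), name) for pid, name in OOM_RE.findall(log_text)]
```

On Ubuntu the text to feed it would come from `dmesg` output or `/var/log/kern.log`; a server could check this at startup and warn that the previous instance was OOM-killed.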
@spicyj In my case it popped up with a message in the main window (the one I got when I started the VirtualBox session) saying that the process had been killed due to memory usage. If this is indeed the case and you need to run rethinkdb in low-memory conditions, the way to do it is by decreasing the cache sizes of the tables. This can be done in the optional settings of the create-table dialog. I ran with 256 MB of cache and still hit the OOM killer, although it did take a lot longer. I think you need to take it down as low as 128 MB total for it to work.
This is on version 1.2.5-1-0ubuntu1~precise from the PPA.
I don't know how to symbolicate it, sorry.
Let me know if this is helpful or spammy.