Another server crash #38

Closed

sophiebits opened this issue Nov 11, 2012 · 22 comments

Comments

@sophiebits

This is on version 1.2.5-1-0ubuntu1~precise from the ppa.

2012-11-11T23:02:56.330463998 98.447105s error: Sun Nov 11 23:02:56 2012

1: /usr/bin/rethinkdb() [0x515d92]
2: /usr/bin/rethinkdb() [0x512f54]
3: /usr/bin/rethinkdb() [0x44b85f]
4: /usr/bin/rethinkdb() [0x5f3d49]
5: /usr/bin/rethinkdb() [0x608e26]
6: /usr/bin/rethinkdb() [0x60975e]
7: +0x7e9a at 0x7fa338be9e9a (/lib/x86_64-linux-gnu/libpthread.so.0)
8: clone+0x6d at 0x7fa3389174bd (/lib/x86_64-linux-gnu/libc.so.6)

I don't know how to symbolicate it, sorry.

Let me know if this is helpful or spammy.

@Tryneus
Member

Tryneus commented Nov 11, 2012

For reference, the backtrace reduces to:

0x515d92 - std::string format_backtrace(bool), src/backtrace.cc:198
0x512f54 - void report_fatal_error(const char*, int, const char*, ...), src/errors.cc:65
0x44b85f - void pool_t::worker_t::on_error(), src/extproc/pool.cc:216
0x5f3d49 - void linux_event_watcher_t::on_event(int), src/arch/io/event_watcher.cc:66
0x608e26 - void epoll_event_queue_t::run(), src/arch/runtime/event_queue/epoll.cc:114
0x60975e - void* linux_thread_pool_t::start_thread(void*), src/arch/runtime/thread_pool.cc:141

which is one of the most sensible backtraces I've seen yet; it seems our symbol files are good for something after all. This appears to be the same as issue #35, though, so I'm closing that one since it has less information.

Thanks for including the version you're using, by the way; it makes this much easier.

As for the error itself, I'm not sure what could cause an error on the worker socket, but I'll see what I can find. It seems we should be robust enough to handle this by, at worst, failing only a single query.

@sophiebits
Author

Thanks; I wasn't sure if those were the same, since the other one happened under different circumstances.

This one seems to crop up reproducibly (on this machine: a VirtualBox VM on my Mac) when I run certain queries, though the same queries work fine if I run them on an EC2 box with more or less the same data.

Let me know if there's a good way to get more debugging info from this.

@Tryneus
Member

Tryneus commented Nov 11, 2012

If there is a certain set of queries that can reproduce this, it would be helpful to know what they are, and we could try reproducing it here.

@sophiebits
Author

Actually, it seems that all (?) queries make it fail? Running

r.db('ka').table('raw_feedback').count().run();

(in the admin UI) runs for about 15 seconds and then the server dies.

@sophiebits
Author

Looks like something is borked about that table; operations on another table that I just made work fine.

@jdoliner
Contributor

Hmm, is the table still borked if you shut down and then restart the server? If so, would it be possible for you to share the relevant data files with us, or do they contain sensitive information?

@sophiebits
Author

Not sure what you mean -- the server crashes and I've been starting it back up each time.

The data files aren't particularly sensitive, but the table is 1.5 GB, so it's a little impractical to transport.

@coffeemug
Contributor

We'll see if we can get a large file transfer going.

@jdoliner
Contributor

I'd add that RethinkDB data files tend to respond well to compression. I've seen about a factor-of-10 decrease in size, which could make this a bit more palatable.

@jdoliner
Contributor

OK, I got my hands on a data file for this; looking into it.

@jdoliner
Contributor

So I have downloaded and run your data files and have been unable to get any sort of crash with them. This was at f533ac9 and was run on an Ubuntu machine rather than in VirtualBox. I'm going to look into 1.2.5 and see if I can reproduce it. If not, the next step is VirtualBox.

@jdoliner
Contributor

@spicyj: alright, I've had no luck with the specific binary from v1.2.5, which leads me to believe the VM is to blame. You mentioned in IRC that you wouldn't mind sending me the VM image; that would be incredibly helpful. I believe you have my email address.

@ghost assigned jdoliner Nov 16, 2012
@jdoliner
Contributor

Hmm, so I just got the VM and ran your query. For me it crashed because the rethinkdb process ran out of memory and was killed. This is easy enough to fix, but it seems to match your symptoms pretty well; could this be the problem? Running RethinkDB with only 512 MB of memory isn't really recommended, but it can be done by decreasing the cache size of the tables.

@sophiebits
Author

That could definitely be the case. How'd you determine that it ran out of memory?

@coffeemug
Contributor

If this is indeed the case, we should give a good error message for out-of-memory conditions (e.g. if malloc fails, crash with an intelligent error message). Of course, even printing a good error message under those conditions may not be possible, in which case we should ideally detect that the condition might occur and print a message in advance. Tricky, but certainly doable, and it will save people a lot of grief.
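
A minimal sketch of the first idea, assuming a hypothetical checked_malloc wrapper (illustrative only, not RethinkDB's actual allocator): die with a clear out-of-memory message instead of an opaque crash.

#include <cstdio>
#include <cstdlib>

// Hypothetical sketch: report out-of-memory clearly instead of crashing opaquely.
// checked_malloc is an illustrative name, not an existing RethinkDB function.
void *checked_malloc(std::size_t bytes) {
    void *ptr = std::malloc(bytes);
    if (ptr == nullptr) {
        // Avoid allocating while reporting the failure; write straight to stderr.
        std::fprintf(stderr, "Fatal error: out of memory while allocating %zu bytes.\n", bytes);
        std::fprintf(stderr, "Consider lowering the table cache size or adding RAM/swap.\n");
        std::abort();
    }
    return ptr;
}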

@mlucy
Member

mlucy commented Nov 20, 2012

I was under the impression that modern Linux only allows malloc and friends to fail when the address space is exhausted, and instead deploys the OOM killer when memory is exhausted. So that would be hard to error on gracefully.

We could in theory periodically check how much memory we're using and warn if we're using too much.
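
A minimal sketch of that kind of check on Linux, assuming we read the process's resident set from /proc/self/statm and compare it against physical RAM; the function name and the threshold are illustrative, not RethinkDB code.

#include <cstdio>
#include <unistd.h>

// Illustrative sketch: warn when resident memory exceeds a fraction of physical
// RAM, since the OOM killer (rather than a failing malloc) is the likely outcome.
void maybe_warn_about_memory(double threshold) {
    long page_size = sysconf(_SC_PAGESIZE);
    long total_pages = sysconf(_SC_PHYS_PAGES);  // glibc extension

    std::FILE *f = std::fopen("/proc/self/statm", "r");
    if (f == nullptr) return;  // /proc unavailable (non-Linux)
    long size_pages = 0, resident_pages = 0;
    int fields = std::fscanf(f, "%ld %ld", &size_pages, &resident_pages);
    std::fclose(f);
    if (fields != 2) return;

    double used_bytes = static_cast<double>(resident_pages) * page_size;
    double total_bytes = static_cast<double>(total_pages) * page_size;
    if (used_bytes > threshold * total_bytes) {
        std::fprintf(stderr,
                     "Warning: using %.0f MB of %.0f MB physical memory; "
                     "the OOM killer may terminate the server.\n",
                     used_bytes / (1024 * 1024), total_bytes / (1024 * 1024));
    }
}

Calling something like maybe_warn_about_memory(0.8) from a periodic timer would cover the "warn if we're using too much" case.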

@coffeemug
Contributor

Ah, I didn't know about that. But in any case, yes, I was proposing the latter.

EDIT: Actually, scratch that; I was proposing both, but given the OOM stuff, only the latter makes sense. (It might be possible to detect that you're being killed by the kernel and log it, though that seems unlikely to me.)

@jdoliner
Contributor

Yeah, I got hit with the OOM killer, so we can't do too much about that. We can keep an eye on memory usage vs. total system memory.

@danielmewes
Member

Well, of course every responsible sysadmin would turn overcommitting off on a server, so malloc would indeed fail.
http://www.mjmwired.net/kernel/Documentation/vm/overcommit-accounting
(The "every" in there is ironic; I know that many people don't do it, and there might even be occasional reasons not to.)
Also, malloc can still fail with overcommitting on, as some heuristic in the kernel apparently decides whether and how much to overcommit memory.
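
For reference, the overcommit policy a box is running with can be read from /proc/sys/vm/overcommit_memory (0 = heuristic overcommit, 1 = always overcommit, 2 = strict accounting). A small illustrative sketch, not RethinkDB code:

#include <cstdio>

// Illustrative: return the Linux overcommit mode, or -1 if it can't be read.
// 0 = heuristic overcommit, 1 = always overcommit, 2 = strict accounting.
int read_overcommit_mode() {
    std::FILE *f = std::fopen("/proc/sys/vm/overcommit_memory", "r");
    if (f == nullptr) return -1;
    int mode = -1;
    if (std::fscanf(f, "%d", &mode) != 1) mode = -1;
    std::fclose(f);
    return mode;
}

Under mode 2 a failed malloc is a real possibility, so the "crash with a clear message" path discussed above would actually get exercised.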

@coffeemug
Contributor

There was a patch designed to notify processes of low-memory conditions so they can take some action -- http://lwn.net/Articles/267013/. Unfortunately I can't find the relevant device file on Ubuntu 11.10, so either most distros disable it in the kernel, or the patch never went through. I'm not sure what a good way to detect when the OOM killer spins up is, but it'd be great if we could find one so we can warn the user.

@jdoliner
Contributor

@spicyj In my case, a message that the process had been killed due to memory usage popped up in the main window (the one I got when I started the VirtualBox session). If this is indeed the case and you need to run RethinkDB in low-memory conditions, the way to do it is by decreasing the cache sizes of the tables. This can be done in the optional settings of the create-table dialog. I ran with 256 MB of cache and still hit the OOM killer, although it did take a lot longer. I think you need to take it down to as low as 128 MB total for it to work.

@jdoliner
Contributor

So unless I'm mistaken, we've concluded that this was an OOM-killer problem, and since we have other issues open that deal directly with that (#98, #97), I'm going to close this.
