After all RAM is used, uncaught std::bad_alloc crashes fish #3351

Closed

fritschy opened this issue Sep 5, 2016 · 11 comments

@fritschy

fritschy commented Sep 5, 2016

  • Have you checked if the problem occurs with fish 2.3.1?
  • Tried fish without third-party customizations (check sh -c 'env HOME=$(mktemp -d) fish')?

fish version installed (fish --version):
fish, version 2.3.1

OS/terminal used:
Debian Testing using rxvt-unicode-256color

Talk about the issue here.
I used the following pattern to create a bytestream from different "sources":

begin
    lz4cat ~/large-file.lz4
    cat ~/*.some-other-files.dat
end | do-some-work

After a while, this failed with the following error:

terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted

Reproduction steps

Just output a large amount of data inside the begin/end block:

  1. Launch fish
  2. begin; cat /dev/zero; end | md5sum -
  3. fish aborts.

Expected results

Well, some result depending on the command after the pipe.

@faho
Member

faho commented Sep 5, 2016

I'm willing to bet that you're just running out of memory.

The reason for this is that we currently cache all output for blocks (a known limitation which should be fixed).
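
Roughly, the limitation means a block's entire output is accumulated in memory before it is forwarded down the pipe. A minimal sketch of why that pattern dies on an unbounded stream like cat /dev/zero (not fish's actual code; the function and buffer here are hypothetical):

#include <string>
#include <unistd.h>

// Hypothetical sketch (not fish's actual implementation): the block's output
// is read into one in-memory buffer before being written to the pipe. With an
// unbounded producer such as `cat /dev/zero`, the buffer grows until the
// allocator fails, operator new throws std::bad_alloc, nothing catches it,
// and the shell aborts.
std::string read_block_output(int fd) {
    std::string buffer;
    char chunk[4096];
    ssize_t n;
    while ((n = read(fd, chunk, sizeof chunk)) > 0) {
        buffer.append(chunk, n);  // eventually throws std::bad_alloc
    }
    return buffer;
}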

@faho
Member

faho commented Sep 5, 2016

Closing as duplicate of #1396 - unless you can show that you're not running out of memory.

@faho faho closed this as completed Sep 5, 2016
@faho faho added the duplicate label Sep 5, 2016
@fritschy
Author

fritschy commented Sep 5, 2016

That is exactly the cause of the error message :)

Oh man, and I really searched multiple times.

@faho
Member

faho commented Sep 5, 2016

No worries - sometimes it's kinda hard to see the root cause, especially when you haven't gone through our issue list multiple times.

@floam
Member

floam commented Sep 6, 2016

Would setting rlimits help here? It can take a... while to run out of RAM here, it seems - I'd probably prefer it failed sooner rather than later.

Should we catch std::bad_alloc? In theory only the begin; end block should fail and we'd report the error and set some $status.
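
For illustration, a hedged sketch of what that proposal could look like if exceptions were enabled (fish is in fact built with -fno-exceptions, as noted later in the thread; the function name and status convention here are hypothetical):

#include <iostream>
#include <new>
#include <string>
#include <unistd.h>

// Hypothetical sketch of the proposal: wrap the buffering in a try/catch so
// that only the block fails, an error is printed, and a nonzero $status can
// be reported, instead of the whole shell aborting.
bool read_block_output(int fd, std::string &buffer, int &status) {
    try {
        char chunk[4096];
        ssize_t n;
        while ((n = read(fd, chunk, sizeof chunk)) > 0)
            buffer.append(chunk, n);
        status = 0;
        return true;
    } catch (const std::bad_alloc &) {
        std::cerr << "fish: out of memory while buffering block output\n";
        buffer.clear();  // drops the contents (capacity may linger)
        status = 1;      // hypothetical nonzero $status for the failed block
        return false;
    }
}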

@floam floam changed the title Using begin; ...; end to concat outputs of different programs leads to std::bad_alloc Using begin; ...; end leads uncaught std::bad_alloc crashing fish after using all RAM Sep 6, 2016
@floam floam changed the title Using begin; ...; end leads uncaught std::bad_alloc crashing fish after using all RAM After all RAM is used, uncaught std::bad_alloc crashes fish Sep 6, 2016
@floam floam added the question label Sep 6, 2016
@krader1961
Contributor

Would setting rlimits help here?

How would you determine the limit so that it doesn't cause failures when they wouldn't otherwise occur?

In theory only the begin; end block should fail and we'd report the error and set some $status.

This only makes sense if we set a hard upper bound on the data buffered from a function. And probably not even then since we would be effectively obscuring the real failure. This is also likely to cause other unexpected secondary failures. Note that there are two types of OOM failures. The first is when the logical address space of the process is exceeded. Something that is very unlikely on most platforms on which fish runs. The second is exceeding the physical and backing store (swap) memory available. Once that happens, recovery within the context of the process that caused the problem is highly problematic.

@floam
Member

floam commented Sep 6, 2016

How would you determine the limit so that it doesn't cause failures when they wouldn't otherwise occur?

I'm not really sure @krader1961 - it's not something I've done before. I just know that other shells, I think, are setting some defaults. My only thoughts so far are that:

  • I doubt anyone is really doing any serious memory-intensive computations (intentionally) with fish
  • People don't want their interactive shells to be killed by the OOM killer, however:
  • People don't want a mistaken command to bring their whole system to its knees either, 16GB of RAM later

I don't know what kind of recipe of rlimits and any kinds of hinting might get the best result. Do you think it's best to just stay away from this stuff entirely?

@ridiculousfish
Member

Nice find!

fish is compiled with -fno-exceptions because it reduces binary size and helps ensure exceptions don't sneak in. Handling OOM gracefully is very hard, and crashing on OOM is usually second-best. Having an upper limit on the amount of output buffering sounds like a good idea, but as krader says it's hard to know what that limit should be.
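
A rough sketch of such an upper limit, assuming a hypothetical cap (the 256 MiB value is an arbitrary placeholder, not something fish uses):

#include <cstddef>
#include <string>
#include <unistd.h>

// Hypothetical sketch of a capped buffer: stop appending once a fixed limit
// is reached and let the caller report an error, rather than exhausting RAM.
constexpr std::size_t kBlockBufferLimit = 256u * 1024 * 1024;  // placeholder

bool read_block_output_capped(int fd, std::string &buffer) {
    char chunk[4096];
    ssize_t n;
    while ((n = read(fd, chunk, sizeof chunk)) > 0) {
        if (buffer.size() + static_cast<std::size_t>(n) > kBlockBufferLimit)
            return false;  // caller would report the overflow and set $status
        buffer.append(chunk, n);
    }
    return true;
}

The hard part, as both comments note, is choosing the cap so it never breaks a pipeline that would otherwise have succeeded.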

@krader1961
Contributor

I just know that other shells, I think, are setting some defaults.

I have never seen a login shell set limits on memory, file descriptors, core file size, or anything else controlled by the setrlimit(2) API. It's usually a policy implemented via Linux PAM or a similar mechanism that the login shell inherits.

I do agree with most of your other points. My disagreement is with the naive proposal to simply call setrlimit(RLIMIT_DATA) as a means of "fixing" this problem.
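
For concreteness, this is roughly what that naive proposal would amount to (the helper name and the choice of limit are hypothetical):

#include <sys/resource.h>

// Cap the process data segment via setrlimit(2) so allocations beyond the cap
// fail instead of consuming all RAM. Note an unprivileged process cannot raise
// rlim_cur above rlim_max.
bool cap_data_segment(rlim_t bytes) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_DATA, &rl) != 0) return false;
    rl.rlim_cur = bytes;  // soft limit: malloc/new start failing past this
    return setrlimit(RLIMIT_DATA, &rl) == 0;
}

Calling something like cap_data_segment(4ULL << 30) would impose a 4 GiB cap, but any fixed number risks causing failures that would not otherwise occur, which is the objection above.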

@floam
Member

floam commented Sep 6, 2016

640K ought to be enough for anybody

@floam
Member

floam commented Sep 6, 2016

I have never seen a login shell set limits on memory, file descriptors, core file size, or anything else controlled by the setrlimit(2)

Looks like I'm (mostly) wrong - the setrlimit() stuff I've seen happen before appears limited to a shell making sure it can dump its own core.
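
That narrow use of setrlimit(2) looks roughly like this (a sketch, not code taken from any particular shell):

#include <sys/resource.h>

// The pattern described above: a shell raising its core-file soft limit to
// the hard limit so that a crash can still produce a core dump.
void allow_core_dumps() {
    struct rlimit rl;
    if (getrlimit(RLIMIT_CORE, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;  // raise soft limit up to the hard limit
        setrlimit(RLIMIT_CORE, &rl);
    }
}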
