After all RAM is used, uncaught std::bad_alloc crashes fish #3351
Comments
I'm willing to bet that you're just running out of memory. The reason for this is that we currently cache all output for blocks (a known limitation which should be fixed).
Closing as duplicate of #1396 - unless you can show that you're not running out of memory.
That is exactly the cause of the error message :) Oh man, and I really searched multiple times.
No worries - sometimes it's kinda hard to see the root cause, especially when you haven't gone through our issue list multiple times.
Would setting rlimits help here? It can take a... while to run out of RAM here, it seems - I'd probably prefer it failed sooner rather than later. Should we catch `std::bad_alloc`?
How would you determine the limit so that it doesn't cause failures when they wouldn't otherwise occur?
This only makes sense if we set a hard upper bound on the data buffered from a function, and probably not even then, since we would effectively be obscuring the real failure. It is also likely to cause other unexpected secondary failures.

Note that there are two types of OOM failure. The first is exceeding the logical address space of the process - something that is very unlikely on most platforms fish runs on. The second is exceeding the physical memory plus backing store (swap) available. Once that happens, recovery within the context of the process that caused the problem is highly problematic.
I'm not really sure, @krader1961 - it's not something I've done before. I just know that other shells, I think, set some defaults. My only thoughts so far:
I don't know what kind of recipe of rlimits and any kinds of hinting might get the best result. Do you think it's best to just stay away from this stuff entirely?
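For concreteness, the kind of default being discussed could be set from fish itself via the `ulimit` builtin. This is purely a sketch - the 1 GiB figure is an arbitrary illustration, not a value anyone in this thread recommends:

```fish
# Sketch only: cap the virtual memory available to this shell and its
# children at roughly 1 GiB (ulimit takes the value in KiB).
# Allocations beyond the cap then fail promptly instead of slowly
# driving the machine into swap.
ulimit -v 1048576
```

This only lowers the soft limit for the current session; as discussed above, picking a number that never breaks an otherwise-legitimate workload is the hard part.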
Nice find! fish is compiled with
I have never seen a login shell set limits on memory, file descriptors, core file size, or anything else controlled by the setrlimit(2) API. It's usually a policy implemented via Linux PAM or a similar mechanism that the login shell inherits. I do agree with most of your other points. My disagreement is with the naive proposal to simply call setrlimit(RLIMIT_DATA) as a means of "fixing" this problem.
640K ought to be enough for anybody |
Looks like I'm (mostly) wrong - the setrlimit() stuff I've seen happen before appears limited to a shell making sure it can dump its own core. |
- Tried without third-party customizations (`sh -c 'env HOME=$(mktemp -d) fish'`)?
- fish version installed (`fish --version`): fish, version 2.3.1
- OS/terminal used: Debian Testing using rxvt-unicode-256color
I used the following pattern to create a bytestream from different "sources":
```fish
begin
    lz4cat ~/large-file.lz4
    cat ~/*.some-other-files.dat
end | do-some-work
```
After a while, this failed with the following error:
Reproduction steps
Just output a large amount of data inside the begin/end block:
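The reporter's exact command is not shown; a minimal construction of my own that exercises the same buffering path (warning: on affected versions this genuinely consumes all RAM, since the block's entire output is buffered before `head` reads anything):

```fish
# An infinite producer inside a block: fish buffers all of the block's
# output before the pipe's reader sees any of it, so memory grows
# without bound until an allocation finally fails.
begin
    yes
end | head -n 1
```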
Expected results
Well, some result depending on the command after the pipe.