After IO buffer is overrun, IO buffering is broken for future commands #5267
Sure, yeah. More specifically, the "read too much" flag on the buffer is probably never reset. Also, since we now have something that actually outputs 10MiB… should we increase the limit? That entire thing came about because somebody noticed that reading from (IIRC) /dev/zero would "hang" fish and be generally unhelpful. So it just needs to be large enough that it acts as "hang detection", not something that discourages you from using the shell. 100MiB or even 1GiB would probably be alright on most systems (obviously, if you ran this on a Raspberry Pi with 256MiB of RAM and no swap, the latter won't really work).
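For reference, a minimal sketch of how to trip that limit from fish, assuming the 10MiB cap discussed here (string repeat is a fish builtin; the warning text is quoted verbatim from this thread):

# Emit ~20MB from a command substitution, comfortably over a 10MiB cap:
echo (string repeat -n 20000000 x)
# fish: Too much data emitted by command substitution so it was discarded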
Oh, wait, there actually is an underlying limit: ARG_MAX. If we're over that, the OS won't accept the arguments anyway. On my system it's far below that, so 10MiB won't work anyway.
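One way to check that limit on a given system (getconf is a standard POSIX utility; the value varies by OS):

# Print the OS limit on the byte length of exec()'s argument list:
getconf ARG_MAX
# Commonly around 2MiB on Linux, i.e. well under 10MiB.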
No, the problem is that IO buffering is not only used for command substitution but also where it shouldn't be used, such as piping output from a function's stdout to another function or process' stdin. If you look at the example you'll note that there's no command substitution (apart from …).

IMHO this is fish's biggest job-related failure (way worse than the keepalive process): all function/block output is buffered, when all of it should instead go through anonymous pipes connected directly to the read end. Moreover, the IO buffer isn't really a "buffer" in the traditional sense of something that is written to and read from concurrently (like the OS's own pipe buffer); it's a write-only buffer that becomes available for reading only after the write has completely finished, i.e. it must hold the entire content.
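A minimal sketch of that piping case (produce and consume are hypothetical names; the buffering behavior is as described above, not something this snippet implements):

function produce
    # Emits lines slowly.
    for i in (seq 5)
        echo line$i
        sleep 1
    end
end

function consume
    while read -l line
        echo "got: $line"
    end
end

# Per the description above, fish buffers all of produce's output and hands
# it to consume only once produce has finished, instead of streaming it
# through an anonymous pipe as it is written.
produce | consume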
The `complete -c rubbish -xa '(foo2)'` is a command substitution.
I can only confirm this with the completion. So it really does seem like it's the command substitution buffer being used for a command substitution.
Oh, I completely missed that, I was just focusing on the function definition. I'm pretty sure function output is buffered even without command substitution, but I think this bug (shell unusable) only affects command substitution.
Yup - #1396.
Ah, yes. That's the one. The reason function stdout buffering doesn't run into this bug is that there's no limit on how big the function buffer can get; functions are explicitly allowed an uncapped buffer.
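That uncapped buffer is easy to poke at; a sketch, assuming the 10MiB command substitution limit discussed above (bigfn is a made-up name):

function bigfn
    # ~20MB of output, well past a 10MiB command substitution limit.
    string repeat -n 20000000 x
end

bigfn | wc -c    # works: the function output buffer is uncapped
echo (bigfn)     # warns: Too much data emitted by command substitution ...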
This behaves kind of weird.

$ rubbish <TAB>
fish: Too much data emitted by command substitution so it was discarded #....
$ fg
fish: Too much data emitted by command substitution so it was discarded
$ echo (echo abc)
$ fg
fg: There are no suitable jobs

Do we somehow share buffers sometimes?
Don't attempt to complete against package names if the user is trying to enter a switch to speed things up. Also work around #5267 by not wrapping unfiltered `all-the-package-name` calls in a function.
When we discard output because there's been too much, we print a warning, but subsequent uses of the same buffer still discard. Now we explicitly reset the flag, so we warn once and everything works normally after. Fixes fish-shell#5267.
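In user-visible terms, based on the session quoted earlier (a behavioral sketch, not the actual change to fish's internals):

# Before the fix: a single overflow left the discard flag set...
echo (string repeat -n 20000000 x)
# fish: Too much data emitted by command substitution so it was discarded

echo (echo abc)    # ...so even this tiny substitution printed nothing.

# After the fix, the flag is reset once the warning is printed, so the
# second command prints abc as expected.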
I just ran into this working on the …
@mqudsi: This works for me. Please confirm your $version.
It's a recent master build, but I know the test as documented here was fixed; I confirmed as much at the time. The issue can be triggered differently, though.
@mqudsi: Any idea how? What did you do differently?
This worked (it's an even simpler test than the original one):

mqudsi@Blitzkrieg /m/d/D/CPSA Board> function foo
    bar (all-the-package-names)
end
mqudsi@Blitzkrieg /m/d/D/CPSA Board> function bar
end
mqudsi@Blitzkrieg /m/d/D/CPSA Board> foo
fish: Too much data emitted by command substitution so it was discarded
bar (all-the-package-names)
    ^
in function 'foo'
called on standard input
mqudsi@Blitzkrieg /m/d/D/CPSA Board> fg
/usr/local/share/fish/config.fish (line 267): Too much data emitted by command substitution so it was discarded
builtin fg (__fish_expand_pid_args $argv)
           ^
in function 'fg'
called on standard input
Okay, this works for me with 29c627. In addition, I'm going to increase the read limit to 100MiB. That still shouldn't blow up your little Raspi, and you could plausibly pass it in multiple passes. I don't think fish is used on much smaller machines, so we don't need to optimize for those.
Someone has hit the 10MiB limit (and of course it's the number of JavaScript packages), and we don't handle it fantastically at the moment. And even though you can't pass a variable of that size in one go, it's plausible that someone might do it in multiple passes. See #5267.
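For illustration, a hypothetical multi-pass approach where each command substitution stays under the read limit on its own (the filter-by-letter scheme and the names/chunk variables are made up; all-the-package-names is the CLI from this thread):

set -l names
for letter in a b c d
    # Each pass captures only a slice of the full >10MiB list.
    set -l chunk (all-the-package-names | string match -e -- $letter)
    set names $names $chunk
end
count $names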
all-the-package-names has recently passed the 10MiB boundary, which uncovered some problems: when the IO buffer boundary is reached, the shell can be left unusable. Obviously, $argv is empty here and wouldn't have returned any data, let alone > 10MB.

To reproduce, rubish.fish:

…

followed by the following:

…
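Pieced together from the comments above, the reproduction plausibly looked something like this (the complete line is quoted verbatim earlier in the thread; the body of foo2 is an assumption):

# rubish.fish -- reconstruction; foo2's body is assumed
function foo2
    # Filters the >10MiB package list against the current token.
    all-the-package-names | string match -e -- "$argv"
end
complete -c rubbish -xa '(foo2)'

Sourcing this and pressing Tab after rubbish then drives the (foo2) substitution over the read limit, matching the session shown above.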