
Failing to read from IO::Pipe fast enough, apparently loses data #6159

Open
p6rt opened this issue Mar 20, 2017 · 3 comments
Labels
IO

Comments

@p6rt commented Mar 20, 2017

Migrated from rt.perl.org#131026 (status was 'open')

Searchable as RT131026$

@p6rt commented Mar 20, 2017

From @zoffixznet

  zoffix@VirtualBox:~$ perl6 -e '$ = shell(:out, "yes | head -n 100000").out.lines'
  head: write error: Connection reset by peer
  head: write error
  zoffix@VirtualBox:~$ perl6 -e '$ = shell(:out, "yes | head -n 100000").out.IO::Handle::lines'
  zoffix@VirtualBox:~$ perl6 -e '$ = shell(:out, "yes | head -n 100000").out.IO::Handle::lines'
  head: write error: Connection reset by peer
  head: write error

This is Rakudo version 2017.02-186-g9da6de4 built on MoarVM version 2017.02-18-g5f9d698
implementing Perl 6.c.

Note that the same error doesn't happen with head -n 60000, suggesting it's some sort of buffer sized to 65536. Adjusting RAKUDO_DEFAULT_READ_ELEMS doesn't solve it.

Unsure if this is meant to be this way or not. I'd expect no write errors to happen or for Perl 6 to complain about it. If this is normal, then at least it should be documented as a caveat.

@p6rt commented Mar 20, 2017

From @geekosaur

Note that 65535 is the POSIX-specified minimum largest write size that is
guaranteed to be atomic (_PIPE_BUF), and derives from the usual size of
kernel-side pipe buffers (Linux's are actually larger, but sizes larger
than the POSIX minimum _PIPE_BUF are likely to incur portability issues, so
libuv may stick to the POSIX minimum). Reads and writes larger than
_PIPE_BUF may not interact well with non-blocking I/O. Meaning, this may be
a libuv-level bug, or moarvm using libuv incorrectly.

("Connection reset by peer" is an odd error for this, which is part of why
I suspect libuv: it feels like it picked a fake error to return when a
non-blocking read or write did something it didn't expect.)


@p6rt commented Mar 20, 2017

The RT System itself - Status changed from 'new' to 'open'

@p6rt p6rt added the IO label Jan 5, 2020