Fixed `Vend::Server` so it'll read more than a single byte at a time #2


ghost commented Jul 16, 2011

At the time the offending sysread executes, $r will be equal to 1 (from the immediately preceding select returning how many FDs are ready). This doesn't read all input in one big gulp, as intended; it makes IC read one. byte. at. a. time. This gets very noticeable when one tries to upload a 60MB file. :P
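The bug described above can be reproduced outside Interchange. A minimal sketch in Python (not the actual Perl code; Python's `select.select` returns lists of ready handles where Perl's four-argument `select` returns a count, but the semantics are the same: the result counts *filehandles*, not bytes). The socket pair here is a stand-in for Interchange's client socket:

```python
import select
import socket

# A connected socket pair stands in for Interchange's client connection.
a, b = socket.socketpair()
b.sendall(b"x" * 4096)            # plenty of data waiting to be read

readable, _, _ = select.select([a], [], [], 1)
ready = len(readable)             # number of ready filehandles -- here 1

# Using that count as the read length reproduces the bug:
chunk = a.recv(ready)             # reads exactly 1 byte
print(len(chunk))                 # prints 1, not 4096
```

With one ready filehandle, the "length" passed to the read is always 1, so a 60 MB upload takes tens of millions of syscalls.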


ghost commented Sep 27, 2011

Do people actually look at these? :)


ghost commented Jan 25, 2012

ddavenport, sorry, sometimes we see these, but there is a problem with email notifications so we hadn't seen this. The main place for discussing Interchange issues is the mailing list.

It's an interesting problem you found. It looks like Kevin Walsh accidentally broke it while trying to increase the number of bytes it reads! See:


I'm guessing we should set it to something fairly large, like 64 KB or maybe 1 MB, since there seems to be no harm in that and it comes closer to slurping everything at once.


(Oops ... edited to fix wrong commit hash before.)


ghost commented Jan 25, 2012

Ok, fixed in main Interchange.


Thank you very much!

jonjensen added a commit to jonjensen/interchange that referenced this pull request on Oct 31, 2013

jonjensen: Increase size of socket read for much better performance
ddavenport found a serious performance regression in Vend::Server, such
that each socket read was only for 1 byte, making large uploads
painfully slow:


I found the regression was introduced in this commit:

commit 2a69743
Author: Kevin Walsh <>
Date:   Mon Feb 4 21:42:18 2008 +0000

        * Changes in the _read() subroutine:

        -- select() returns -1 upon error, whereas sysread() returns undef,
           so we need to allow for both.

        -- Catch EAGAIN as well as EINTR as soft errors to retry on.

        -- Read the entire available amount of data in one hit instead of
           forcing the data to be read in 512-byte chunks.

Kevin clearly intended to read as much data as was waiting, but select()
returns the number of waiting filehandles, not the number of waiting bytes.

Rather than reverting to the original 512 bytes per read, or
ddavenport's suggested 1024, I used 1 MiB, which is still arbitrary but
large enough to get most normal HTTP requests in a few passes, and which
handles large uploads efficiently.

Thanks, ddavenport!

Original patch:

commit 2a6308d
Author: ddavenport <>
Date:   Fri Jul 15 20:50:54 2011 -0700

    Fixed Vend::Server so it'll read more than a single byte at a time
    (which was causing large file uploads to fail).

diff --git a/lib/Vend/ b/lib/Vend/
index 5baf109..4a82ce8 100644
--- a/lib/Vend/
+++ b/lib/Vend/
@@ -795,7 +795,7 @@ sub _read {

     do {
 	if (($r = select($rin, undef, undef, $Global::SocketReadTimeout || 1)) > 0) {
-	    $r = sysread($fh, $$in, $r, length($$in));
+	    $r = sysread($fh, $$in, 1024, length($$in));
 	}
     } while ((!defined($r) || $r == -1) && ($!{eintr} || $!{eagain}));
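The intended behavior of the fixed loop — poll with select, then drain whatever is waiting in large fixed-size chunks rather than one byte at a time — can be sketched in Python (a hedged illustration of the technique, not Interchange's actual Perl; `READ_SIZE` mirrors the 1 MiB chosen in the commit above, and `read_all_available` is a hypothetical helper name):

```python
import select
import socket

READ_SIZE = 1 << 20  # 1 MiB per read, as in the fix above


def read_all_available(sock, timeout=1.0):
    """Drain whatever is currently readable on sock, READ_SIZE bytes per call."""
    buf = bytearray()
    while select.select([sock], [], [], timeout)[0]:
        chunk = sock.recv(READ_SIZE)
        if not chunk:        # peer closed the connection
            break
        buf += chunk
        timeout = 0          # after the first read, just drain what's queued
    return bytes(buf)
```

A large fixed chunk size keeps the syscall count low without trusting select()'s return value, which only says *how many* handles are ready, never how much data they hold.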