rt-52469: adding a timeout for read_from_stdin
If we wait more than 5 seconds for something from stdin,
the function times out and returns nothing
yanick committed Dec 6, 2009
1 parent f58d1b7 commit b132e1a
Showing 2 changed files with 35 additions and 17 deletions.
38 changes: 21 additions & 17 deletions lib/CGI.pm
@@ -1038,24 +1038,28 @@ sub read_from_stdin {
     my($tempbuf) = '';
     my($bufsiz) = 1024;
     my($res);
-    while ($eoffound == 0) {
-        if ( $MOD_PERL ) {
-            $res = $self->r->read($tempbuf, $bufsiz, 0)
-        }
-        else {
-            $res = read(\*STDIN, $tempbuf, $bufsiz);
-        }
+    while (1) {
+        eval {
+            local $SIG{ALRM} = sub { die "alarm\n" };
+            alarm 5;
+            $res = $MOD_PERL ? $self->r->read($tempbuf, $bufsiz, 0)
+                             : read(\*STDIN, $tempbuf, $bufsiz)
+                             ;
+            alarm 0;
+        };
+        if ( $@ ) {
+            return $res if $@ eq "alarm\n";
+            die $@; # wasn't the timeout, propagate
+        }
-        if ( !defined($res) ) {
-            # TODO: how to do error reporting ?
-            $eoffound = 1;
-            last;
-        }
-        if ( $res == 0 ) {
-            $eoffound = 1;
-            last;
-        }
-        $localbuf .= $tempbuf;
+        if ( !defined($res) or $res == 0 ) {
+            # TODO: how to do error reporting ?
+            $eoffound = 1;
+            last;
+        }
+        $localbuf .= $tempbuf;
     }
     $$buff = $localbuf;
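The eval/alarm idiom the patch relies on can be exercised on its own. This is a minimal sketch (the `read_with_timeout` name and the fake readers are mine for illustration, not part of the commit); note that the `"alarm\n"` string with the trailing newline is what lets the handler's exception be told apart from other errors:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Returns whatever the reader returns, or undef if the reader takes
# longer than $timeout seconds.
sub read_with_timeout {
    my ( $reader, $timeout ) = @_;
    my $result;
    eval {
        local $SIG{ALRM} = sub { die "alarm\n" };  # the "\n" matters: die
                                                   # won't append " at line..."
        alarm $timeout;
        $result = $reader->();
        alarm 0;                    # cancel the pending alarm on success
    };
    if ($@) {
        die $@ unless $@ eq "alarm\n";  # not our timeout: propagate
        return undef;                   # timed out
    }
    return $result;
}

# A reader that finishes instantly, and one that stalls past the timeout
# (4-arg select used as a sub-second-capable sleep that SIGALRM interrupts).
my $fast = read_with_timeout( sub { 42 }, 2 );
my $slow = read_with_timeout( sub { select( undef, undef, undef, 5 ); 42 }, 1 );

print defined $fast ? "fast: $fast\n" : "fast: timed out\n";   # fast: 42
print defined $slow ? "slow: $slow\n" : "slow: timed out\n";   # slow: timed out
```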
14 changes: 14 additions & 0 deletions t/rt-52469.t
@@ -0,0 +1,14 @@
+use strict;
+use warnings;
+
+use Test::More tests => 1; # last test to print
+
+use CGI;
+
+$ENV{REQUEST_METHOD} = 'PUT';
+
+my $cgi = CGI->new;
+
+pass 'new() returned';


1 comment on commit b132e1a

@markstos

Thanks for taking a look at this, Yanick.

This is an area where I would rather follow a precedent than take our own approach. The CGI::Simple logic for reading from STDIN boils down to this:

my $length = $ENV{'CONTENT_LENGTH'} || 0;

# silently discard data ??? better to just close the socket ???
while ( $length > 0 ) {
  last unless _internal_read( $self, $handle, my $buffer );
  $length -= length( $buffer );
}
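That loop stops hanging on a bodyless PUT because a missing or zero Content-Length means the loop body never runs at all. A self-contained sketch of the same strategy (reading from an in-memory handle instead of STDIN, and using plain read() in place of CGI::Simple's `_internal_read`):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Read at most CONTENT_LENGTH bytes. If the header is absent or zero,
# $length starts at 0 and the while loop is skipped entirely, so a
# request with no body can never block.
my $body = "name=yanick&ticket=52469";
open my $handle, '<', \$body or die "open: $!";

local $ENV{CONTENT_LENGTH} = length $body;

my $length  = $ENV{CONTENT_LENGTH} || 0;
my $content = '';
while ( $length > 0 ) {
    my $read = read( $handle, my $buffer, $length );
    last unless $read;        # EOF (0) or error (undef): stop, don't spin
    $content .= $buffer;
    $length  -= $read;
}
print "read ", length $content, " bytes\n";   # read 24 bytes
```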

I also tried to look at the Mojo approach. Its read loop looks like this:

# Request body

my $poll = IO::Poll->new;
$poll->mask(\*STDIN, POLLIN);
while (!$req->is_finished) {
    $poll->poll(0);
    my @readers = $poll->handles(POLLIN);
    last unless @readers;
    my $read = STDIN->sysread(my $buffer, CHUNK_SIZE, 0);
    $req->parse($buffer);
}
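The key move in that loop is the zero-second poll: if no data is pending, the loop exits instead of blocking. The same readiness check can be sketched with core IO::Select (my substitution for IO::Poll here; the commit itself uses alarm instead):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Select;

# A pipe stands in for STDIN: the writer sends a partial body, then closes.
pipe my $reader, my $writer or die "pipe: $!";
print {$writer} "partial body";
close $writer;    # flushes; the reader will then see EOF

my $select  = IO::Select->new($reader);
my $content = '';
while ( $select->can_read(0) ) {       # 0-second timeout: never blocks
    my $read = sysread( $reader, my $buffer, 1024 );
    last unless $read;                 # 0 => EOF, undef => error
    $content .= $buffer;
}
print "got: $content\n";               # got: partial body
```

If the writer had kept the pipe open without sending anything, `can_read(0)` would return an empty list on the next pass and the loop would fall through rather than hang.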

Mojo defines "is_finished()" as being in any one of the following states: "done", "done_with_leftovers" and "error". I did not completely trace through the logic to see how the object gets put into all of these states.

However, I did mock-up a comparable test case for both CGI::Simple and Mojo, and neither of them hang by simply setting the REQUEST_METHOD to "PUT".

RFC 2616, the HTTP 1.1 RFC, includes this: "The presence of a message-body in a request is signaled by the inclusion of a Content-Length or Transfer-Encoding header field in the request's message-headers."

Neither CGI.pm nor CGI::Simple currently pays attention to the Transfer-Encoding header at all. Mojo appears to respect the header, but its approach is different enough that it doesn't appear easy to "port" to CGI.pm.
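The RFC rule quoted above reduces to a two-header check. A hypothetical sketch (the helper name is mine, and HTTP_TRANSFER_ENCODING is my assumption about how a gateway would expose that header to a CGI script; this is not CGI.pm code):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Per RFC 2616 section 4.3: a request has a message-body iff it carries
# a Content-Length or Transfer-Encoding header. A bare PUT has neither,
# so there is nothing to read from stdin.
sub request_has_body {
    my (%env) = @_;
    return 1 if defined $env{CONTENT_LENGTH} && $env{CONTENT_LENGTH} ne '';
    return 1 if defined $env{HTTP_TRANSFER_ENCODING};
    return 0;
}

print request_has_body( REQUEST_METHOD => 'PUT' )
    ? "read body\n" : "no body to read\n";    # no body to read
print request_has_body( REQUEST_METHOD => 'PUT', CONTENT_LENGTH => 24 )
    ? "read body\n" : "no body to read\n";    # read body
```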

So in summary, I have two questions:

  1. Is there a reason the CGI::Simple approach won't work for us here?
  2. CGI.pm does appear to handle "chunked" messages, but it decides to do so not by checking for the Transfer-Encoding header, but by noting the absence of a Content-Length. By contrast, Mojo clearly has several references to "Transfer-Encoding" and "transfer_encoding" in its code base. Should we also be looking at updating our handling of this header and chunked transfers? I would think that if there were a problem with the current implementation, there would be a bug report about it by now...
