Fixed bug #69900 Too long timeout on pipes
weltling committed Jul 28, 2015
1 parent e4153e9 commit 20e765b
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions main/streams/plain_wrapper.c
@@ -358,11 +358,11 @@ static size_t php_stdiop_read(php_stream *stream, char *buf, size_t count TSRMLS
 		if (!PeekNamedPipe(ph, NULL, 0, NULL, &avail_read, NULL)) {
 			break;
 		}
-		/* If there's nothing to read, wait in 100ms periods. */
+		/* If there's nothing to read, wait in 10ms periods. */
 		if (0 == avail_read) {
-			usleep(100000);
+			usleep(10);
 		}
-	} while (0 == avail_read && retry++ < 320);
+	} while (0 == avail_read && retry++ < 3200000);

 	/* Reduce the required data amount to what is available, otherwise read()
 	   will block.*/

5 comments on commit 20e765b

@krakjoe
Member

This appears to be causing intermittent failures of proc_open_bug69900.phpt; it's probably because the test is stupid ... can someone take a detailed look at it please, possibly you @weltling?

@weltling
Contributor Author

@krakjoe Which particular run, and where? If it has only started to fail lately, that's likely a sporadic CI issue.

Thanks.

@krakjoe
Member

https://travis-ci.org/php/php-src/jobs/257655138

That was the last failure that I looked at in detail ... it seems to fail pretty regularly, but because I'm me, I can't remember when it started ... all I can say for sure is that it was sometime in the past ... which narrows it down ...

@weltling
Contributor Author

Ahh, Travis; I thought it was some issue with blocking pipes. That option has no effect on Linux.

Yeah, Travis has been quite a bad experience lately, but I don't think any particular test can be blamed. Looking at the failures daily as well, they vary widely. For example, this one:

https://travis-ci.org/php/php-src/jobs/257681364

Many timing-related tests in all areas are affected, of course. But the general issue seems to be the test image being cut short on resources, or some internal Travis instability, so builds or test runs hang in the middle and cause other negative effects. For that particular test, one can of course tweak the timeout. But look at the test: writing and reading the word "hello" through the pipe taking more than 1 millisecond? Come on :)

I would say it's time for someone with a contact at Travis to talk to them, as having CI this unreliable is bad. Or maybe we should think about an alternative, I don't know.

Thanks.

@weltling
Contributor Author

@krakjoe I've now added a special case for Travis to that test in b013368. Let's see whether it improves things. IMO it's a concerning situation: we constantly adapt tests to pass on CI, where its instability sometimes forces implausible timeout values. On the other hand, polluting tests with environment checks is not good either. Many other tests still suffer from the general unreliability, e.g. those that spawn subprocesses, depend on TCP connections, etc. Heh.

Perhaps for timeout-based tests it would even be OK, so we might check whether a centralized solution makes sense. Say there were a function like i_run_under_ci(), available in a test, to reflect that the conditions might be a special case. That, however, would make tests unnecessarily complicated in certain cases.

Thanks.
