
Use explicit include opt for perl calls. #3496

Closed
wants to merge 7 commits into from

@ladar
Contributor

commented Jan 24, 2019

At the top, perl is called with the "-I$srcdir" option, and it works:

https://github.com/curl/curl/blob/master/tests/runtests.pl#L182-L183

But on line 3868, that option is omitted. This caused problems for me, as the symbol-scan.pl script in particular couldn't find its dependencies properly:

https://github.com/curl/curl/blob/master/tests/runtests.pl#L3868

This patch fixes that oversight by making calls to perl sub-shells uniform.

@bagder

bagder approved these changes Jan 27, 2019

@@ -3865,7 +3865,7 @@ sub singletest {
    if($cmdtype eq "perl") {
        # run the command line prepended with "perl"
        $cmdargs ="$cmd";
        $CMDLINE = "perl ";

@bagder

bagder Jan 27, 2019

Member

It should probably even be just

$CMDLINE = $perl;

... since $perl is already set to be a suitable command line to invoke perl with. What do you think?

@ladar

ladar Jan 28, 2019

Author Contributor

I considered that, but figured they were separated for a reason. But it sounds like that isn't the case. I'll update the pull request accordingly.

ladar added some commits Jan 28, 2019

@bagder

Member

commented Jan 28, 2019

For some reason, this change seems to cause some tests to fail reliably among the Travis jobs! Need to investigate...

@ladar

Contributor Author

commented Jan 28, 2019

I was just looking at that myself and wondering if it was the change I made or something else. Should I push the original change again, to see if it still passes properly?

ladar added some commits Jan 28, 2019

@ladar

Contributor Author

commented Jan 28, 2019

I tried kicking off Travis jobs for both variants, but the original variant build job seems to have been cancelled.

@bagder

Member

commented Jan 28, 2019

Yeah, our Travis settings cancel the older builds if you commit again, to reduce the number of useless builds; we typically have several PRs being worked on in parallel, and Travis only hands us a limited number of simultaneous builds.

@ladar

Contributor Author

commented Jan 29, 2019

I feel your pain. I've spent countless hours tweaking my own Travis config so it runs reliably. I had to write logic to retry apt updates/installs, and then cache the deb packages in a standalone stage. Otherwise a hash mismatch, or particularly slow mirror, would slow down my build enough that it would fail. I found the:

addons:
    apt:
        config:
            retries: true

to be useless. I can send you what I've been using, if you're interested.
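A retry wrapper of the kind described above might look roughly like this (a sketch with illustrative names; `retry` is a hand-rolled helper, not a Travis or apt feature):

```shell
# Retry a command up to N times, pausing briefly between attempts.
# "retry" is an illustrative helper, not part of Travis or apt.
retry() {
  local attempts=$1; shift
  local n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      echo "giving up after $n attempts: $*" >&2
      return 1
    fi
    n=$((n + 1))
    sleep 2
  done
}

# Typical use in a Travis install step:
# retry 3 sudo apt-get update
# retry 3 sudo apt-get install -y libssl-dev
```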

And after all the effort to overcome apt problems, I still see builds failing because the gcc-8 + -O3 jobs occasionally time out. Normally that build variation takes ~46 minutes, but if GCE is a little sluggish, the slowdown is enough for the job to hit 50 minutes... and I still haven't figured out a solution to that problem.

I have another Travis project whose sole purpose is checking various cloud-hosted URLs, downloading the files, and verifying the hashes. If the downloads run one at a time, the script times out. If I separate each URL into its own job, I hit the 200 job limit. If I try running all of the jobs in parallel, a handful will fail, with curl reporting an SSL error, which I still haven't been able to diagnose. I wrote the following to avoid the overload problem. It will cause your Travis script to periodically pause and wait for the load to go down before continuing:

  - sleep 5 ; while true; do sleep 1 ; if [[ "`uptime | awk '{ gsub(/,/, ""); printf "%.0f\n", $9; }'`" -lt "8" ]]; then break ; fi ; done

And I added the following to my configs as well. It will prevent a build from timing out for lack of output, and I find the load average information helpful in determining whether a problem is load-related or something else:

  - (for ((i = 0; i < 256; ++i)); do sleep 20 ; printf "%s (%s )\n" "`date`" "`uptime`" ; done) & LOOP_PID=$! ; disown

I'm rambling. I only wanted to say I kicked off:

https://travis-ci.org/curl/curl/builds/485683164

That build is my original patch (and what I've been using locally); the Travis job isn't done yet, but so far it appears to be working.

@jay

Member

commented Jan 29, 2019

That's because the Travis builds slow down so much that devs see weird curl / network / SSL errors which have nothing to do with us.

The problem was the missing space

== Contents of files in the log/ dir after test 1119
=== Start of file curl.log
 perl -I../symbol-scan.pl ./.. ../include/curl >log/stdout1119 2>log/stderr1119
=== End of file curl.log

because

curl/tests/runtests.pl, lines 182 to 183 in 4258dc0:

    # invoke perl like this:
    my $perl="perl -I$srcdir";

so try "$perl "
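The concatenation bug can be seen in isolation (a sketch; the variable names mirror runtests.pl, and $srcdir is assumed to be ".." for illustration):

```shell
# runtests.pl builds the command line as $CMDLINE followed directly by the
# script name. Without a trailing space, the -I path and script name fuse.
perl="perl -I.."             # roughly what runtests.pl sets near the top
cmd="symbol-scan.pl"

CMDLINE="$perl"              # missing trailing space
echo "${CMDLINE}${cmd}"      # the script name fuses into the -I argument

CMDLINE="$perl "             # with the trailing space
echo "${CMDLINE}${cmd}"      # perl -I.. symbol-scan.pl
```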

@ladar

Contributor Author

commented Jan 29, 2019

@jay I was about to say I already tried that:

https://github.com/ladar/curl/commit/3161cb074ead943e2446d4449bef24b2a09d92f7

But I gather you're saying there needs to be a space after the variable name, i.e. "$perl ". I'll commit it when the current build finishes, so it doesn't get cancelled.

As for the other issue, I agree, it wasn't a question of "fault." I was having trouble figuring out why curl didn't wait and retry. I never figured out the issue conclusively, but left off with the theory that it was the OCSP verification step timing out/failing under the high load.

@jay

Member

commented Jan 30, 2019

The CI failures seem arbitrary and unrelated.

@ladar

Contributor Author

commented Jan 30, 2019

I couldn't figure out the cause either. My eyeballs suggested the output matched what was expected. My only theory is that ActivePerl might be doing something weird.

@jay can you trigger an AppVeyor rebuild? I'm curious whether the same errors will repeat. If they do, then there may in fact be an issue, in which case I'd suggest going with the original submission.

@bagder

Member

commented Feb 1, 2019

We're plagued by false positives in the AppVeyor tests. I restarted some of them now, but I consider the failures unrelated.

@ladar

Contributor Author

commented Feb 2, 2019

Looks like all is good in the neighborhood. I assume you're waiting for a merge window? Any idea when it will open? I'm working on a couple of other feature patches that I might try to submit too.

@jay jay closed this in 53400f5 Feb 4, 2019

@jay

Member

commented Feb 4, 2019

Thanks. We have a feature window, but this is a bug fix, so there is no need to wait. If you have any submissions ready, don't wait for a window; let us worry about that. Just submit, and someone will take a look and determine how to proceed. However, if you have features you haven't finished working on yet, I suggest discussing them on the appropriate mailing list so you can get some idea of how they will be received before you spend the time.

@lock lock bot locked as resolved and limited conversation to collaborators May 5, 2019
