Builds failing: Unable to connect to ppa.launchpad.net:http (apt-get install failed) (redis-server) #9112
https://answers.launchpad.net/launchpad/+question/663396 ("We will respond to your support ticket shortly.")
I believe I am experiencing a similar issue.

More logs:

We've been seeing this on multiple builds as well. It's quite painful. Example: https://travis-ci.org/apache/arrow/jobs/335088022
By the way, it's not only ppa.launchpad.net, so perhaps it's a connectivity issue on the Travis CI side:

Also:
@pitrou We are also STILL facing this issue. I also added details on the Launchpad ticket @BanzaiMan mentioned above, but there is still no response from the Travis team. 😞
I am also experiencing this same issue. It has been going on for a while:
See: https://travis-ci.org/dbudwin/RoboHome-Web/jobs/335442485#L416
@BanzaiMan any news on this? It's not only
@BanzaiMan This issue should be reopened. We have problems not only with ppa.launchpad.net; see https://travis-ci.org/linux-test-project/ltp as well. I keep restarting builds, but these false positives are bad :-(.
Request for information: For all who are experiencing timeouts during

Feel free to include the full line containing the error you receive (e.g.

Extra credit
So far, I have been able to reproduce this only on GCE (sudo-enabled) builds. I would love to see a link to a job where an

My current theory is that
I've noticed for at least the past week or two, no exact date.
My build isn't failing, but I'm investigating some side-effects of my build and I'm curious if this issue is related.
It goes through phases, sometimes it's every time, other times it happens less frequently.
No. I haven't tried fixing this particular issue.
No. But if I rebuild a build enough times I can almost guarantee it'll happen.
Still trying to determine this. In one project a build is failing and I don't know why. In another project with a similar structure to the first, the build passes even with this issue. I think it's likely a problem with my build, but I don't think this issue is helping.
I have this on two different projects (both
Yes, Feb. 1st around 22:53 CET
Not sure how it works, but it happens during the "install mono" part
After the initial fail, it happens consistently with new commits and restarts
No; not sure how I can add a boot delay, as the failure happens before the contents from my
Yes, it happens 100% consistently on the two repos linked above
Always fails because the required packages are not present on the system.
We've been having this problem sporadically on https://github.com/crawl/crawl for some time now.
gce, trusty, sudo-enabled
It's been happening sporadically for over a month, possibly for the lifetime of the current trusty image. For example, here's a build from early January that illustrates it.
When this timeout happens, the builds error in before_install, but the timeout appears to occur before that.
Possibly as much as 1 in 4 jobs (not all of our jobs will error though, see below).
No to both (though it happens a lot).
Some of our jobs install libsdl2-dev. When the timeout happens, on those jobs, it appears that apt-get will have an old/wrong package list and this sets up the exact conditions for #8317 to happen (as long as the timeout happens, our build getting this error seems deterministic). Our jobs that don't install libsdl2-dev don't typically error (though they may occasionally).
Here's a test build (on my fork) that as a bonus has a sleep command: https://travis-ci.org/rawlins/crawl/jobs/337601162 (On this fork I've temporarily enabled only jobs that install libsdl2-dev while trying to sort this out.)
Didn't help, see the above log. You can see that the timeouts are happening even before the git clone, during what appears to be the initial image setup (if I'm understanding things correctly), so the sleep command isn't going to do much.
So far, doing this to update has been working for me. It hasn't taken more than about ~21 seconds to update in total, though. I figure the 60-second timeout is a worst-case scenario (as are the retries).

Edit: Found one that took ~33 seconds to update. Still under 60, but... halfway there!

Edit again: One just took ~41 seconds. Getting closer and closer to 60.

Edit again: One just timed out at 61 seconds: https://travis-ci.org/intel/cNVMe/jobs/340211665 Going to try doing this instead:
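The creep toward the 60-second limit described above suggests capping each `apt-get update` attempt with a hard timeout and retrying a few times. Here is a minimal sketch; the `retry` helper name and the 60-second/3-attempt values are illustrative assumptions, not the commenter's exact code:

```shell
# Hypothetical helper (not the commenter's exact code): run a command
# up to 3 times, killing each attempt after a 60-second timeout.
retry() {
  for attempt in 1 2 3; do
    if timeout 60 "$@"; then
      return 0
    fi
    echo "attempt $attempt failed or timed out; retrying..." >&2
    sleep 1
  done
  return 1
}

# In .travis.yml this might be used as:
#   before_install:
#     - retry sudo apt-get update -q
```

The hard `timeout` matters because a hung connection to ppa.launchpad.net can otherwise stall the job until Travis's own no-output timeout kills it.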
hostname: 7e67749c-b620-47f1-80c2-9eda5c3875c0@1.production-3-worker-com-c-1-gce
I started yesterday to build with Travis CI :)
1 in 3 jobs
Just restart the job
Randomly after some build
The job continues after the timeout until
For example
Not sure, but at least several weeks ago.
It's varying. Sometimes 1 in 50, sometimes 1 in 2...
No :-(
No.
Not sure, since we don't notice if it doesn't cause the job to fail. But quite often the job fails indeed :-(
I can confirm that something like this seems to be a workaround if added as the first command in before_install (haven't tweaked the timeout setting, but I have just a few examples so far):
For example, here's a successful build (well, it's not done yet, but it got past the crucial point in our apt script, so it should be fine) that has the ppa.launchpad.net errors: https://travis-ci.org/rawlins/crawl/jobs/340632159
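The before_install workaround referred to above can be sketched roughly like this. The file name `99timeout.conf` and the 120-second value are assumptions; `Acquire::http::Timeout` and `Acquire::Retries` are real apt configuration options:

```shell
# Sketch of a before_install workaround (file name and values are
# illustrative): raise apt's HTTP timeout and allow automatic retries
# before any index download happens.
cat > 99timeout.conf <<'EOF'
Acquire::http::Timeout "120";
Acquire::Retries "3";
EOF

# On Travis this file would be installed before updating:
#   sudo cp 99timeout.conf /etc/apt/apt.conf.d/99timeout
#   sudo apt-get update -q
```

Dropping the file into /etc/apt/apt.conf.d/ means the settings also apply to any later `apt-get install` the build runs.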
Note that the log I linked to retries the apt-get command three times, and all three of them failed.
@JasonGross Probably this incident: https://www.traviscistatus.com/incidents/zmw37lfzk3p3
No, it's not. Even with sudo not required, it has already happened to me multiple times today.
Broken for a long time (at least 9 months), no progress, and the ticket is closed. With an approach like this, we should probably switch to another service :(
This is suggested by Travis travis-ci/travis-ci#9112 (comment) git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@65084 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
Even today, I still have this issue with sudo apt-get update.
@mteguhfh Where do you see this?
In elementary OS Juno, I want to update but get an error in AppCenter too.
@mteguhfh Sorry, what does that mean? Do you have a build log URL that shows this problem?
Well, this issue is still basically everywhere...
Please include the build log URL. Thanks. |
@tbarbette We haven't had this problem since instituting one of the fixes earlier in this thread, and definitely not since the issue was closed, except for occasional problems that are not on the Travis side. (Our builds originally had ppa.launchpad.net problems quite regularly.) So you're probably going to have to give them a bit more to work with...
I'll do so when I catch it again. I relaunch the test manually when it happens, so I'm not sure where to get the log now.
Hello, all! We recently shipped a change to the Linux build execution infrastructure that makes use of a region-local HTTP cache for APT. The use of this cache is dynamic based on availability at the beginning of a job, and is currently only for the APT |
The new APT proxy should be transparent and used on every build (depending on availability). If you are still seeing this issue, could you post relevant build links/URLs? We would be happy to have a look. Thanks! (/cc @pevik from this comment on travis-ci/travis-build#1606)
These errors lead to an out-of-date package list being used, which then causes the build to error relating to the libsdl2-dev package. The timeout may need further tweaking, and hopefully there'll be a point where travis will fix this, but this basically works for me in my fork, so I'm going to try adding it to master. See ongoing discussion in the following issue: travis-ci/travis-ci#9112 (comment) (cherry picked from commit 89cf10f)
We've hit this issue for a while in a private repo that used sudo-enabled infrastructure. As of today, I've switched it to the non-sudo approach and the build is working again.
It has been fixed on our side for quite some time now. I guess it took some days to become effective everywhere.
@tbarbette - It was still failing today on fresh commits, and this is why I mentioned it above.
@bsipocz Could you email us at support@travis-ci.com with details? It would be good to investigate if the issue still persists.
We just started having some networking issues related to apt today. Here is an example build: It isn't happening on every build, but has been happening on about half for the last few hours. I did make a change to our Travis configuration today, but it was to use pipenv, which is a later step, so it seems unrelated. cc @jamwalla
Hi. Sorry for resurrecting this thread, but this issue frequently happens to me with an apt cache connection timeout via the apt addon:
Same here. This issue started appearing again for our StackStorm/st2 builds on Friday. It happens very often for most of our builds, and it's almost impossible to get builds to pass because of it. Here is an example of such a build: https://travis-ci.org/StackStorm/st2/jobs/574431900 (I already contacted support@, but haven't heard back yet). To try to work around this issue, I moved away from the apt addon back to manually running apt-get commands with an increased timeout: StackStorm/st2#4772 The workaround seems to be working, but it's far from ideal since apt-get update now takes a long time.

EDIT: It appears this only affects Precise-based builds, not Xenial-based ones.
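The manual-apt-get workaround described above might look roughly like this. It is a sketch: the `apt_get_patient` helper name and the specific timeout values are made up, though the `-o Acquire::...` switches are standard apt configuration options:

```shell
# Hypothetical wrapper (name and values are illustrative): run apt-get
# with a longer HTTP timeout and automatic retries, instead of relying
# on the apt addon's defaults.
apt_get_patient() {
  sudo apt-get \
    -o Acquire::http::Timeout=120 \
    -o Acquire::Retries=3 \
    "$@"
}

# Usage, replacing the apt addon entries in .travis.yml:
#   apt_get_patient update
#   apt_get_patient install -y libsdl2-dev
```

The trade-off the commenter mentions applies here too: a longer timeout means a failing mirror holds the build up longer before apt gives up.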
We have been facing this issue for almost 2 weeks on multiple builds. I sent a mail to support@travis-ci last week, but there is still no reply. I saw some issues here of a similar type, but I'm not sure if they are related.
We are using Travis Premium.