
cache web assets between CI runs #2089

Merged
merged 2 commits into mishoo:master from travis-cache on Jun 14, 2017

Conversation

@alexlamsl (Collaborator, Author)

@kzc so after the first run, I restarted all the jobs in this PR so the files are cached locally:

https://travis-ci.org/mishoo/UglifyJS2/builds/242279742

@kzc (Contributor) commented Jun 13, 2017

There doesn't appear to be any rhyme or reason as to what's going on in Travis CI. The job timings ranged from 28 to 37 minutes. I think there's a fixed CPU quota across all jobs.

The caching appears to help somewhat. Should that fail in the future we could either remove one of the node versions or only selectively run jetstream on certain node versions.

@alexlamsl (Collaborator, Author)

@kzc indeed the timing still varies a lot. I think my action plan is to restart this PR's CI in between my $day_job, then observe if there are any time-outs.

Even if this PR only ends up making the runtime consistent enough that it doesn't require manual intervention as often, that would be a win.

@alexlamsl (Collaborator, Author)

Speaking of picking Node.js versions - do you know of any usage stats on active deployments?

I've got a hunch that Node.js 0.10 may still be popular, but 0.12 would be rather exotic. It would also be interesting to see whether the LTS labelling has any effect on adoption rates.

@kzc (Contributor) commented Jun 13, 2017

Speaking of picking Node.js versions - do you know of any usage stats on active deployments?

google to the rescue: https://nodesource.com/node-by-numbers

@alexlamsl (Collaborator, Author)

So Node.js 5 has more downloads than 0.12, and I don't recall trying that out in the past 😅

@kzc (Contributor) commented Jun 13, 2017

Even 0.12 had 20K downloads/day as of December. There's a long tail of use. If we can support ancient versions without too much trouble then it's worth doing. Portability all comes down to the I/O functions - most of Uglify is just in-memory compute that will work anywhere.

@alexlamsl (Collaborator, Author)

As long as I don't get a red notice from Travis, I don't mind keeping all the versions TBH. Making things work on as many platforms as possible, especially from an identical codebase without ifdefs etc., has been a hobby/psychological condition of mine.

@alexlamsl (Collaborator, Author) commented Jun 13, 2017

The extra logging may be driving the job towards the global time limit:
https://travis-ci.org/mishoo/UglifyJS2/jobs/242279744#L3183

Experiment to verify this claim is underway:
https://travis-ci.org/mishoo/UglifyJS2/builds/242368364

Edit: nah, still times out. So even if the average is lower it's not worth the information loss.

@kzc (Contributor) commented Jun 13, 2017

Edit: nah, still times out. So even if the average is lower it's not worth the information loss.

Agreed.

Looks like there's a global CPU upper limit for Travis jobs that also depends on machine load at the time, so Travis tests will still fail from time to time.

We could skip jetstream on node 0.12. That's probably enough.
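
One way to do that (a hypothetical sketch only, not the actual change made for this PR) is a version gate in the CI driver that launches test/jetstream.js everywhere except the slow 0.12 job:

```js
// Hypothetical sketch - skip the heavy jetstream benchmark on Node.js 0.12,
// where the Travis CPU budget is tightest, and run it on all other versions.
var spawnSync = require("child_process").spawnSync;

if (/^v0\.12\./.test(process.version)) {
    console.log("Skipping jetstream on " + process.version);
} else {
    var result = spawnSync(process.execPath, [ "test/jetstream.js" ], {
        stdio: "inherit"
    });
    process.exit(result.status);
}
```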

@alexlamsl (Collaborator, Author)

Wow, 15m34s is fast:
https://travis-ci.org/mishoo/UglifyJS2/jobs/242476675

@kzc (Contributor) commented Jun 13, 2017

With jetstream not being run for 0.12, do you think the web asset caching makes a difference?

@alexlamsl (Collaborator, Author) commented Jun 13, 2017

With jetstream not being run for 0.12, do you think the web asset caching makes a difference?

Haven't made up my mind yet - been busy restarting the jobs for more data points.

Ignoring performance for a second, though, the cache should in theory make the jobs more reliable - we've had CDNs going down before, and I ended up having to update the URL list inside test/benchmark.js to work around it. With this cache we could still fail when running locally, but never on Travis CI.

@alexlamsl (Collaborator, Author)

test/fetch.js does solve the intermittent socket time-out issues that test/benchmark.js & test/jetstream.js had before this PR. And since the cached assets are actually static (they are test inputs), we should be safe relying on this new behaviour.
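
A cache-first helper along these lines might look like the sketch below; the directory name and details are illustrative assumptions rather than the actual contents of test/fetch.js, and the cache directory is presumed to be one that Travis persists between runs:

```js
// Illustrative sketch of a cache-first fetch - not the real test/fetch.js.
// Handles plain http:// URLs only in this sketch.
var fs = require("fs");
var http = require("http");
var path = require("path");

// Assumed download directory, persisted between CI runs by Travis caching.
var CACHE_DIR = "tmp";

function fetch(url, callback) {
    var local = path.join(CACHE_DIR, encodeURIComponent(url));
    if (fs.existsSync(local)) {
        // Cache hit: no network access, so a flaky CDN cannot fail the job.
        return callback(null, fs.createReadStream(local));
    }
    // Cache miss: download once, store the asset, then serve it from disk.
    http.get(url, function(res) {
        if (res.statusCode !== 200) {
            return callback(new Error("HTTP " + res.statusCode + " for " + url));
        }
        res.pipe(fs.createWriteStream(local)).on("close", function() {
            callback(null, fs.createReadStream(local));
        });
    }).on("error", callback);
}
```

With a helper like this, the benchmark scripts would only hit the network on the first run against a freshly provisioned cache.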

@alexlamsl alexlamsl merged commit 41beae4 into mishoo:master Jun 14, 2017
@alexlamsl alexlamsl deleted the travis-cache branch June 14, 2017 03:53
@kzc (Contributor) commented Jun 16, 2017

These various workarounds appear to have worked. Travis builds now take around 30 minutes.
