Configure hex, gradle, pip and npm caches. #4799
Conversation
Force-pushed a2b349c to 40382ed
Force-pushed aba7907 to c39c9d4
restarted the CI and it runs through ...
Thanks @big-r81! Maybe I'll try a few more rebuilds to see how it does. Eventually caches should spread across all the nodes.
That sounds like a good idea.
One still failed in the same place on the s390x worker...
We can check manually by logging into the workers, the ones we have access to, at least. We can see if there are any files in the cache.
Hmm, we don't have anything like that. We could make a separate Jenkins task to do that, like we have the Jenkins Docker image cleaning task.
Force-pushed c39c9d4 to 5852b79
I also added a MIX_HOME dir as a cache option. Let's see if that makes any difference. Otherwise, I think we might want to reach out to the Z Linux (s390x arch) team. It could either be a compatibility issue, or the CI IP range might somehow be throttled.
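As a rough sketch of what the cache-related environment variables could look like in the builder, using the standard env vars each tool honors (the `CACHE_BASE` name is illustrative; the PR points these at directories under /home/jenkins):

```shell
#!/bin/sh
# Sketch: point each package manager's cache at a shared base directory.
# CACHE_BASE is an illustrative name; it defaults to $HOME here so the
# snippet can run anywhere, while the CI nodes would use /home/jenkins.
CACHE_BASE="${CACHE_BASE:-$HOME}"

export MIX_HOME="$CACHE_BASE/.mix"            # Elixir/Mix home (archives, build tools)
export HEX_HOME="$CACHE_BASE/.hex"            # hex package cache
export GRADLE_USER_HOME="$CACHE_BASE/.gradle" # gradle dependency cache
export PIP_CACHE_DIR="$CACHE_BASE/.cache/pip" # pip wheel/HTTP cache
export npm_config_cache="$CACHE_BASE/.npm"    # npm cache directory

# Pre-create them so the tools don't race to do it mid-build.
mkdir -p "$MIX_HOME" "$HEX_HOME" "$GRADLE_USER_HOME" \
    "$PIP_CACHE_DIR" "$npm_config_cache"
```

In a Jenkinsfile these would typically go in an `environment` block so every stage picks them up.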
The caching seems to at least speed up the 'Build Release Tarball' step. I noticed on main it takes about 6 minutes and with caches it's about 5 minutes. Builds overall seem to have gotten 4-6 minutes faster.
Force-pushed 5852b79 to 54598f5
I made an issue in the Z Linux community: linuxone-community-cloud/tickets#58. For now I think we'll disable the s390x worker until that's figured out. The ARM Debian worker can also be disabled for now, based on talking to the maintainer. We still have the FreeBSD ARM workers, so we'll still be testing the architecture there.
Force-pushed 54598f5 to 5b665bc
We saw repeated failures on some CI nodes, possibly from them being throttled by hex.pm. To mitigate it, set up package caches for hex, gradle, pip and npm. So far it works with the Docker builder only and requires that cache directories in /home/jenkins are owned by the `jenkins` user. That has to happen as part of the Jenkins node setup. Nodes controlled from couchdb-infra-cm have been updated to have those directories and permissions; however, the ARM64 one doesn't yet, so we're excluding it temporarily. The s390x node currently seems to have a hard time fetching hex.pm packages specifically. None of the other workers have that issue, so it's getting excluded as well. Linux One community issue: linuxone-community-cloud/tickets#58
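The node setup step described above might look roughly like the following. This is a hedged sketch, not the actual couchdb-infra-cm change; `setup_caches` is an illustrative helper name. It pre-creates the cache directories under the Jenkins home and hands ownership to the `jenkins` user:

```shell
#!/bin/sh
# Sketch of the Jenkins node setup step: pre-create the package cache
# directories and make sure the jenkins user owns them.
setup_caches() {
    base="$1"   # e.g. /home/jenkins
    owner="$2"  # e.g. jenkins

    for d in .hex .mix .gradle .npm .cache/pip; do
        mkdir -p "$base/$d"
    done

    # chown generally needs root; skip quietly if the user doesn't
    # exist (e.g. when trying this out on a dev machine).
    if id "$owner" >/dev/null 2>&1; then
        chown -R "$owner:$owner" "$base/.hex" "$base/.mix" \
            "$base/.gradle" "$base/.npm" "$base/.cache"
    fi
}

# On a CI node this would run as root during provisioning:
# setup_caches /home/jenkins jenkins
```

Running it during provisioning (rather than in the build itself) is what lets the unprivileged `jenkins` build user write to the caches later.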
Force-pushed 5b665bc to 45eed55
Yes, let’s merge it and see how it works.
+1