I have an example of doing a cache (not of the layer, but of the spack install, though the logic would be similar) here: https://github.com/sciworks/spack-updater/blob/db34cdd94dcc1fdb29626e78f85f5fa62156faad/build/action.yaml#L53-L64. E.g., given that spack cares about micro-architecture, we associate the cache key with it. In this case it saves the build about 22 minutes by not needing to build the bootstrap from source (and we don't hit the spack binary cache, because spack only supports Ubuntu 20.04 for it, afaik).
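The idea above can be approximated with `actions/cache` keyed on the micro-architecture. A hedged sketch only: the cached paths, the `spack arch` step, and the key format are assumptions for illustration, not copied from the linked action:

```yaml
# Sketch: cache spack's install/bootstrap trees, keyed on micro-architecture.
# Paths and key format are placeholders, not the linked workflow's.
- name: Detect micro-architecture
  run: echo "SPACK_ARCH=$(spack arch)" >> "$GITHUB_ENV"

- name: Cache spack install tree
  uses: actions/cache@v4
  with:
    path: |
      /opt/spack/opt/spack
      ~/.spack/bootstrap
    key: spack-${{ env.SPACK_ARCH }}-${{ hashFiles('spack.yaml') }}
    restore-keys: |
      spack-${{ env.SPACK_ARCH }}-
```

The `restore-keys` fallback lets a build on the same micro-architecture reuse a stale cache even when the environment file changes, which is usually still a large win over a cold build.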
For container builds, given that you have all of your spack logic in one layer, it's going to be very hard to get a cache hit (especially for an environment where you do `spack add` and then one `spack install`), but you might have better luck outside of an environment with separate `spack install <package>` commands, one per layer. The article you linked would make sense to work with BuildKit, and (my lazy self) I sometimes just add a step that pulls the container URI I'm building, to retrieve any possibly matching layers before attempting the new build.
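To illustrate the one-package-per-layer approach: splitting the installs across separate `RUN` instructions means that changing only a later package leaves the earlier layers eligible for a cache hit. A minimal sketch, where the base image and package names are placeholders, not taken from the thread:

```dockerfile
# Sketch: one spack install per RUN, so an unchanged prefix of
# packages can be served from the Docker layer cache.
# Base image and packages are illustrative placeholders.
FROM spack/ubuntu-jammy:latest

# Each RUN below is its own layer; editing only the last line
# leaves the earlier layers cacheable.
RUN spack install zlib
RUN spack install openssl
RUN spack install cmake
```

The trade-off is a larger image (more layers, no shared environment solve), so this mainly pays off in CI where rebuild time matters more than image size.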
Using a build cache means building bespoke containers for each repo is straightforward, so for the moment this approach isn't required. Closing for now.
Layer-level caching could make it feasible to in-line the container build into the main Build CI.
https://evilmartians.com/chronicles/build-images-on-github-actions-with-docker-layer-caching
https://mmeendez8.github.io/2021/04/23/cache-docker.html