Work around 10GB limit by changing caching backend? #126
I've opened a PR (#154) that adds an option to use BuildJet, which provides up to 20GB/repo/week and, in my testing, is much faster when using self-hosted runners. YMMV, but this was the lowest-lift way to get things working for me while keeping the action almost entirely drop-in.
I think rust-cache could work around this by compressing the cache with a stronger setting before handing it to actions/cache, though that would also require manually decompressing it on restore. actions/cache uses zstd at its default level; simply using level 22 (which requires zstd's `--ultra` flag) would make the archive smaller without affecting decompression time. Switching to xz/lzma would make it smaller still, at the price of much slower compression and decompression than zstd's maximum level, but that can still be worthwhile as long as it's faster than recompiling.
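To make that concrete, here is a minimal sketch of what the pre-compression could look like in a custom action step, assuming the `@actions/cache` and `@actions/exec` toolkit packages and GNU tar; the archive name and `target` path are hypothetical:

```typescript
import * as cache from "@actions/cache";
import * as exec from "@actions/exec";

// Hypothetical archive name, for illustration only.
const ARCHIVE = "target.tar.zst";

export async function saveCompressed(key: string): Promise<void> {
  // Tar the target dir and compress at zstd's maximum level
  // (levels above 19 require --ultra). zstd decompression speed is
  // largely independent of the level, so restores stay fast.
  await exec.exec("tar", ["-I", "zstd --ultra -22", "-cf", ARCHIVE, "target"]);
  // actions/cache will re-compress the archive at its default level,
  // which adds little overhead on already-compressed data.
  await cache.saveCache([ARCHIVE], key);
}

export async function restoreCompressed(key: string): Promise<string | undefined> {
  const hit = await cache.restoreCache([ARCHIVE], key);
  if (hit !== undefined) {
    // GNU tar appends -d to the -I program when extracting.
    await exec.exec("tar", ["-I", "zstd", "-xf", ARCHIVE]);
  }
  return hit;
}
```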
We've been running into the 10GB limit in GitHub Actions (we have a few different workflows that each cache large artifacts). Would rust-cache be amenable to pluggable cache storage backends (e.g. S3)? My main motivation is working around the 10GB limit.
Sccache has a few pluggable backends, so its interface could serve as inspiration. There are also a few actions-cache-s3 type actions in the marketplace to draw on.
Open to other ideas as well to help with the 10GB limit!
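For a sense of what pluggable backends might look like, here is a hypothetical TypeScript interface loosely modelled on sccache's storage abstraction; none of these names exist in rust-cache today, so this is a sketch rather than a proposal for the actual API:

```typescript
import * as cache from "@actions/cache";

// Hypothetical storage abstraction; rust-cache currently hard-codes
// actions/cache, so every name here is illustrative.
export interface CacheBackend {
  // Resolves to the matched key on a hit, or undefined on a miss.
  restore(paths: string[], key: string, restoreKeys?: string[]): Promise<string | undefined>;
  save(paths: string[], key: string): Promise<void>;
}

// Default backend delegating to the GitHub-hosted cache (10GB/repo cap).
export class GithubCacheBackend implements CacheBackend {
  restore(paths: string[], key: string, restoreKeys?: string[]) {
    return cache.restoreCache(paths, key, restoreKeys);
  }
  async save(paths: string[], key: string): Promise<void> {
    await cache.saveCache(paths, key);
  }
}

// An S3 backend would implement the same interface with the AWS SDK
// (GetObject/PutObject against a bucket supplied via action inputs).
```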