
Cache miss even though has cache before with exact restore key #1556

Open
DawnNguyenAhiho opened this issue Feb 17, 2025 · 9 comments

@DawnNguyenAhiho

I have already run the cache action on my base branch; it saved a cache with the restore key macOS-ruby-<my Gemfile.lock hash>. But after four days of not using that cache, I ran an action today and, when it reached the cache step, it said cache not found for input keys: with the exact key that was saved before.

I'm using cache v3.

Is this expected behavior of the cache action? How can I extend the retention period of the cache file? I don't want to have to run bundle install again after a few days off.

Thank you
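For context, a cache step that produces keys of the shape described above might look like the following. This is a sketch, not the reporter's actual workflow; the `vendor/bundle` path and the `hashFiles` pattern are assumptions:

```yaml
- name: Cache gems
  uses: actions/cache@v3
  with:
    path: vendor/bundle          # assumed bundle install path
    key: ${{ runner.os }}-ruby-${{ hashFiles('**/Gemfile.lock') }}
    restore-keys: |
      ${{ runner.os }}-ruby-
```

On a macOS runner, `runner.os` expands to `macOS`, yielding keys of the form `macOS-ruby-<hash>` as described above.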

@keithslater

They are transitioning their backend from v1 to v2. This could be related to an issue I saw where sometimes it uses v1 and sometimes it uses v2.

@DawnNguyenAhiho
Author

@keithslater oh, does that mean I can't do anything but accept that sometimes my cache is lost and I have to install everything again?

@fe-ax

fe-ax commented Feb 27, 2025

This is breaking our pipelines too

@AlexanderRichert-NOAA

I've rerun my pipeline any number of times and it's still not working; please fix/revert whatever broke this.

@chohanbin

chohanbin commented Feb 27, 2025

I've been having lots of cache misses with actions/cache@v3 too. For me, switching to actions/cache@v4 fixed the issue for now.

@keithslater

One thing you can try is creating a repository variable called ACTIONS_CACHE_SERVICE_V2 and setting it to true. It sounds like this should force GitHub Actions to always use the v2 backend and not switch back and forth.
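As a sketch of that suggestion: a repository variable is not visible to a workflow on its own, so it would need to be wired into the workflow environment via the `vars` context. The variable name comes from the comment above; the wiring is an assumption:

```yaml
# Top of the workflow file: expose the repository variable to every job/step.
env:
  ACTIONS_CACHE_SERVICE_V2: ${{ vars.ACTIONS_CACHE_SERVICE_V2 }}
```

Alternatively, hard-coding `ACTIONS_CACHE_SERVICE_V2: "true"` under `env:` would have the same effect without the repository variable.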

@fe-ax

fe-ax commented Feb 27, 2025

I feel like this may have something to do with runner types. One of my runners is arm64 and one is amd64; I guess one of them talks to v1 and the other to v2. I write to the cache from both architectures and then read it only from amd64.

I'm excited to try this env variable tomorrow.

For now I'm uploading to the artifact store and downloading it in the next job, then immediately deleting the artifact afterwards. This is reliable.
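A minimal sketch of that artifact-based workaround, assuming a Ruby project with gems vendored into vendor/bundle (job names, paths, and steps are illustrative; the cleanup deletion described above is omitted here):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: bundle install --path vendor/bundle   # assumed install step
      - uses: actions/upload-artifact@v4           # hand off via the artifact store
        with:
          name: bundle
          path: vendor/bundle
  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4         # pick it up in the next job
        with:
          name: bundle
          path: vendor/bundle
```

Unlike the cache service, artifacts are scoped to the workflow run, so this sidesteps the v1/v2 backend question entirely at the cost of extra upload/download time.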

@AlexanderRichert-NOAA

No luck here with ACTIONS_CACHE_SERVICE_V2=true as a repo variable, but thanks @keithslater for the suggestion.

@DawnNguyenAhiho
Author

I'm using cache@v4 now and it works for me. Even though the runs are a week apart, I still get a cache hit. I think you can all upgrade to cache v4.
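The upgrade itself is a one-line change to the `uses:` reference; a sketch, with the path and key pattern assumed to match the key shape from the original report:

```yaml
- uses: actions/cache@v4   # v4 talks to the newer cache service backend
  with:
    path: vendor/bundle
    key: ${{ runner.os }}-ruby-${{ hashFiles('**/Gemfile.lock') }}
    restore-keys: |
      ${{ runner.os }}-ruby-
```

Existing keys keep the same format, so previously saved caches remain addressable by the same `key`/`restore-keys` values after the upgrade.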
