Bug: Failed to CreateCacheEntry #1541
Comments
Did you find any workaround?
We've noticed the same issue.
Our cache step looks pretty much the same as the one above.
Yes. The workaround is to pin the version to v4.1.2; then it works again.
I have the same issue. From #1510, I would guess the problem is related to the new cache service (v2) APIs. Because v4.1.2 will become deprecated in one month, this is only a temporary solution. Could somebody fix it as soon as possible? (TBH I don't know where to report the issue other than here.) Update: Yes, it works again. I am using v4.
I think it works again - we use v3.
I am curious as to why the action didn't fail when the cache upload failed. I get that the upload might have failed, either due to a latent bug in the action or changes in the API (or some combination), but should the action itself not have failed when the upload didn't succeed?
The upload only gives a warning. I also created a support ticket, where they mentioned they rolled back the change.
Yeah, following this issue, I restarted the various jobs I manage and saw them working again. Would it not make sense to have the action fail (error) on an upload error, instead of warn?
On February 7th we began the rollout of the new cache service backend. There was a bug in the implementation that caused a wider range of repositories to start calling the new backend even though they weren't intended to do so just yet. The service responded with 404s. That problem was resolved on the same day. Please do not revert back to previous versions. There are only 4 compatible versions with the new service backend. Please only use one of the below versions:
Every other version is deprecated: #1510
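For reference, a minimal sketch of a cache step pinned to a compatible major tag. The path and key below are illustrative placeholders, not taken from this thread:

```yaml
# Illustrative cache step; path and key are placeholders.
- name: Cache dependencies
  uses: actions/cache@v4        # major tag tracks a service-v2-compatible release
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
```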
In my repository, it reports
so maybe there is another issue with the new service backend.
@Tiphereth-A - This is not an error; cache entries are immutable. If you're trying to create a cache entry with the same key, you will get this warning.
I'm currently also facing this issue. The save action is not able to save a new cache although none exists at that moment. It always tells me that the cache already exists although it doesn't.
Same issue here. |
@yonitou @alexanderb2911 - we need more information to troubleshoot. If you have public repos, share the workflow runs that failed here. Otherwise, please create support tickets and make sure to include links to the workflows that are failing. Also, please share the link to this issue and ask the support engineer to escalate to the Actions team.
@Link- We are also facing the issue in this public workflow, both when trying to update and with our current setup. Our current usage of caching involves:
However, it seems that even using
I would be happy to have misunderstood though... Anyhow, I'll be interested to really understand the new constraint so I can correctly adapt the key and the restore/save logic in a situation of multiple jobs created through a matrix. All jobs should share a cache and there should not be one for each matrix element. Thanks a lot.
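One possible way to get the behaviour described above (a single cache shared by all matrix jobs) is to keep matrix variables out of the key, so every job computes the same key. This is a sketch with illustrative names, not the poster's actual workflow:

```yaml
# Sketch: all matrix jobs compute the same key and therefore share one cache entry.
# Path, key, and lockfile name are illustrative.
jobs:
  build:
    strategy:
      matrix:
        config: [a, b, c]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: ~/.cache/deps
          key: deps-${{ hashFiles('**/lockfile') }}   # no matrix.config in the key
```

Whichever job finishes first saves the entry; the others will hit the (409) warning discussed in this thread.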
@cderv - I looked at your setup; the cache is being saved, it's being saved once, and any attempt to save it after that will get this warning. The warning annotation is most likely what's causing confusion, because the legacy service would not have set a warning annotation for a cache entry key collision. This is not a failure; your workflow jobs are finishing successfully.
The (409) Conflict is normal and expected behaviour. Cache entries are also immutable at the moment, there are no configurable parameters to change that behaviour. I'll work with the team to reverse the warning annotation addition to prevent further confusion after I verify the behaviour with the former versions. Am I misunderstanding the situation? |
I think you got it right.
Yes, there is no failure. One of the jobs does the saving, and the other jobs try to save later but can't, so each of them throws this warning. This clutters the log with a lot of warnings, and this indeed created confusion as it seemed to work ok.
That is good to know. This confirms my initial understanding. I will go on with my initial change.
It would really help to not have this warning thrown by default, especially if everything is going as expected. Maybe annotate in workflow debug mode only? Or do not create an annotation that goes up in the summary? This created the confusion indeed, especially as I did not see any way to prevent each following job from trying to save a cache that has already been saved. Thank you.
I did not remember this, and it was less prominent. It means it would still be a good thing to find a way to avoid this warning, but that is another topic. Thanks for having looked into this!
@Link- We have a case of the 409 error showing up even though there is no existing entry with that key. In fact, we cleared the entire cache and the error still shows up. This is the most recent run: https://github.com/PennyLaneAI/catalyst/actions/runs/13418034619/job/37483671023
@dime10 - let me review, thanks for sharing the run |
We identified the problem, thank you all for chiming in! The fix will be released in a day or two. In short, when a cache entry is created and, for whatever reason, it is not finalized, that slot is locked and will not be reused in future attempts to create a cache entry with the same key. Since the cache entry was never finalized, it will also not be served.
It seems to be exactly the issue :) after changing the name of the cache key I used before (even if my cache was completely empty), the 409 errors disappeared |
Circumvents upstream bug in GitHub caching infrastructure: actions/cache#1541 (comment)
Co-authored-by: David Ittah <dime10@users.noreply.github.com>
My case is a little different. I expect to get this warning because I have several parallel jobs, and the first one to complete will save the cache while the rest will receive the warning. I think this is a valid pattern. In my opinion, this warning should be removed or at least moved to the debug level. I don't care that there is already a cache entry, and moreover, there is nothing I can do about it.
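One way to reduce (though not fully eliminate) these warnings in the parallel-jobs pattern is to split restore and save and skip the save when an exact key match was already restored. A sketch with illustrative paths and a placeholder build step:

```yaml
# Sketch: skip the save step when the restore found an exact key match,
# so jobs that start after the cache exists don't attempt a duplicate save.
- uses: actions/cache/restore@v4
  id: restore
  with:
    path: ~/.cache/deps
    key: deps-${{ hashFiles('**/lockfile') }}
- run: ./build.sh              # placeholder for the actual build
- uses: actions/cache/save@v4
  if: steps.restore.outputs.cache-hit != 'true'
  with:
    path: ~/.cache/deps
    key: deps-${{ hashFiles('**/lockfile') }}
```

Jobs that all start before the first save completes can still race, so the warning can still appear occasionally.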
Is this a backend change? Will you let us know when the fix has been released? |
This is a backend change and I'll update this issue once we deploy the fix. |
We have this issue on Windows Server and macOS runners on many different workflows. |
Similar issue here (macOS).
The issue just started to appear again. It's quite urgent to fix it @Link-
Sorry to post again but we are completely stuck since yesterday and I need to insist on this matter.
You are commenting on an issue which is only about a warning which does not prevent workflows from running. You may have a different issue. The latest version of @actions/cache is
Hi @MikeMcC399, thanks for your quick answer. Our workflows need the cache to perform some critical actions depending on it. If we update the package to
According to #1541 (comment) a backend change is needed to fix the warning. The warning is not fixed by updating @actions/cache. In your situation I would test updating to the latest version to see if it improves your workflow success. |
Thanks @MikeMcC399 |
I am a community member like you, so I can't provide any ETA. I also doubt whether the backend fix will solve your problem. The fix is only to suppress the warning. That's all. You may need to open a new issue for your problem. |
Did you check your caches? You can use
The problem we have discussed so far in this thread is about the warning annotations when a workflow tries to save a cache with a key that is already saved (by a concurrent workflow, for example). See #1541 (comment) and #1541 (comment). And there is a specific one related to the new cache service: #1541 (comment) - maybe you are hitting that one.
They say they will update as soon as the backend is fixed (#1541 (comment)).
Thanks @MikeMcC399 for your help |
Ok. Sorry then for the unnecessary repeat. I wasn't sure. Then we are waiting for the official update. |
@yonitou et al, the backend change is coming this week. We had to iterate on a couple of ideas and put the change in motion. In short, cache keys that are locked due to a failed upload will be cleared periodically. In the meantime, I recommend that you maintain some level of flexibility with your cache keys. As a best practice, cache availability should not block your workflows, even if we aim to provide the same level of stability and availability as workflow artifacts and packages.
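One way to keep keys flexible, as the maintainer suggests above, is to make the save key unique per run and rely on restore-keys prefix matching for reuse. A sketch with illustrative names:

```yaml
# Sketch: a run-unique key avoids colliding with a locked or existing entry,
# while restore-keys fall back to the newest prefix match. Names are illustrative.
- uses: actions/cache@v4
  with:
    path: ~/.cache/deps
    key: deps-${{ hashFiles('**/lockfile') }}-${{ github.run_id }}
    restore-keys: |
      deps-${{ hashFiles('**/lockfile') }}-
      deps-
```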
Greetings.
Since today, Feb 7th, around 1AM CET, we have had a problem with actions/cache. We can't use actions/cache/save anymore and get the following error. This is a simple cache step as described in the docs:
Current workaround: pin the version to v4.1.2. This should solve the issue for now.
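The pinned workaround step could look like the following sketch (path and key are illustrative placeholders; note that maintainers later advised against reverting to pre-v2-compatible versions):

```yaml
# Sketch of the temporary workaround: pin to v4.1.2.
# Path and key are illustrative placeholders.
- name: Cache dependencies
  uses: actions/cache@v4.1.2
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
```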