Hey, I'm seeing a lot of rate limiting errors at the storage check (S3 backend). The ".sccache_check" file used for that check is on the hot path. What do you think about making it configurable and exposing it as an environment variable? Each actor could then have its own file for the read/write access check. That would help mitigate the issue. WDYT? (A rough sketch of what I mean is below.)
I'm also surprised to see this from AWS. We have dozens of worker nodes and thousands of builds per day, but that's not a crazy number. Yet I frequently see this error in the logs.
I see that others have reported the same or similar issues: #1485, #1485 (comment)
And there are PRs to mitigate it: #1557
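For illustration, a minimal sketch of what the configuration could look like, assuming a hypothetical `SCCACHE_S3_CHECK_KEY` variable (the name is mine, not an existing sccache option):

```rust
use std::env;

// Resolve the S3 key used for the read/write capability check.
// `SCCACHE_S3_CHECK_KEY` is a hypothetical variable name used for
// illustration only; it is not an existing sccache option.
fn check_object_key() -> String {
    env::var("SCCACHE_S3_CHECK_KEY").unwrap_or_else(|_| ".sccache_check".to_string())
}

fn main() {
    println!("check key: {}", check_object_key());
}
```

Each worker could then export something like `SCCACHE_S3_CHECK_KEY=.sccache_check.$HOSTNAME`, so the check traffic is spread across one key per node instead of a single global key.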
S3 has rate limits: many reads and writes to a single key can hit rate limits long before the underlying partition is throttled. Even 20-30 PUTs to a single key within a very short period can exhaust it.
On versioned buckets the threshold is lower, especially if many millions of versions exist for that key.
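If per-actor configuration feels too manual, another option is to derive the suffix automatically so that concurrent processes never share the hot key. A rough sketch, where the PID plus startup nanoseconds stand in for a random suffix (an illustrative assumption, not how sccache builds the key today):

```rust
use std::process;
use std::time::{SystemTime, UNIX_EPOCH};

// Derive a per-process check key so concurrent workers probe
// different S3 objects instead of hammering one hot key.
fn per_process_check_key(base: &str) -> String {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.subsec_nanos())
        .unwrap_or(0);
    format!("{}.{}-{}", base, process::id(), nanos)
}

fn main() {
    // e.g. ".sccache_check.12345-987654321"
    println!("{}", per_process_check_key(".sccache_check"));
}
```

The trade-off is that probe objects accumulate under ever-new keys; a bucket lifecycle rule expiring the check prefix, or deleting the object right after the check, would keep that bounded.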
The code: sccache/src/cache/cache.rs, lines 481 to 544 at 69be532