If a user has more than 20 images in their GCR registry, then this rate limit can be triggered. Is there any way to back off, or have preconfigured rate limit values for different image registries so that users do not have to manually configure this setting?
```
--registry-rps 200     maximum registry requests per second per host
--registry-burst 125   maximum number of warmer connections to remote and memcache
```
We could tune them down for everyone by giving them more conservative defaults. As a back-of-the-envelope calculation: fetching image metadata from scratch needs (distinct images) × (average tags per image) requests. Typical numbers would be 100 and 100 (with much more variation in the latter), so about 10,000 requests. At 20 rps it would take roughly 8 minutes to fill the DB. That seems acceptable.
... by which I mean, acceptable if you are on GCP and can't make it go faster :) I think it'd be better to tune it down just for GCP, and to do that, we'll have to alter generated config or something else.
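To illustrate what "tune it down just for GCP" could look like, here is a minimal sketch of per-host defaults with a global fallback. The names (`hostRPS`, `rpsFor`) and the GCR value are hypothetical, not flux's actual configuration; only the 200 rps fallback comes from the flag help above.

```go
package main

import "fmt"

// hostRPS holds hypothetical per-registry RPS defaults; the gcr.io value
// is an example of a conservative setting, not an official recommendation.
var hostRPS = map[string]float64{
	"gcr.io": 20,
}

// fallbackRPS mirrors the current --registry-rps default.
const fallbackRPS = 200

// rpsFor returns the requests-per-second budget for a registry host,
// falling back to the global default for hosts without a tuned entry.
func rpsFor(host string) float64 {
	if rps, ok := hostRPS[host]; ok {
		return rps
	}
	return fallbackRPS
}

func main() {
	fmt.Println(rpsFor("gcr.io"))  // conservative, host-specific value
	fmt.Println(rpsFor("quay.io")) // global default
}
```

The same shape would work whether the values come from generated config or from built-in defaults keyed on the registry hostname.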
> Is there any way to back off,
We probably do get specific status codes when throttled, so this may be a possibility depending on how well (or if at all) the docker distribution lib exposes those.
Out of interest, what actually happens when the rate limit is reached? Shouldn't we still get a success at roughly the rate limit? Or to put it another way, what does rate limiting at the flux end actually achieve?
Giving the argument `--registry-cache-expiry` a higher value will also cut down on requests, since it keeps records around longer. If you don't need to be sensitive to tags being updated, you could set this to 24h or more.
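Putting the suggestions together, an invocation might look like the following. The flag names are from the help text above; the values are examples of conservative settings, not recommended defaults.

```shell
# Illustrative fluxd invocation for a GCR-hosted registry:
# throttle requests well below GCR's limit and keep cached
# metadata for a day to reduce request volume.
fluxd \
  --registry-rps 20 \
  --registry-burst 10 \
  --registry-cache-expiry 24h
```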