See the testcase for Issue 788. With rate-limiting on, that test case rewrites only about 600 images on the first try on my machine: it spills over the queue length of 500, getting only ~100 fetches in flight before it starts dropping them.
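A rough sketch of the dropping behavior described above (the names, and the split between in-flight fetches and the queued remainder, are my assumptions, not mod_pagespeed's actual API; the 500 queue-length limit and ~100 in-flight fetches are from the report):

```python
# Illustrative model of a rate-limited fetcher: roughly IN_FLIGHT fetches
# start immediately, up to MAX_PENDING_FETCHES more are queued, and any
# further fetch requests are dropped outright.
MAX_PENDING_FETCHES = 500  # the queue-length limit from the report
IN_FLIGHT = 100            # roughly how many fetches get in before drops start

def schedule_fetches(num_urls,
                     max_pending=MAX_PENDING_FETCHES,
                     in_flight=IN_FLIGHT):
    """Return (started, queued, dropped) counts for a batch of fetches."""
    started = min(num_urls, in_flight)
    remaining = num_urls - started
    queued = min(remaining, max_pending)
    dropped = remaining - queued
    return started, queued, dropped
```

Under this model, started + queued is about 600 for a large batch, matching the ~600 images rewritten on the first try.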
These dropped fetches have a 10-second expiry in the HTTP cache; however, I think they are getting trapped as not-optimizable for 5 minutes in the metadata cache. This means it can potentially take a long time for an HTML file with many images to be fully optimized.
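The effect of the TTL mismatch can be sketched as follows (function name and structure are illustrative; the 10-second and 5-minute TTLs are from the report):

```python
# A dropped fetch expires from the HTTP cache after 10 seconds, but the
# not-optimizable result recorded in the metadata cache lives for 5 minutes,
# so the image is not re-optimized until the longer of the two TTLs expires.
HTTP_CACHE_TTL_S = 10
METADATA_CACHE_TTL_S = 5 * 60

def earliest_reoptimize_time(drop_time_s):
    """Earliest time a dropped image fetch can be retried and optimized."""
    # The HTTP-cache entry expires quickly, but the metadata cache keeps
    # reporting "not optimizable" until its own TTL runs out.
    return drop_time_s + max(HTTP_CACHE_TTL_S, METADATA_CACHE_TTL_S)
```

So an image dropped at time 0 is effectively untouchable for 300 seconds, not 10.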
It's more complicated than that on a live site with lots of pages: the queue-length limit of 500 is not scoped to the request but is server-scoped or process-scoped, so multiple URLs being rewritten concurrently may spill over the 500-pending-fetch limit even if none of them would individually. Thus I think it would be better to avoid trapping the not-optimizable bit in the metadata cache.
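The shared-queue point can be illustrated with a small sketch (the helper and example numbers are mine; only the 500 limit comes from the report):

```python
# Because the pending-fetch queue is shared by the whole server process,
# concurrent page rewrites draw from one budget. Each page may fit under
# the limit on its own, yet together they overflow it and fetches get
# dropped (and, per the above, trapped as not-optimizable for 5 minutes).
def dropped_fetches(per_request_fetches, limit=500):
    """Fetches dropped when concurrent requests share one pending queue."""
    total = sum(per_request_fetches)
    return max(0, total - limit)
```

For example, two concurrent pages needing 300 fetches each fit individually (300 <= 500) but together drop 100 fetches.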
Original issue reported on code.google.com by jmara...@google.com on 10 Oct 2013 at 3:43