metadata cache will remember not-optimizable for 5 minutes after rate-limiting drops a fetch #803

Closed
GoogleCodeExporter opened this Issue Apr 6, 2015 · 5 comments

Comments

GoogleCodeExporter commented Apr 6, 2015

See the testcase for Issue 788.  With rate-limiting on, that test case only 
rewrites about 600 images on the first try on my machine, as it spills over the 
queue-length of 500 and only gets ~100 fetches in before it starts dropping 
them.

These dropped fetches have a 10-second expiry in the HTTP cache; however, I think 
they are getting trapped as not-optimizable for 5 minutes in the metadata cache.  
This means that it will potentially take a long time for an HTML file with lots 
of images to be fully optimized.
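A toy sketch of the trap described above, in Python. This is not mod_pagespeed's actual code; the class and constant names (`MetadataCache`, `FAILURE_TTL_S`, `HTTP_TTL_S`) are illustrative assumptions, but the TTL values match the ones quoted in this issue:

```python
HTTP_TTL_S = 10      # dropped-fetch entry expires from the HTTP cache after 10s
FAILURE_TTL_S = 300  # metadata cache remembers "not optimizable" for 5 minutes

class MetadataCache:
    """Toy model: a dropped fetch is recorded as a failure, and the URL
    is treated as not-optimizable until FAILURE_TTL_S has elapsed."""

    def __init__(self):
        self._failures = {}  # url -> time (seconds) the failure was recorded

    def record_dropped_fetch(self, url, now):
        self._failures[url] = now

    def is_optimizable(self, url, now):
        recorded = self._failures.get(url)
        if recorded is None:
            return True
        return now - recorded >= FAILURE_TTL_S

cache = MetadataCache()
cache.record_dropped_fetch("img.jpg", now=0)
# The HTTP cache would allow a retry after 10 seconds, but the metadata
# cache still reports the image as not optimizable:
print(cache.is_optimizable("img.jpg", now=HTTP_TTL_S + 1))   # False
print(cache.is_optimizable("img.jpg", now=FAILURE_TTL_S))    # True
```

The mismatch between the two TTLs is the whole problem: the 10-second HTTP entry is gone long before the 5-minute metadata entry lets the rewrite be retried.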

It's more complicated than that on a live site with lots of pages as the 
queue-length limit of 500 is not scoped to the request, but is server-scoped or 
process-scoped, so multiple URLs being concurrently rewritten may spill over 
the 500-pending-fetch limit even if they wouldn't individually.  Thus I think 
it'd be better to avoid trapping the not-optimizable bit in the metadata cache.
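The server-scoped nature of the limit can be sketched as follows. This is a hypothetical model, not mod_pagespeed's implementation; only the 500-pending-fetch limit comes from this issue:

```python
class ServerScopedFetchQueue:
    """Toy model of a process-wide pending-fetch queue. The limit is
    shared across all requests being rewritten, not scoped per request."""

    def __init__(self, limit=500):
        self.limit = limit
        self.pending = 0
        self.dropped = 0

    def try_enqueue(self):
        if self.pending >= self.limit:
            self.dropped += 1   # fetch is dropped, not queued
            return False
        self.pending += 1
        return True

queue = ServerScopedFetchQueue(limit=500)
# Two pages rewritten concurrently, each with 300 images: either page
# alone fits under the limit, but together they spill over it.
for _ in range(300):  # page A's image fetches
    queue.try_enqueue()
for _ in range(300):  # page B's image fetches
    queue.try_enqueue()
print(queue.dropped)  # 100
```

Because the 100 dropped fetches would then be trapped as not-optimizable for 5 minutes, concurrent traffic alone can delay full optimization even when no single page is large enough to hit the limit.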

Original issue reported on code.google.com by jmara...@google.com on 10 Oct 2013 at 3:43

GoogleCodeExporter commented Apr 6, 2015

I cannot repro this with the 'worker' MPM, only with the 'prefork' MPM.

This is annoying as it makes it harder to debug.

Original comment by jmara...@google.com on 10 Oct 2013 at 6:47

GoogleCodeExporter commented Apr 6, 2015

The rate-limiting is designed to allow 1 concurrent lookup in prefork, and (I 
think) 3 or 4 in worker.

If I set the # of threads to 3 or 4 in prefork, then prefork works fine.
If I set the # of threads to 1 in worker, then worker gets stuck as well and 
won't optimize all the images until at least 5 minutes have passed.

Thus I can debug in worker.

Original comment by jmara...@google.com on 10 Oct 2013 at 7:37

GoogleCodeExporter commented Apr 6, 2015

With a hint from Maks I can tweak the TTL used to cache failures in the 
metadata cache so that we can optimize all 2000 images in a few tens of seconds.

Now to write tests.
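A rough back-of-the-envelope model of why lowering the failure TTL helps. The numbers (100 fetches admitted per pass) and the function name are illustrative assumptions, not measurements from the fix:

```python
import math

def time_to_optimize_all(num_images, per_pass_budget, failure_ttl_s):
    """Toy model: each rewrite pass optimizes up to per_pass_budget images;
    the rest are remembered as failures and cannot be retried until
    failure_ttl_s elapses, so successive passes are failure_ttl_s apart."""
    passes = math.ceil(num_images / per_pass_budget)
    return (passes - 1) * failure_ttl_s

print(time_to_optimize_all(2000, 100, 300))  # 5700 s with the 5-minute failure TTL
print(time_to_optimize_all(2000, 100, 10))   # 190 s with a much shorter TTL
```

Under these assumptions, the failure TTL dominates total optimization time linearly, which is consistent with the jump from "potentially a long time" to tens of seconds once it is tuned down.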

Original comment by jmara...@google.com on 10 Oct 2013 at 10:50

GoogleCodeExporter commented Apr 6, 2015

Fixed in https://code.google.com/p/modpagespeed/source/detail?r=3543

Original comment by jmara...@google.com on 12 Oct 2013 at 2:45

  • Added labels: Milestone-v30, release-note
GoogleCodeExporter commented Apr 6, 2015

Original comment by jmara...@google.com on 18 Oct 2013 at 1:44

  • Changed state: Fixed