Refresh expired as soon as possible without queue #571
Comments
Additionally: when I run `blitz/cache/refresh-expired`, the cached files are only deleted, not regenerated. I can of course create my own command, but I would expect `$forceGenerate` to be true for method "0: Expire the cache, regenerate manually"?
Hi Jonathan. It sounds like
Maybe, I’ll have to look into that a bit closer.
Thank you for your feedback!
I’ve changed the behaviour in 7ef3507 to forcibly generate new cached pages if they are not cleared. You can test this by running
Released in 4.7.0. |
Hi,
I'm looking for the best way to refresh the cache as soon as possible on a high-traffic website.
That brings me to these two refresh methods:
We are currently using method "2: Expire the cache and regenerate in a queue job", but the queue jobs keep stacking up. For example, saving the globals twice spawns two very long jobs, each regenerating all pages. They actually refresh the same pages twice, at the same time, because we run multiple queue listeners.
So I was thinking of switching to method "0: Expire the cache, regenerate manually", but I can't find in the docs which console command the cron job is supposed to run; I'm guessing `blitz/cache/refresh-expired`. But how can we make sure it is always running, without two or more instances running at the same time? I was hoping a 'listen' task could be started, so we could wrap it in a systemd service. An alternative is to start it every minute from cron using flock, but in case of a crash the lock will stay behind, and we want to be sure it is always running.
Thank you for your time!
Best regards,
Jonathan