eliminate duplicates in cache invalidation #125
Comments
Would it make sense to throttle the invalidation requests?
We could store the queued requests hashed by URL and headers, as I think multiple invalidation requests to the same URL but with different headers should still be possible.
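A minimal sketch of that idea, deduplicating by everything that makes a request distinct; the tuple shape and key derivation here are illustrative assumptions, not the bundle's actual API:

```python
import hashlib
import json

def request_key(method, url, headers):
    """Derive a stable key from everything that makes a request distinct."""
    # Sort headers so equivalent requests hash identically regardless of order.
    canonical = json.dumps([method, url, sorted(headers.items())])
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def deduplicate(queued_requests):
    """Keep one copy of each distinct request, preserving queue order."""
    seen = {}
    for method, url, headers in queued_requests:
        seen.setdefault(request_key(method, url, headers), (method, url, headers))
    return list(seen.values())

queue = [
    ("PURGE", "/articles/1", {"Host": "example.com"}),
    ("PURGE", "/articles/1", {"Host": "example.com"}),  # exact duplicate: dropped
    ("BAN", "/articles/1", {"X-Host": "example.*"}),    # same URL, different headers: kept
]
```

Note that the third request survives even though it targets the same URL, which is the point of hashing headers in as well.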
Makes sense to me, but it wouldn’t be strictly necessary if we remove duplicates (hopefully).
I like the idea of hashing by everything that makes the requests distinct; it's nice if the client can be lazy about this. And yes, the URL alone would not be enough, most notably for BAN requests. I will have a look.

About throttling, I am not so sure. If the curl multi handler allows setting a maximum number of parallel requests, we could use that (but this would probably be configured by passing in an accordingly configured Guzzle client, no?). Anything beyond that on our side seems like too much to me. In a really heavy setup, one could use a message queue to queue requests and then have workers that process that queue, or simply invalidate / ban and push refresh requests into a message queue. Both could be solved with special proxy clients, but that seems out of scope for this bundle, as it would depend on the message queue used, and with such a big setup you want to fine-tune anyway.
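To illustrate the throttling idea: capping parallelism is essentially a bounded worker pool over the queued requests. The `send` callable and pool size below are hypothetical stand-ins for whatever configured HTTP client does the actual work:

```python
from concurrent.futures import ThreadPoolExecutor

def flush(requests, send, max_parallel=10):
    """Send queued invalidation requests with at most max_parallel in flight.

    Results come back in queue order because pool.map preserves input order.
    """
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(send, requests))

# Usage sketch: flush(queue, http_client.send, max_parallel=4)
```

This is roughly what delegating to a pre-configured client achieves: the caller picks the concurrency limit, and the flush logic stays oblivious to it.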
Because of other bugs, I ended up refreshing the same URL 100 times, which pretty much knocks out Apache. Obviously my application should be more careful, and if I ask to invalidate 100 different URLs I don't see how we could prevent that. However, it would be nice if we could eliminate duplicates and only refresh / invalidate each URL once. But as we store requests as objects, this is a bit more tricky than just using array_unique.