eliminate duplicates in cache invalidation #125

Closed
dbu opened this issue Sep 1, 2014 · 3 comments · Fixed by #126

@dbu
Contributor

dbu commented Sep 1, 2014

Because of other bugs, I ended up refreshing the same URL 100 times, which pretty much knocks out Apache. Obviously my application should be more careful, and if I ask to invalidate 100 different URLs I don't see how we could prevent that. However, I think it would be nice if we could eliminate duplicates and only refresh / invalidate each of them once. But as we store requests, this is a bit trickier than just using array_unique.

@dbu dbu added the enhancement label Sep 1, 2014
@staabm

staabm commented Sep 1, 2014

Would it make sense to throttle the invalidation requests?

@ddeboer
Member

ddeboer commented Sep 1, 2014

> eliminate duplicates and only refresh / invalidate each of them once

We could store the queued requests hashed by URL and headers, as I think multiple invalidation requests to the same URL but with different headers should still be possible.
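A minimal sketch of that kind of deduplication, assuming a plain array-based queue (the function name and data layout here are illustrative, not the bundle's actual code): the method, URL and normalized headers are hashed into the array key, so identical requests collapse into one entry while requests that differ only in their headers stay separate.

```php
<?php

// Illustrative only: a queue keyed by a hash of everything that makes a
// request distinct, so duplicates collapse into a single entry.
function queueInvalidation(array &$queue, string $method, string $url, array $headers = []): void
{
    ksort($headers); // normalize header order so equivalent requests produce the same hash
    $hash = md5($method . "\n" . $url . "\n" . serialize($headers));

    // Re-queuing the same request is a no-op; different headers yield a new entry.
    $queue[$hash] = ['method' => $method, 'url' => $url, 'headers' => $headers];
}

$queue = [];
queueInvalidation($queue, 'PURGE', 'http://example.com/foo');
queueInvalidation($queue, 'PURGE', 'http://example.com/foo');                    // duplicate, ignored
queueInvalidation($queue, 'BAN', 'http://example.com/', ['X-Url' => '/foo.*']);  // distinct, kept

echo count($queue); // 2
```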

> throttle the invalidation requests

Makes sense to me, but wouldn’t be strictly necessary if we remove duplicates (hopefully).

@dbu
Contributor Author

dbu commented Sep 1, 2014

I like the idea of hashing by all the things that make the requests distinct; it is just nice if the client can be lazy about this. And yeah, just the URL would not be enough, most notably for anything using BAN. I will have a look.

About throttling, I am not so sure. If curl's multi-request handling allows setting a maximum number of parallel requests, we could use that (but this would probably be configured by passing in an accordingly configured Guzzle client, no?). Anything beyond that on our side seems like too much to me.
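For reference, a hedged sketch of limiting parallelism on the client side with Guzzle's request pool. This assumes a modern Guzzle (6+) API, which postdates this discussion; in the bundle this would presumably live in the injected client, not in our code, and the URLs are placeholders.

```php
<?php

use GuzzleHttp\Client;
use GuzzleHttp\Pool;
use GuzzleHttp\Psr7\Request;

$client = new Client();

// Build PURGE requests lazily so the pool can send them as slots free up.
$requests = function (array $urls) {
    foreach ($urls as $url) {
        yield new Request('PURGE', $url);
    }
};

$pool = new Pool($client, $requests(['http://example.com/a', 'http://example.com/b']), [
    'concurrency' => 5, // never more than 5 invalidation requests in flight at once
]);

$pool->promise()->wait();
```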

In a really heavy setup, one could use a message queue to queue the requests and have workers process that queue, or simply invalidate / ban directly and only queue the refresh requests. Both could be solved with special proxy clients, as sketched below. But that seems out of scope for this bundle to me, as it would depend on the message queue used, and when you have such a big setup you want to fine-tune anyway.
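A purely hypothetical illustration of such a "special proxy client": both interfaces below are made up for the example and are not part of this bundle. The decorator defers invalidation requests to a queue; a separate worker process would pop entries and call the real client.

```php
<?php

// Hypothetical contracts, only for illustration.
interface InvalidatorInterface
{
    public function invalidate(string $url, array $headers = []): void;
}

interface QueueInterface
{
    public function push(string $payload): void;
}

// Decorator that enqueues invalidation requests instead of sending them.
class QueueingInvalidator implements InvalidatorInterface
{
    public function __construct(private QueueInterface $queue)
    {
    }

    public function invalidate(string $url, array $headers = []): void
    {
        // A worker would consume these payloads and perform the real requests.
        $this->queue->push(json_encode(['url' => $url, 'headers' => $headers]));
    }
}
```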
