[RFC][Cache] Limit number of locations we tag to not cripple performance #2399
Conversation
Reported on Slack on an install with several hundred locations for a given content item. Symfony Cache is not designed for that number of tags, so it made the whole system immensely slow. eZ is not really made for this either at the moment, as we don't even have paging when listing locations. Which is why this patch simply ignores anything beyond 25 locations.

Todo:
- [ ] check if the main location is among the first 25, as this one is somewhat more important than the others
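The cap plus the todo item could be combined in one step: truncate the location list, but make sure the main location's tag survives the cut. A minimal sketch (in Python for illustration; the function and tag names are hypothetical, not the actual eZ implementation):

```python
# Hypothetical sketch of the proposed cap: emit at most MAX_TAGGED_LOCATIONS
# location tags per cache item, but always keep the main location's tag,
# displacing the last kept tag if necessary.
MAX_TAGGED_LOCATIONS = 25

def location_tags(location_ids, main_location_id, limit=MAX_TAGGED_LOCATIONS):
    """Return up to `limit` location tags, always including the main location."""
    kept = location_ids[:limit]
    if main_location_id not in kept and main_location_id in location_ids:
        # Make room for the main location's tag by dropping the last kept one.
        kept = kept[:limit - 1] + [main_location_id]
    return ["location-%d" % lid for lid in kept]

# Example: the main location sits beyond the cap, so it displaces the 25th tag.
ids = list(range(1, 101))          # 100 locations
tags = location_tags(ids, main_location_id=99)
print(len(tags))                   # 25
print("location-99" in tags)       # True
```

The trade-off is the one raised in the question below: items tagged this way are still served correctly, but invalidating one of the dropped locations will no longer clear the cached item.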
Tool version: PHP CS Fixer 2.7.1 Sandy Pool by Fabien Potencier and Dariusz Ruminski

```diff
diff --git a/eZ/Publish/Core/Persistence/Cache/ContentHandler.php b/eZ/Publish/Core/Persistence/Cache/ContentHandler.php
index 0cc919f..7ea226d 100644
--- a/eZ/Publish/Core/Persistence/Cache/ContentHandler.php
+++ b/eZ/Publish/Core/Persistence/Cache/ContentHandler.php
@@ -441,7 +441,6 @@ class ContentHandler extends AbstractHandler implements ContentHandlerInterface
                 ),
                 E_USER_WARNING
             );
-
         }
         foreach ($locations as $location) {
```
Q: what is the expected effect of this? Will the system still always give devs back the exact data, but simply cache less of it, or will it just make any locations above the 25th unavailable?
It wasn't slow, it was rather unusable: it was trying to fetch several thousand tags from Redis in bulk, and Redis did not accept the request as far as I can remember. That said, I kind of agree, so the safest option is maybe to not fix this here and rather argue it should be fixed in Symfony.

The use of separate keys for tags, with a timestamp for possible expiry, is what is killing performance on Memcached as well, and if it can be avoided we could shave off 50% of backend cache requests even when using Redis. This is because the tag system is designed on top of PSR-6, which severely limits what kind of queries it can do natively against the backend, ref: https://github.com/symfony/symfony/blob/master/src/Symfony/Component/Cache/Adapter/TagAwareAdapter.php#L42

If tags were done at a deeper level, it could for instance use lists for the tag-to-key mapping; however, lists in Redis have limits too, so some other design is probably needed. The safest alternative is to do like Drupal 8 does: store the tag-to-key mapping in the database, so tags only need a lookup on invalidation, and the mapping never gets prematurely evicted when the cache store starts to reach its max size.
Random ideas for alternatives:
This is at cache storing time (editing, publishing, ...) ;) Same concept as with ezplatform-http-cache: calculate tags when generating the response for caching.
Config for what is essentially a workaround? :)
"enforce the limit at editing time" => sorry, badly phrased. More like: "throw an exception at this layer and let upper layer transform this into a en error msg for the editor" |
Ah, yeah, but that would still be throwing on an issue further down, so I'll be closing this for now and look for alternative ways ;)