Default prefix $_SERVER['HTTP_HOST'] breaks drush + chained fast #8
Comments
Actually, this isn't just a problem with chained fast; the actual caches themselves would also be written with different prefixes.
The Redis default prefix is computed from the database host, database name, and credentials, which should make it safe. It won't make any difference whether it's used from the CLI or through the HTTPd.
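A minimal sketch of that idea, deriving a prefix purely from the database connection values mentioned above. The function name and array keys are hypothetical, not the module's actual code:

```php
<?php
// Hypothetical sketch: derive a cache prefix from the database
// connection info, along the lines of what the redis branch does.
// The same site always resolves to the same DB host/name/credentials,
// so drush (CLI) and HTTPd requests compute an identical prefix.
function cache_prefix_from_db(array $connection): string {
  $raw = implode(':', [
    $connection['host'],
    $connection['database'],
    $connection['username'],
  ]);
  // Shorten the hash to keep cache keys compact.
  return substr(hash('sha256', $raw), 0, 12) . '_';
}

$prefix = cache_prefix_from_db([
  'host' => 'localhost',
  'database' => 'drupal',
  'username' => 'drupal',
]);
```

Because the inputs never depend on `$_SERVER['HTTP_HOST']`, the prefix is stable across entry points.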
Yep, so we should port that logic from the new redis branch, or maybe use getApcuPrefix(), though it would likely need to be shortened somehow.
What's the logic in getApcuPrefix()? Could you link the code, please?
From what you said on IRC, if I understood correctly, getApcuPrefix() works from the site hash and doesn't use APCu itself. If it can identify a site based on a value that never changes whether it's accessed via CLI or via HTTPd, then I guess it's OK to use it instead of the database credentials; I don't see any blocking reason not to.
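A standalone sketch of the idea behind core's Settings::getApcuPrefix(): derive a site-unique prefix from values that are identical under CLI and HTTPd (the hash salt and the site's filesystem location) rather than from `$_SERVER['HTTP_HOST']`. This mirrors the approach, not core's exact implementation:

```php
<?php
// Illustrative re-implementation of the getApcuPrefix() idea.
// Inputs are the hash salt and the site's location on disk, both of
// which are the same for drush and for web requests, so the derived
// prefix never diverges between entry points.
function apcu_style_prefix(string $identifier, string $hash_salt, string $root, string $site_path): string {
  return $identifier . '.' . hash_hmac('sha1', $identifier, $hash_salt . '.' . $root . '/' . $site_path);
}

// Same inputs via drush or the web server => same prefix.
$cli = apcu_style_prefix('cache', 'abc123', '/var/www/html', 'sites/default');
$web = apcu_style_prefix('cache', 'abc123', '/var/www/html', 'sites/default');
```

If the full HMAC is too long for a key prefix, truncating it (as in the DB-based sketch above would also be done) is an option.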
Chained fast needs a backend prefix that is always the same.
When using the Redis cache backend and making changes through drush, for example, drush writes into the default_ prefix, which means it only invalidates entries there. When you then access the site on the web again, chained fast checks against the different prefix and considers the data in APCu still valid.
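A deliberately simplified sketch of that failure mode (not the module's code, and it glosses over ChainedFast's actual timestamp-based invalidation): drush invalidates under `default_`, but the web request uses a host-derived prefix and so keeps serving the stale APCu copy.

```php
<?php
// Illustrative only: two backends modeled as plain arrays,
// keyed by prefix, then by cache ID.
$redis = [];  // persistent backend: prefix => [cid => value]
$apcu  = [];  // fast local backend

// A web request writes through both backends under its HTTP_HOST prefix.
$webPrefix = 'example.com_';
$redis[$webPrefix]['config'] = 'old';
$apcu[$webPrefix]['config']  = 'old';

// drush runs without HTTP_HOST and falls back to "default_": its
// invalidation and rewrite touch a completely different key space.
$cliPrefix = 'default_';
unset($redis[$cliPrefix]['config']);   // invalidates nothing the web sees
$redis[$cliPrefix]['config'] = 'new';

// Next web request: the persistent entry under the web prefix was never
// invalidated, so the fast backend's copy still looks valid.
$served = $apcu[$webPrefix]['config']; // still the stale value
```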
The only way out that I can see is to drop the default fallback, likely combined with a warning/info on the requirements page about it. A warning isn't really ideal, though, because in managed environments with containers and so on, you are the only one accessing the instance, so not using a prefix is perfectly fine.