[I+D] tools/cache.py: Clear the cache for just one method instead of a full cache clear #35012
tl;dr: don't waste time on this; you can only make cache management more expensive on average and harder to synchronize across workers, and you will not get meaningful speedups out of it. Our current cache implementation is based on simple requirements:
The current system meets these requirements very well: in multi-worker mode we only need a "cache version" DB sequence, which is the simplest and cheapest IPC we could find. And the invalidation itself is super-trivial: empty the single LRU cache. It may seem extremely brutal as an invalidation method, but in practice it is very efficient. If you use the debugging signals to watch the worker cache on a heavily used production instance, you will notice several things:
Example: on odoo.com today we had 172 invalidations since the last daily log rotation, for about 3 million requests served, so 1 invalidation per 17k requests. For us this corresponds to 1 invalidation every 2-3 minutes. These surprising results are explained by a simple reason: the cache is mostly useful within a single transactional request, when the same function is called many, many times by the same transaction, because it is working on batches of the same records (e.g. ACLs, record rules and group membership are always the same). The same cache entries can be marginally helpful for subsequent requests, but they make much less difference there; it's a small bonus, because the cached methods are not that slow. If they were, it would be critical to reduce the number of cache invalidations we trigger, and recycling an HTTP worker would be a big perf hit because the new one starts with an empty cache. But it's not a problem in practice. If you think about it, we are using a single LRU cache of 8k entries for all the orm_cached entries of the database. Doesn't that seem a little bit too small? Aren't we evicting cache entries all the time, e.g. wasting precious compiled QWeb templates and replacing them with dumb translations? I encourage you to verify what I'm saying on a really active production instance by sending the debugging signal mentioned above.
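For readers less familiar with the pattern described above, here is a minimal illustrative sketch of the "single shared LRU + cache version counter" approach. This is not Odoo's actual tools/cache.py code; the class and method names are made up for illustration, and the plain integer counter stands in for the DB sequence.

```python
# Illustrative sketch only, not Odoo's tools/cache.py: one LRU shared by all
# cached methods, invalidated across workers by comparing a "cache version"
# counter (a DB sequence in Odoo; a plain integer here).
from collections import OrderedDict

class SharedLRU:
    def __init__(self, maxsize=8192):
        self.maxsize = maxsize
        self.data = OrderedDict()
        self.seen_version = 0  # last cache version this worker has observed

    def check_version(self, current_version):
        # Cheap cross-worker IPC: if any worker bumped the counter, drop everything.
        if current_version != self.seen_version:
            self.data.clear()
            self.seen_version = current_version

    def get(self, key, compute):
        # Return the cached value, computing and storing it on a miss.
        if key in self.data:
            self.data.move_to_end(key)  # mark as most recently used
            return self.data[key]
        value = compute()
        self.data[key] = value
        if len(self.data) > self.maxsize:
            self.data.popitem(last=False)  # evict the least recently used entry
        return value
```

The appeal of this design is that invalidation needs no per-method bookkeeping: bumping one counter is enough, whatever actually changed.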
Thanks for the quick answer and the very good explanation (as always). Maybe our use case has another solution. We are using
Then we would need to clear the cache for each change of:
Then the calls to clear all caches would increase considerably, which is counterproductive. I will look for a workaround to minimize the problem.
idea
Can you be more specific on this idea? We have a lot of such "expensive" rules, but as far as I can see, we would end up with one rule per user - and all rules would have to be evaluated on each check.
@odony imagine we have an Odoo instance running with 10+ workers and a user browsing website pages. This is quite different from the /web backend (where you only need to load the cache once and then send small AJAX requests while browsing UI menus); on the website, every page is like opening /web from scratch. If the system clears the cache across all workers every 2-3 minutes, and the user may not land on the same process (worker) each time they open a page, then roughly every N page clicks (N = number of workers) may trigger cache computation and storage again, and they will really feel the slowness of the website. The number of workers multiplies the problem in this case.
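As a rough illustration of the effect described in the previous comment, here is a back-of-the-envelope simulation with assumed numbers (10 workers, a global invalidation every 10 clicks); it is not a measurement, just a way to see how per-worker caches multiply the number of cold page loads a single visitor pays for.

```python
# Back-of-the-envelope simulation with assumed numbers; not a measurement.
import random

WORKERS = 10   # assumed number of HTTP workers
CLICKS = 20    # page views by one visitor
warm = set()   # workers that have cached this visitor's pages since the last invalidation

cold_hits = 0
for click in range(CLICKS):
    worker = random.randrange(WORKERS)  # load balancer may pick any worker
    if worker not in warm:
        cold_hits += 1                  # this click pays the full cache-miss cost
        warm.add(worker)
    if click % 10 == 9:                 # simulate a global invalidation every 10 clicks
        warm.clear()

print(f"{cold_hits}/{CLICKS} clicks paid the cache-miss cost")
```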
Steps to reproduce:
odoo/odoo/addons/base/models/ir_rule.py, lines 72 to 75 at c492cec
odoo/odoo/addons/base/models/ir_rule.py, lines 130 to 146 at c492cec
Current behavior:
Clearing the cache sends a signal to clean:
This is expensive even when you only want to clear the cache of one particular method.
Expected behavior:
Python 3 has a similar built-in decorator: functools.lru_cache.
It adds the option of clearing the cache for just that method.
I mean,
Then we could use a similar approach to clear the cache for just one method, or even better, use the built-in decorator.
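For reference, here is a small standard-library example of what per-method clearing looks like with functools.lru_cache. The decorated function names are hypothetical stand-ins, not Odoo APIs, and this ignores the multi-worker synchronization problem discussed above.

```python
# functools.lru_cache gives each decorated function its own cache and its own
# cache_clear(), so one cache can be invalidated without touching the others.
from functools import lru_cache

@lru_cache(maxsize=1024)
def compute_rule_domain(model_name, mode="read"):
    # hypothetical expensive computation standing in for a cached ORM method
    return f"domain for {model_name}/{mode}"

@lru_cache(maxsize=1024)
def compute_translations(lang):
    return f"translations for {lang}"

compute_rule_domain("res.partner")
compute_translations("es_ES")

# Clear only the rule-domain cache; the translations cache stays warm.
compute_rule_domain.cache_clear()
print(compute_rule_domain.cache_info())   # empty again after the clear
print(compute_translations.cache_info())  # still holds its entry
```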