[Performance] Investigate performance characteristics of `settings.__getattr__` memcached access #5765
Comments
cc @matburt @AlanCoding @chrismeyersfsu since we talked about this briefly
For things that have a limited level of context like processing events we could use local in-memory memoization as a shortcut, without having to make major caching changes.
Is this still valid with Redis now?
Yep, I think this is probably still a valid concern. I still maintain we could do some basic memoization and probably mitigate it.
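To illustrate the memoization idea above: for a hot loop with limited context (like processing a batch of events), the handful of settings it reads could be resolved once up front instead of per event. This is a hypothetical sketch, not AWX's actual implementation; `resolve` stands in for whatever slow, memcached-backed lookup the process uses.

```python
def snapshot_settings(resolve, names):
    """Resolve each named setting once and return a plain dict.

    resolve -- callable mapping a setting name to its value (the slow,
               cache-backed lookup we want to avoid repeating)
    names   -- the setting names the hot loop will read
    """
    return {name: resolve(name) for name in names}
```

The hot loop then reads from the returned dict, trading perfect freshness for one lookup per setting per batch instead of one per event.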
This still isn't clicking for me. We already have local caching on top of redis (line 400 in 4e97492).
Granted, we don't use this local caching for most things (line 34 in 4e97492).
So is this issue just saying "use a value other than 0 here"? Seems important to note the `settings.__getattr__` memcached access.
I'm putting this into the Needs Refinement bucket, because we want to collect more actionable performance-enhancing items. If we changed the value from 0 to 5, that would be likely to fail a lot of integration tests... which is fine, we would just need to add some logic to be more patient in those tests. We can also get a lot of use from Django Debug Toolbar, which will show the cache hits; we want to have some specific numbers on how much this will improve things.
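On the "be more patient in those tests" point above: once settings are cached with a nonzero TTL, a change may take up to the TTL to be observed, so assertions need to poll rather than check once. A hypothetical test helper (the names and timings are illustrative, not from the AWX test suite):

```python
import time

def wait_for(predicate, timeout=10.0, interval=0.25):
    """Poll predicate() until it returns true or timeout elapses.

    Tolerates staleness introduced by a TTL-based settings cache:
    the condition may only become observable after the cache entry
    expires, so a single immediate check would spuriously fail.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one final check at the deadline
```

A test would then write `assert wait_for(lambda: setting_has_new_value())` instead of asserting the new value immediately after changing it.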
Update SETTING_MEMORY_TTL See: ansible#5765
This is not something that goes through normal QE, so I'll close it directly here. I notified the perfscale team, but don't expect any particular response from them. Capturing from the PR, some writeup from this: https://gist.github.com/AlanCoding/3bff66c63db36ae7cb8c326df2c38e29
Every `conf.settings.__getattr__()` call, everywhere in every AWX process, incurs a memcached round-trip lookup. This is fine for some use cases (within most HTTP requests), but for things like callback event processing, where throughput is very high, these domain socket reads really add up (see: #5762).

As an extension to this, every `logger.info()` (or `debug()`, or `error()`, etc.) in AWX's logging implementation does per-log-line `settings.__getattr__()` lookups to determine external logging settings every time we log a message (so it's not unusual to do 10+ memcached round trips per log line).

I'd like to investigate some new ideas for managing in-process setting caching that involve less per-process overhead, especially for the callback receiver, and I'd like to rethink our current approach of cache-based "pull on demand".
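One shape the in-process caching could take is a TTL-based memoizing wrapper in front of the slow backend, so repeated `__getattr__` hits within the TTL window never leave the process. This is a minimal sketch under stated assumptions, not AWX's actual `conf.settings` implementation: `backend` is a hypothetical stand-in for the memcached-backed lookup, and the class ignores invalidation and thread safety.

```python
import time

class MemoizedSettings:
    """Per-process, TTL-bounded cache over a slow settings backend.

    Attribute access falls through to __getattr__ only for names not
    set in __init__, so _backend/_ttl/_cache lookups don't recurse.
    """

    def __init__(self, backend, ttl=5.0):
        self._backend = backend   # callable: name -> value (slow lookup)
        self._ttl = ttl           # seconds a cached value stays fresh
        self._cache = {}          # name -> (value, expiry timestamp)

    def __getattr__(self, name):
        now = time.monotonic()
        hit = self._cache.get(name)
        if hit is not None and hit[1] > now:
            return hit[0]         # fresh cache hit: no backend round trip
        value = self._backend(name)
        self._cache[name] = (value, now + self._ttl)
        return value
```

The trade-off is the same one discussed for SETTING_MEMORY_TTL above: a nonzero TTL means a process can read a value up to `ttl` seconds stale after it changes elsewhere.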