Receiver: cache matchers for series calls #7353
Conversation
Commits (all Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>)

* adding matcher cache and refactor matchers (Co-authored-by: Andre Branchizio <andre.branchizio@shopify.com>)
* Using the cache in proxy and tsdb stores (only receiver)
* fixing problem with deep equality
* adding some docs
* Adding benchmark
* undo unecessary changes
* Adjusting metric names
* adding changelog
* wiring changes to the receiver
* Fixing linting
The results indicate that the "store-proxy-cache-matchers" branch considerably outperforms the "main" branch in all observed aspects of the BenchmarkProxySeriesRegex function. It is roughly 10 times faster in execution time while using about 9 times less memory and making about 4 times fewer allocations per operation. These improvements suggest significant optimizations in the regex handling or related data processing in the "store-proxy-cache-matchers" branch compared to the "main" branch.
Was this AI generated? 😄
func (c *MatchersCache) GetOrSet(key LabelMatcher, newItem NewItemFunc) (*labels.Matcher, error) {
	c.metrics.requestsTotal.Inc()
	if item, ok := c.cache.Get(key); ok {
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I suggest using singleflight here to reduce allocations even more.
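The singleflight suggestion can be illustrated without the real golang.org/x/sync/singleflight dependency. The sketch below is a stdlib-only stand-in that merges singleflight-style deduplication with memoization, so concurrent GetOrSet calls for the same matcher key run the expensive compile function exactly once; the type and method names here are illustrative, not Thanos code.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// cacheGroup deduplicates concurrent GetOrSet calls for the same key and
// memoizes the result, mimicking singleflight.Group.Do placed in front of
// a matcher cache. (Sketch only; the real suggestion uses
// golang.org/x/sync/singleflight plus the PR's LRU cache.)
type cacheGroup struct {
	mu    sync.Mutex
	calls map[string]*call
}

type call struct {
	wg  sync.WaitGroup
	val string
	err error
}

func (g *cacheGroup) GetOrSet(key string, fn func() (string, error)) (string, error) {
	g.mu.Lock()
	if g.calls == nil {
		g.calls = make(map[string]*call)
	}
	if c, ok := g.calls[key]; ok {
		g.mu.Unlock()
		c.wg.Wait() // another goroutine is (or was) computing this key
		return c.val, c.err
	}
	c := &call{}
	c.wg.Add(1)
	g.calls[key] = c
	g.mu.Unlock()

	c.val, c.err = fn() // only the first caller pays the compilation cost
	c.wg.Done()
	return c.val, c.err
}

func main() {
	var g cacheGroup
	var compiles int32
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			g.GetOrSet(`name=~"foo.*"`, func() (string, error) {
				atomic.AddInt32(&compiles, 1) // an expensive regex compile would go here
				return "compiled-matcher", nil
			})
		}()
	}
	wg.Wait()
	fmt.Println("compilations:", atomic.LoadInt32(&compiles))
}
```

Because the map entry is published under the lock before the compile function runs, every later caller waits on the in-flight call instead of allocating and compiling its own copy.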
@@ -973,6 +986,8 @@ func (rc *receiveConfig) registerFlag(cmd extkingpin.FlagClause) {
	"about order.").
	Default("false").Hidden().BoolVar(&rc.allowOutOfOrderUpload)

	cmd.Flag("matcher-cache-size", "The size of the cache used for matching against external labels. Using 0 disables caching.").Default("0").IntVar(&rc.matcherCacheSize)
Should we add this to other components as well like Thanos Store?
	tms []*labels.Matcher
	err error
)
if cache != nil {
Maybe we could put *storepb.MatchersCache behind an interface to avoid this if cache != nil { ... } else { ... } everywhere?
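The reviewer's interface suggestion can be sketched as follows: a small MatchersCache interface with a no-op implementation that always recomputes, so "caching disabled" is just a different implementation and call sites never branch on nil. All names here (Matcher, noopCache, mapCache, matchesExternalLabels) are illustrative stand-ins, not the Thanos types.

```go
package main

import "fmt"

// Matcher stands in for *labels.Matcher to keep the sketch self-contained.
type Matcher struct{ expr string }

// MatchersCache is the interface the reviewer suggests putting in front of
// *storepb.MatchersCache, so call sites drop the `if cache != nil` branch.
type MatchersCache interface {
	GetOrSet(key string, newItem func() (*Matcher, error)) (*Matcher, error)
}

// noopCache always recomputes; it is what you wire in when caching is disabled.
type noopCache struct{}

func (noopCache) GetOrSet(_ string, newItem func() (*Matcher, error)) (*Matcher, error) {
	return newItem()
}

// mapCache is a trivial unbounded cache; the PR would use the LRU instead.
type mapCache map[string]*Matcher

func (c mapCache) GetOrSet(key string, newItem func() (*Matcher, error)) (*Matcher, error) {
	if m, ok := c[key]; ok {
		return m, nil
	}
	m, err := newItem()
	if err == nil {
		c[key] = m
	}
	return m, err
}

func matchesExternalLabels(cache MatchersCache, expr string) (*Matcher, error) {
	// No nil check needed: disabled caching is just the no-op implementation.
	return cache.GetOrSet(expr, func() (*Matcher, error) {
		return &Matcher{expr: expr}, nil
	})
}

func main() {
	for _, c := range []MatchersCache{noopCache{}, mapCache{}} {
		m, err := matchesExternalLabels(c, `job=~"thanos.*"`)
		fmt.Printf("%T -> %s (err=%v)\n", c, m.expr, err)
	}
}
```

With this shape, the receiver constructs either the real LRU-backed cache or the no-op one at startup, and every downstream function takes the interface unconditionally.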
	}
}
func NewMatchersCache(opts ...MatcherCacheOption) (*MatchersCache, error) { |
Maybe we can just use pkg/cache/inmemory.go? It's another LRU implementation that already exists in the tree.
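The NewMatchersCache(opts ...MatcherCacheOption) signature above follows Go's functional options pattern. Whichever LRU backs it, the constructor shape typically looks like the hedged sketch below; the default size, WithSize option, and validation are hypothetical, not the Thanos implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// MatchersCache is a stub here; only the option plumbing is the point.
type MatchersCache struct {
	size int
}

// MatcherCacheOption mutates the cache during construction.
type MatcherCacheOption func(*MatchersCache)

// WithSize is the kind of option a ...MatcherCacheOption constructor accepts.
func WithSize(n int) MatcherCacheOption {
	return func(c *MatchersCache) { c.size = n }
}

// NewMatchersCache applies defaults, then options, then validates.
func NewMatchersCache(opts ...MatcherCacheOption) (*MatchersCache, error) {
	c := &MatchersCache{size: 200} // hypothetical default
	for _, o := range opts {
		o(c)
	}
	if c.size <= 0 {
		return nil, errors.New("cache size must be positive")
	}
	return c, nil
}

func main() {
	c, err := NewMatchersCache(WithSize(1000))
	fmt.Println(c.size, err)
}
```

The pattern keeps the constructor signature stable if, for example, the backing store later switches to pkg/cache/inmemory.go: only new options are added, and existing callers keep compiling.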
Summary
We have tried caching matchers before with a time-based expiration cache; this time we are trying an LRU cache.
We saw some of our receivers busy compiling regexes, with high CPU usage similar to the profile of the benchmark I added here.
Benchmark results
The results indicate that the "store-proxy-cache-matchers" branch considerably outperforms the "main" branch in all observed aspects of the BenchmarkProxySeriesRegex function. It is roughly 10 times faster in execution time while using about 9 times less memory and making about 4 times fewer allocations per operation. These improvements suggest significant optimizations in the regex handling or related data processing in the "store-proxy-cache-matchers" branch compared to the "main" branch.
Changes

* Adding matcher cache for method MatchersToPromMatchers and a new version which uses the cache.
* The main change is in matchesExternalLabels function which now receives a cache instance.

Verification