Time based downsampling during compaction #2880
Conversation
Awesome work @aleks-p! I'm going to update the block querier to use the downsampled data wherever possible.
One more thing we may want to add later: compaction should be aware of downsampled data and use it if available. This is only useful if multiple compaction ranges are configured.
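To illustrate the suggestion above, here is a minimal, hypothetical Go sketch (not the actual Pyroscope compactor code): given the downsampled resolutions already present in a block, a later compaction pass could pick the coarsest table that still satisfies the target resolution and fall back to raw profiles otherwise. The pickSource helper and its signature are invented for illustration.

```go
package main

import (
	"fmt"
	"time"
)

// pickSource is a hypothetical illustration of the suggestion above:
// when a block already contains downsampled tables, a later compaction
// pass could read the coarsest table whose resolution still satisfies
// the target resolution, instead of re-aggregating raw profiles.
func pickSource(available []time.Duration, target time.Duration) time.Duration {
	best := time.Duration(0) // 0 means raw profiles.parquet
	for _, res := range available {
		if res <= target && res > best {
			best = res
		}
	}
	return best
}

func main() {
	available := []time.Duration{5 * time.Minute, time.Hour}
	fmt.Println(pickSource(available, time.Hour))     // 1h table can be reused
	fmt.Println(pickSource(available, 5*time.Minute)) // 5m table can be reused
	fmt.Println(pickSource(available, time.Minute))   // nothing fits: raw data
}
```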
* feat: add downsampled tables to block querier
* add low res profile source
* fix lint issues
* better heuristic
* time range split with resolutions
* read downsampled tables concurrently
* add SplitTimeRangeByResolution tests
* Fix test (avg is not supported currently)
* add basic metric to track table access
* reuse WithPartitionSamples
* fix store-gateway metrics registration
* Update pkg/util/time.go
* fix store-gateway metrics multi-tenancy
* post-review updates
* fix metric registration in ingester
* Fix failing test
* go mod tidy

---------

Co-authored-by: Aleksandar Petrov <8142643+aleks-p@users.noreply.github.com>
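The commits above mention a heuristic and a SplitTimeRangeByResolution helper in pkg/util/time.go. The following is a hedged sketch of the general idea only, not the actual implementation or its signature: the part of a query range that aligns to whole resolution steps can be served from a downsampled table, while the unaligned edges fall back to raw profiles.

```go
package main

import (
	"fmt"
	"time"
)

// split describes a sub-range of the query and the resolution
// (table) it should be served from. Zero resolution means raw data.
type split struct {
	start, end time.Time
	resolution time.Duration
}

// splitByResolution is a hypothetical sketch of the idea behind
// SplitTimeRangeByResolution: the interior of the query range that
// aligns to whole resolution steps is read from the downsampled
// table, while the unaligned edges fall back to raw profiles.
func splitByResolution(start, end time.Time, res time.Duration) []split {
	// Align the interior boundaries to the downsampled resolution.
	lo := start.Truncate(res)
	if lo.Before(start) {
		lo = lo.Add(res)
	}
	hi := end.Truncate(res)

	// The range does not cover even one full step: raw data only.
	if !lo.Before(hi) {
		return []split{{start, end, 0}}
	}

	out := make([]split, 0, 3)
	if start.Before(lo) {
		out = append(out, split{start, lo, 0}) // leading raw edge
	}
	out = append(out, split{lo, hi, res}) // aligned interior, downsampled
	if hi.Before(end) {
		out = append(out, split{hi, end, 0}) // trailing raw edge
	}
	return out
}

func main() {
	start := time.Date(2023, 12, 1, 10, 7, 0, 0, time.UTC)
	end := time.Date(2023, 12, 1, 14, 42, 0, 0, time.UTC)
	for _, s := range splitByResolution(start, end, time.Hour) {
		fmt.Printf("%s - %s @ %s\n", s.start.Format("15:04"), s.end.Format("15:04"), s.resolution)
	}
}
```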
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
LGTM 👍🏻
# Conflicts:
#   pkg/phlaredb/compact.go
An early and unoptimized implementation of time-based downsampling.
Notes:
* Downsampled data is stored in additional tables next to profiles.parquet in the same compacted blocks (see pkg/phlaredb/downsample/downsample_test.go for an example)
* Two downsampled resolutions are produced (source -> 5m and source -> 1h)

Opening this PR to get early feedback about the general direction. See #2118 for more information.
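For context, a minimal sketch of what time-based downsampling of profile samples could look like, assuming a plain sum aggregation per fixed window (the commit list above notes that avg is not currently supported). The sample type and downsample function here are simplified stand-ins, not the schema or API of pkg/phlaredb/downsample.

```go
package main

import (
	"fmt"
	"time"
)

// sample is a simplified stand-in for a profile row: a timestamp and a
// value keyed by series/stacktrace. The real profiles.parquet schema is
// considerably richer; this only illustrates the aggregation step.
type sample struct {
	ts    time.Time
	key   string
	value int64
}

// downsample sums sample values per key within fixed windows (e.g. 5m
// and 1h), which is the rough idea behind writing additional
// downsampled tables next to profiles.parquet during compaction.
// Output order is unspecified because it comes from a map.
func downsample(samples []sample, res time.Duration) []sample {
	type bucket struct {
		ts  time.Time
		key string
	}
	agg := map[bucket]int64{}
	for _, s := range samples {
		b := bucket{ts: s.ts.Truncate(res), key: s.key}
		agg[b] += s.value
	}
	out := make([]sample, 0, len(agg))
	for b, v := range agg {
		out = append(out, sample{ts: b.ts, key: b.key, value: v})
	}
	return out
}

func main() {
	now := time.Date(2023, 12, 1, 10, 0, 0, 0, time.UTC)
	in := []sample{
		{now.Add(1 * time.Minute), "a", 10},
		{now.Add(4 * time.Minute), "a", 5},
		{now.Add(7 * time.Minute), "a", 3},
	}
	for _, res := range []time.Duration{5 * time.Minute, time.Hour} {
		fmt.Println(res, downsample(in, res))
	}
}
```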