compactor: does not compact 4 consecutive 2-hour blocks #7287
Comments
@vincent-olivert-riera can you show us some information about the level 4 blocks you mentioned? What's their duration?
Sure. This is its meta.json:
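For reference, a block's duration can be computed from the `minTime` and `maxTime` fields of its meta.json (Unix milliseconds). A minimal sketch, using an illustrative meta.json written to a temporary path (the values are examples, not the ones from this issue):

```shell
# Write a sample meta.json (values are illustrative, not from the issue)
cat > /tmp/meta.json <<'EOF'
{"ulid":"01HT1G02DF2W21A1KTHDVPX0BR","minTime":1711584000000,"maxTime":1711591200000,"compaction":{"level":1}}
EOF

# Extract minTime/maxTime (Unix ms) and print the block's duration in hours
min=$(sed -n 's/.*"minTime":\([0-9]*\).*/\1/p' /tmp/meta.json)
max=$(sed -n 's/.*"maxTime":\([0-9]*\).*/\1/p' /tmp/meta.json)
echo "duration_h=$(( (max - min) / 3600000 ))"
```

A level-1 block produced with the default 2h block duration should print `duration_h=2`.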
@vincent-olivert-riera if you grep your Compactor's log with the block IDs of the blocks that didn't get compacted, do you see anything that stands out? If possible, maybe increase the Compactor's log level to generate more logs (then revert it, otherwise logs might be too spammy). 🤔
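The suggested check can be sketched as follows. The log path is an assumption (in Kubernetes you would use `kubectl logs` instead), and the sample log line is created here only so the example is self-contained; `--log.level=debug` is the Thanos flag for raising verbosity:

```shell
# Create a sample log line so the grep below has something to match (illustrative)
log=/tmp/compactor.log
echo 'level=debug msg="skipped block" block=01HT1G02DF2W21A1KTHDVPX0BR' > "$log"

# Search the Compactor's log for each block ID that was not compacted
for id in 01HT1G02DF2W21A1KTHDVPX0BR 01HT1PVSMCNYF8ZSDW53123NJX \
          01HT1XQGXB5CHQB21YT5DNXFC8 01HT24K86QTXJ1HV2NW252DAEV; do
  grep "$id" "$log" || true   # absence of a match is informative too
done

# Temporarily raise verbosity, then revert to avoid noisy logs:
#   thanos compact --log.level=debug ...
```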
@douglascamata, I haven't increased the Compactor's log level yet, but this is what the Compactor is doing (in a loop):
I have searched for all the block IDs, but Kibana does not return anything at all.
Thanos, Prometheus and Golang version used:
Thanos: 0.32.4
Golang: go1.20.8
Prometheus: 2.45.0
goVersion: go1.20.5
Object Storage Provider:
Openstack S3 compatible
What happened:
I have a Thanos compactor with the following metrics:
It is tracking a bucket where almost all blocks have been compacted up to level-4.
However, there are some level-1 blocks that are not compacted, and I was expecting them to be compacted into a level-2 block. I have made this animated gif to show it more clearly:
None of those blocks has been marked with a no-compact marker, so they should be eligible for compaction.
These are the meta.json files for each one of them:
01HT1G02DF2W21A1KTHDVPX0BR
01HT1PVSMCNYF8ZSDW53123NJX
01HT1XQGXB5CHQB21YT5DNXFC8
01HT24K86QTXJ1HV2NW252DAEV
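Thanos records an exclusion from compaction as a per-block marker object at `<ULID>/no-compact-mark.json` in the bucket. A sketch of checking for such markers over a local mirror of the bucket (the directory path is an assumption; a real check would run against the object store itself):

```shell
# Hypothetical local mirror of the bucket's layout: one directory per block ULID
bucket=/tmp/bucket-mirror
mkdir -p "$bucket"/01HT1G02DF2W21A1KTHDVPX0BR  # sample block, no marker present

# A block is excluded from compaction iff <ULID>/no-compact-mark.json exists
for dir in "$bucket"/*/; do
  id=$(basename "$dir")
  if [ -e "$dir/no-compact-mark.json" ]; then
    echo "$id: marked no-compact"
  else
    echo "$id: no marker"
  fi
done
```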
This is the command line that I'm using:
Contents of /etc/thanos/objstore.yml
Contents of /etc/thanos/relabel_config.yml
What could be the reason for this behavior?
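One thing worth checking (an assumption about the cause, not a confirmed diagnosis): the Thanos planner compacts level-1 blocks into a larger block only when they fall entirely inside the same time window aligned to the target range. A sketch of computing the aligned window start for a block's `minTime` (the timestamp is illustrative):

```shell
# 8h compaction range in milliseconds
range_ms=$((8 * 3600 * 1000))

# Sample minTime (Unix ms) taken from a block's meta.json (illustrative value)
min_time=1711584000000

# Start of the range-aligned window this block falls into; four consecutive
# 2h blocks are grouped only if they all share the same window
window_start=$(( min_time - min_time % range_ms ))
echo "window_start=$window_start"
```

If one of the four blocks straddles a window boundary, the group will not be planned for compaction even though the blocks are consecutive.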