Thanos, Prometheus and Golang version used:
Thanos version: thanosio/thanos:main-2022-12-13-e58a3f2 from docker hub
Prometheus version: v2.31.1
Object Storage Provider: S3
What happened:
The Thanos compactor shuts down due to an internal server error.
This happens because the compactor tries to get XXXXX/no-compact-mark.json and receives an I/O timeout.
Upon investigating the issue, there is in fact no 'no-compact-mark.json' file in the specified bucket.
What you expected to happen:
Either for the compactor to skip the current bucket while compacting, or for the compactor to compact the current bucket.
Or even just a way to tell the compactor to ignore or skip errors and continue deleting and compacting instead of shutting down.
How to reproduce it (as minimally and precisely as possible):
I'm not quite sure why the compactor determined that the specified bucket should not be compacted,
but to reproduce it, create a bucket containing a no-compact-mark.json and then delete that file.
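A minimal sketch of those two steps against an S3-compatible endpoint, using the minio-go client; the endpoint, credentials, bucket name, block ULID, and JSON body below are placeholders rather than values from this report:

```go
// Hypothetical reproduction sketch: create and then delete a
// no-compact-mark.json object in an S3-compatible bucket. The endpoint,
// credentials, bucket name, and block ULID are placeholders; the JSON body
// is illustrative and not necessarily the exact marker schema Thanos writes.
package main

import (
	"bytes"
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	// Placeholder S3 endpoint and credentials.
	client, err := minio.New("s3.example.internal", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	bucket := "thanos-metrics"                      // placeholder bucket name
	marker := "01HEXAMPLEULID/no-compact-mark.json" // placeholder block directory

	// Step 1: upload a marker so the compactor's no-compaction filter
	// picks it up on its next metadata sync.
	body := []byte(`{"details": "test marker"}`) // illustrative content only
	if _, err := client.PutObject(ctx, bucket, marker,
		bytes.NewReader(body), int64(len(body)),
		minio.PutObjectOptions{ContentType: "application/json"}); err != nil {
		log.Fatal(err)
	}

	// Step 2: delete the marker again, so later attempts to fetch it fail,
	// which is the state this report describes.
	if err := client.RemoveObject(ctx, bucket, marker, minio.RemoveObjectOptions{}); err != nil {
		log.Fatal(err)
	}
}
```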
Full logs to relevant components:
err="syncing metas: filter blocks marked for no compaction: get file: XXXXXX/no-compact-mark.json: Get "S3 Route/bucket name/XXXXX/no-compact-mark.json" dial tcp : i/o timeout."
Anything else we need to know:
We deploy the compactor as a Deployment in an OpenShift 4 environment.