
compact does not delete expired blocks in s3 #3436

Closed
diemus opened this issue Nov 12, 2020 · 30 comments

Comments

@diemus

diemus commented Nov 12, 2020

thanos version 0.16.0

thanos compact 
--wait 
--compact.concurrency=32 
--data-dir=/data/thanos-compact 
--objstore.config-file=/thanos/bucket.yml 
--retention.resolution-raw=30d 
--retention.resolution-5m=30d 
--retention.resolution-1h=30d

The oldest block is from 2020-09-18, when I started using object storage, which is about two months ago, yet the block is still there. The size of the object storage keeps growing, and I don't see any logs about compact deleting blocks, only about marking compacted blocks for deletion. Is there anything wrong?

@diemus
Author

diemus commented Nov 12, 2020

thanos tools bucket cleanup seems to work, but I thought this should be the compactor's job.

@GiedriusS
Member

Cleaning up used to happen only at the end of an iteration. Has it ever finished one? The current Thanos version does that concurrently, in addition to a one-time cleanup on boot. Only retention policies are applied at the end.
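To make that ordering concrete, here is a minimal illustrative Go sketch (not the actual Thanos code; the three helper functions are hypothetical stand-ins): retention marking only runs once a compaction pass completes, so an iteration that never finishes never marks anything for deletion.

package main

import (
	"context"
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the compactor's phases; not real Thanos functions.
func cleanPartialAndMarkedBlocks(ctx context.Context) error { return nil }
func compactAllGroups(ctx context.Context) error            { return errors.New("halting: compaction error") }
func applyRetention(ctx context.Context) error              { return nil }

// runIteration mirrors the order described above: cleanup, then compaction,
// then retention. If compaction halts, retention is never reached, so no
// deletion-mark.json is ever uploaded and the iterations counter stays at 0.
func runIteration(ctx context.Context) error {
	if err := cleanPartialAndMarkedBlocks(ctx); err != nil { // also run once on boot
		return err
	}
	if err := compactAllGroups(ctx); err != nil {
		return err // iteration aborted before retention
	}
	return applyRetention(ctx) // retention policies applied only at the end
}

func main() {
	if err := runIteration(context.Background()); err != nil {
		fmt.Println("iteration failed:", err)
	}
}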

@diemus
Author

diemus commented Nov 13, 2020

The log has shown nothing to do for a long time:

level=info ts=2020-11-13T01:44:28.111577438Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.523405394s cached=6764 returned=6764 partial=1
level=info ts=2020-11-13T01:45:27.865940917Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.27885666s cached=6764 returned=6764 partial=1
level=info ts=2020-11-13T01:46:28.203166318Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.616543013s cached=6765 returned=6765 partial=1
level=info ts=2020-11-13T01:47:28.173953351Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.587369586s cached=6765 returned=6765 partial=1
level=info ts=2020-11-13T01:48:28.125676465Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.538805653s cached=6765 returned=6765 partial=1
level=info ts=2020-11-13T01:49:28.247706551Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.661060133s cached=6765 returned=6765 partial=1
level=info ts=2020-11-13T01:50:27.920338715Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.333640673s cached=6765 returned=6765 partial=1
level=info ts=2020-11-13T01:51:28.07630018Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.489110364s cached=6765 returned=6765 partial=1
level=info ts=2020-11-13T01:52:28.029436084Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.442636106s cached=6765 returned=6765 partial=1
level=info ts=2020-11-13T01:53:30.970500945Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=5.383400221s cached=6765 returned=6765 partial=1
level=info ts=2020-11-13T01:54:27.860106108Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.273630935s cached=6765 returned=6765 partial=1
level=info ts=2020-11-13T01:55:27.80910023Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.222031408s cached=6765 returned=6765 partial=1
level=info ts=2020-11-13T01:56:27.690576515Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.103269366s cached=6766 returned=6766 partial=1
level=info ts=2020-11-13T01:57:28.177227392Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.590467052s cached=6766 returned=6766 partial=1
level=info ts=2020-11-13T01:58:27.986121338Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.398838272s cached=6766 returned=6766 partial=1
level=info ts=2020-11-13T01:59:28.13868311Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.550371056s cached=6766 returned=6766 partial=1
level=info ts=2020-11-13T02:00:28.203230738Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.61610033s cached=6766 returned=6766 partial=1
level=info ts=2020-11-13T02:01:28.200301023Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.613524653s cached=6766 returned=6766 partial=1
level=info ts=2020-11-13T02:02:27.834959614Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.24789892s cached=6766 returned=6766 partial=1
level=info ts=2020-11-13T02:03:28.460833533Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.873806921s cached=6766 returned=6766 partial=1
level=info ts=2020-11-13T02:04:27.712386276Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.125585756s cached=6766 returned=6766 partial=1
level=info ts=2020-11-13T02:05:27.770157862Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.182509793s cached=6766 returned=6766 partial=1
level=info ts=2020-11-13T02:06:27.950000374Z caller=fetcher.go:453 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.362901017s cached=6767 returned=6767 partial=1

Does this mean a lot of iterations?

The log does show the start of GC at boot, but no cleanup has actually happened since the first line of the log. It has been 12 days.

@diemus
Author

diemus commented Nov 17, 2020

I ran thanos tools bucket cleanup to clean up manually, but it seems only grouped blocks were deleted; the oldest data is still from 2020-09-18. It looks like retention is not working as expected: no deletion-mark.json has been uploaded to that block.

@fightdou

I have the same problem. The data stored in the object storage is not deleted.

@GiedriusS
Member

All, what is the metric thanos_compact_iterations_total equal to on your Thanos Compactor instance(-s)?

@diemus
Author

diemus commented Nov 19, 2020

thanos_compactor_iterations_total=0
thanos_compactor_retries_total=10

I have only one instance, and it has been running for half a month.

@diemus
Author

diemus commented Nov 19, 2020

I noticed two things that might be related to this issue.

  1. The compactor does not downsample metrics; I have to run thanos tools bucket downsample to downsample manually.
  2. The bucket viewer shows that not all blocks are grouped: blocks uploaded by Thanos Ruler are not grouped, and the duration of these blocks is 10 minutes. Is this the reason why thanos_compactor_iterations_total=0?

@diemus
Author

diemus commented Nov 21, 2020

I found some errors in the log:

level=error ts=2020-11-14T11:11:17.061560166Z caller=compact.go:375 msg="critical error detected; halting" err="compaction: group 0@6881101226606891260: compact blocks [/home/work/data/thanos-compact/compact/0@6881101226606891260/01EP7S24YSCAQZWZCMG2DFNS4J /home/work/data/thanos-compact/compact/0@6881101226606891260/01EP7WK96Z0G8BHKRAXP96XNC1 /home/work/data/thanos-compact/compact/0@6881101226606891260/01EP802RYC4BKQX1Q974V03965 /home/work/data/thanos-compact/compact/0@6881101226606891260/01EP83GY7A6CY5CCJPZB5MFBMH /home/work/data/thanos-compact/compact/0@6881101226606891260/01EP873HKNE28YM8JTRHZ00YKN /home/work/data/thanos-compact/compact/0@6881101226606891260/01EP8ARNDBKQ1AFCQ7BHHF2G9E]: populate block: chunk iter: cannot populate chunk 172067127071: segment doesn't include enough bytes to read the chunk - required:268435459, available:268435456"
level=error ts=2020-11-18T20:53:31.666989085Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="incomplete view: 11 errors: meta.json file exists: 01EQ0SM49MRX0VYZHVZEDDH23S/meta.json: stat s3 object: Last-Modified time format is invalid, failed with parsing time \"\" as \"Mon, 02 Jan 2006 15:04:05 GMT\": cannot parse \"\" as \"Mon\"; meta.json file exists: 01EQ0WKSXKKCWQ7Z0N3DZJVCJK/meta.json: stat s3 object: Last-Modified time format is invalid, failed with parsing time \"\" as \"Mon, 02 Jan 2006 15:04:05 GMT\": cannot parse \"\" as \"Mon\"; meta.json file exists: 01EM0AVWF9D6DX5H9282T7W66M/meta.json: stat s3 object: Last-Modified time format is invalid, failed with parsing time \"\" as \"Mon, 02 Jan 2006 15:04:05 GMT\": cannot parse \"\" as \"Mon\"; meta.json file exists: 01EQ0XM9BG35VFHQYWPD4H38FF/meta.json: stat s3 object: Last-Modified time format is invalid, failed with parsing time \"\" as \"Mon, 02 Jan 2006 15:04:05 GMT\": cannot parse \"\" as \"Mon\"; meta.json file exists: 01EQ10D2XSDMHG46AVVX5YS8DW/meta.json: stat s3 object: Last-Modified time format is invalid, failed with parsing time \"\" as \"Mon, 02 Jan 2006 15:04:05 GMT\": cannot parse \"\" as \"Mon\"; meta.json file exists: 01EQ1076Y7KFWDPHEME2NF6SYH/meta.json: stat s3 object: Last-Modified time format is invalid, failed with parsing time \"\" as \"Mon, 02 Jan 2006 15:04:05 GMT\": cannot parse \"\" as \"Mon\"; meta.json file exists: 01EQ10D11FPV0AS1XS1FRA41C7/meta.json: stat s3 object: Last-Modified time format is invalid, failed with parsing time \"\" as \"Mon, 02 Jan 2006 15:04:05 GMT\": cannot parse \"\" as \"Mon\"; meta.json file exists: 01EQ0WFNFGK9NHX8QGFYA68B7C/meta.json: stat s3 object: Last-Modified time format is invalid, failed with parsing time \"\" as \"Mon, 02 Jan 2006 15:04:05 GMT\": cannot parse \"\" as \"Mon\"; meta.json file exists: 01EQ1124ZEJGFXR8Q9HHKVH421/meta.json: stat s3 object: Last-Modified time format is invalid, failed with parsing time \"\" as \"Mon, 02 Jan 2006 15:04:05 GMT\": cannot parse \"\" as \"Mon\"; meta.json file exists: 01EQ0WN5DXXBX10ZN9ZCK61CFK/meta.json: stat s3 object: Last-Modified time format is invalid, failed with parsing time \"\" as \"Mon, 02 Jan 2006 15:04:05 GMT\": cannot parse \"\" as \"Mon\"; meta.json file exists: 01EQ0TRR5MMK542WM00X2YXPCS/meta.json: stat s3 object: Last-Modified time format is invalid, failed with parsing time \"\" as \"Mon, 02 Jan 2006 15:04:05 GMT\": cannot parse \"\" as \"Mon\""
level=error ts=2020-11-20T14:55:28.574462775Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="incomplete view: meta.json file exists: 01EKZFCZET1CBJPT5C0GKA89AM/meta.json: stat s3 object: 505 HTTP Version Not Supported"
level=error ts=2020-11-20T15:38:27.650442268Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="incomplete view: meta.json file exists: 01EQAPX9SV7CK9H7RPZ2DZ2301/meta.json: stat s3 object: 505 HTTP Version Not Supported"

This one seems to stop the grouping process:

populate block: chunk iter: cannot populate chunk 172067127071: segment doesn't include enough bytes to read the chunk - required:268435459, available:268435456"

After this error, the log just keeps repeating "successfully synchronized block metadata" for days. It seems like it is stuck.

@kaynAw
Contributor

kaynAw commented Nov 23, 2020

same problem

@Shadi

Shadi commented Nov 24, 2020

Have you tried the latest version (0.17.0)? I see in the changelog that #3115 was part of that release, and it should decouple block deletion from compaction iterations.

@diemus
Author

diemus commented Nov 26, 2020

The problem is not cleaning up the expired blocks; it seems the retention policy is not working, so the blocks never get marked for deletion.

@GiedriusS
Member


Regarding the errors posted above on Nov 21: what implementation of S3 are you using? It sounds like it doesn't implement the API properly.

@diemus
Author

diemus commented Nov 27, 2020

It's an object storage service from my company that implements the S3 API, but I don't think it's an error in the implementation; most of the requests are successful, and the compactor can compact, clean, and downsample blocks. It's just that the retention policy does not work: no deletion-mark.json has been uploaded to those blocks.
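For reference, here is a small sketch of how one could check whether a block has been marked. It assumes the same Thanos packages and client.NewBucket signature as the script posted further down in this thread, that the objstore bucket client exposes an Exists method, and it uses a hypothetical block ULID and a redacted bucket config. Retention only uploads the marker; the data itself is removed by a later cleanup pass.

package main

import (
	"context"
	"fmt"
	"path"

	"github.com/thanos-io/thanos/pkg/logging"
	"github.com/thanos-io/thanos/pkg/objstore/client"
)

func main() {
	// Hypothetical block ULID and redacted bucket config; adjust for your setup.
	const blockID = "01EP7S24YSCAQZWZCMG2DFNS4J"
	confContentYaml := `
type: S3
config:
 bucket: "xxxxxx"
 endpoint: "xxxxx"
 access_key: "xxxxx"
 secret_key: "xxxxxx"
`
	logger := logging.NewLogger("info", "logfmt", "check-deletion-mark")
	bkt, err := client.NewBucket(logger, []byte(confContentYaml), nil, "check-deletion-mark")
	if err != nil {
		logger.Log("msg", err)
		return
	}

	// Retention only *marks* blocks by uploading <ULID>/deletion-mark.json;
	// a later cleanup pass deletes the data. If this prints false for blocks
	// past their retention, the marking step never ran.
	ok, err := bkt.Exists(context.Background(), path.Join(blockID, "deletion-mark.json"))
	if err != nil {
		logger.Log("msg", err)
		return
	}
	fmt.Printf("deletion-mark.json present for %s: %v\n", blockID, ok)
}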

@Kampe

Kampe commented Nov 30, 2020

Seeing the same issues with GCP.

@diemus
Author

diemus commented Jan 11, 2021

After I deleted some blocks manually, the compactor now works as expected. I think there are some problematic blocks that stop the compactor from working; it gets stuck in the grouping process.

@diemus
Author

diemus commented Jan 11, 2021

I extracted the code so that I can apply retention manually. I think it might help people with the same problem.

package main

import (
	"context"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/common/model"
	"github.com/prometheus/prometheus/pkg/relabel"
	"github.com/thanos-io/thanos/pkg/block"
	"github.com/thanos-io/thanos/pkg/compact"
	"github.com/thanos-io/thanos/pkg/logging"
	"github.com/thanos-io/thanos/pkg/objstore/client"
)

func main() {
	filter := block.NewLabelShardedMetaFilter([]*relabel.Config{
		{
			SourceLabels: []model.LabelName{"thanos"},
			Action:       relabel.Keep,
			Regex:        relabel.MustNewRegexp(".*"),
		},
	})

	filters := []block.MetadataFilter{filter}

	logger := logging.NewLogger("debug", "logfmt", "test")
	ctx := context.TODO()

	// 30 days of retention for every resolution, matching the compactor flags above.
	retentionByResolution := map[compact.ResolutionLevel]time.Duration{
		compact.ResolutionLevelRaw: 30 * 24 * time.Hour,
		compact.ResolutionLevel5m:  30 * 24 * time.Hour,
		compact.ResolutionLevel1h:  30 * 24 * time.Hour,
	}

	confContentYaml := `
type: S3
config:
 bucket: "xxxxxx"
 endpoint: "xxxxx"
 access_key: "xxxxx"
 secret_key: "xxxxxx"
 signature_version2: true
 insecure: false
`

	bkt, err := client.NewBucket(logger, []byte(confContentYaml), nil, "aaa")
	if err != nil {
		logger.Log("msg", err)
		return
	}

	metaFetcher, err := block.NewMetaFetcher(logger, 32, bkt, "", nil, filters, nil)
	if err != nil {
		logger.Log("msg", err)
		return
	}

	// ApplyRetentionPolicyByResolution needs a counter for marked blocks; an
	// unregistered throwaway counter is enough for a one-off run like this.
	blocksMarkedForDeletion := promauto.With(nil).NewCounter(prometheus.CounterOpts{})

	metas, _, err := metaFetcher.Fetch(ctx)
	if err != nil {
		logger.Log("msg", err)
		return
	}

	err = compact.ApplyRetentionPolicyByResolution(ctx, logger, bkt, metas, retentionByResolution, blocksMarkedForDeletion)
	if err != nil {
		logger.Log("msg", err)
		return
	}
}
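A note for anyone who wants to run the snippet above: it should probably be built against the same Thanos version as the compactor you run (the pkg/... import paths have changed across releases), and it only uploads deletion-mark.json markers; the marked blocks still get removed later by the compactor's cleanup or by thanos tools bucket cleanup.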

@diemus
Author

diemus commented Jan 27, 2021

The problem shows up again. I have a strong feeling it's caused by some blocks with the same time range: this section of blocks is not grouped while other blocks are still being grouped, and these blocks seem to make the compactor get stuck. I had to add a crontab entry to restart it every 6 hours.

The extra block was created during a migration of a Prometheus instance, which produced 2 blocks with the same time range: one has a 2-hour duration, the other less than one hour.

[screenshot: the two blocks with the same time range]

@diemus
Author

diemus commented Jan 28, 2021

I deleted the block with the same time range, and it works again.

I think it's either because the block does not contain two hours of complete data, or because the two blocks overlap in time with the same compaction level and the same duration.
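For anyone who needs to hunt down such blocks by hand, here is a minimal self-contained sketch of the overlap check described above. It uses a simplified stand-in struct with illustrative field names rather than the real Thanos metadata types.

package main

import "fmt"

// blockMeta is a simplified stand-in for the meta.json fields that matter
// here; it is not the real Thanos metadata.Meta type.
type blockMeta struct {
	ULID             string
	MinTime, MaxTime int64 // block time range in milliseconds since epoch
	Level            int   // compaction level
}

// findOverlaps reports pairs of blocks at the same compaction level whose
// time ranges overlap, i.e. the situation described in the comments above.
func findOverlaps(metas []blockMeta) [][2]blockMeta {
	var overlaps [][2]blockMeta
	for i := 0; i < len(metas); i++ {
		for j := i + 1; j < len(metas); j++ {
			a, b := metas[i], metas[j]
			if a.Level == b.Level && a.MinTime < b.MaxTime && b.MinTime < a.MaxTime {
				overlaps = append(overlaps, [2]blockMeta{a, b})
			}
		}
	}
	return overlaps
}

func main() {
	// Hypothetical example: a full 2h block and a shorter block covering the
	// same window at the same level, like the case described above.
	metas := []blockMeta{
		{ULID: "block-A", MinTime: 0, MaxTime: 2 * 60 * 60 * 1000, Level: 1},
		{ULID: "block-B", MinTime: 0, MaxTime: 50 * 60 * 1000, Level: 1},
	}
	for _, p := range findOverlaps(metas) {
		fmt.Printf("overlapping blocks: %s and %s\n", p[0].ULID, p[1].ULID)
	}
}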

@kakkoyun
Member

@diemus Do you think it is worth adding that code piece as a tool until we find the cause and fix the issue?

@diemus
Author

diemus commented Feb 14, 2021

@kakkoyun I think it would help people who don't know how to run that code piece. As it stands, users can't find any other way to delete expired blocks when the compactor does not work as expected.

@stale

stale bot commented Apr 18, 2021

Hello 👋 Looks like there was no activity on this issue for the last two months.
Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗
If there will be no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.

@stale stale bot added the stale label Apr 18, 2021
@stale

stale bot commented Jun 3, 2021

Closing for now as promised, let us know if you need this to be reopened! 🤗

@stale stale bot closed this as completed Jun 3, 2021
@rsommer

rsommer commented Sep 17, 2021

We are encountering similar problems. Compactor is running with the following settings:
thanos compact --data-dir /var/cache/thanos --objstore.config-file /etc/prometheus/object_store.yml --retention.resolution-raw=30d --retention.resolution-5m=26w --retention.resolution-1h=26w --wait --deduplication.replica-label=replica --deduplication.func=penalty
(the deduplication flags have just been added after the last update to 0.22)

From the thanos bucket view, there seem to be several problems:
[screenshot: thanos bucket view]

First, old blocks are not deleted; there should be no blocks older than 26 weeks.

Second (aside from the time problems), I would expect to see blocks of all levels over the whole time period, not just for August 2020.

As far as I can see, there are no errors in the logs. Even if I run a manual cleanup, no error is logged, but the blocks are not deleted.

The underlying object store is S3 on Ceph with radosgw, if that is of any importance.

@JohanElmis

JohanElmis commented Sep 29, 2021

We have similar issues on a Google bucket (compact view in the graph). All the raw blocks still exist, even though retention is set to keep them for only 8 days. I have not tried any forced cleanup with the CLI yet. All (?) of the old blocks contain a deletion-mark file.
We did have store loading issues before, which seem to have been caused by using a single memcached cache for several store components (fetching data from both AWS S3 and a GCP bucket, but with one cache). Despite different global labels, it seemed to look in the wrong place when the cache was in use. Maybe this messed up the cleanup as well; I will do some more investigation.
[screenshot: compact view of the bucket]
We just moved to 0.23.0 from 0.22.x, but it made no difference.

@yuanhuaiwang

yuanhuaiwang commented Aug 31, 2022

We are encountering similar problems.

@yuanhuaiwang

reopen

@GiedriusS GiedriusS reopened this Aug 31, 2022
@stale stale bot removed the stale label Aug 31, 2022
@yeya24
Contributor

yeya24 commented Aug 31, 2022

@stale

stale bot commented Nov 13, 2022

Hello 👋 Looks like there was no activity on this issue for the last two months.
Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗
If there will be no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.

@stale stale bot added the stale label Nov 13, 2022
@yeya24
Contributor

yeya24 commented Jan 14, 2023

I will close this issue since it is not a bug but rather a scaling issue. For solving this problem, https://thanos.io/tip/operating/compactor-backlog.md/ could help.

@yeya24 yeya24 closed this as completed Jan 14, 2023