
compact: Improved memory usage while downsampling #529

Merged

Conversation

@xjewer (Contributor) commented Sep 19, 2018

Changes

Add an instant writer implementation to shrink memory consumption during the downsampling stage.
Encoded chunks are written to the chunks blob files right away, as soon as each series has been handled.
The Flush method closes the chunk writer and syncs all symbols, series, labels, postings and meta data to files.
It still works in one thread, hence operates only on one core.

Estimated memory consumption is unlikely to exceed 1 GB, but it depends on the data set, label sizes and series density: chunk data size (512 MB) + encoded buffers + index data.

Fixes #297
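
To illustrate the streaming approach described above, here is a minimal, self-contained Go sketch with hypothetical names (series and streamedBlockWriter with WriteSeries/Flush are stand-ins, not the actual Thanos API): each series is written as soon as it has been downsampled, and only light index data stays in memory until Flush.

package main

import "fmt"

// series stands in for one labelled set of already downsampled, encoded chunks.
type series struct {
	labels string
	chunks [][]byte
}

// streamedBlockWriter is a stand-in for the writer added in this PR.
type streamedBlockWriter struct {
	indexEntries []string // only label/postings data stays in memory
}

// WriteSeries would append the encoded chunks to the chunk files right away
// (elided here) and keep only the small index entry for the series.
func (w *streamedBlockWriter) WriteSeries(s series) error {
	w.indexEntries = append(w.indexEntries, s.labels)
	return nil
}

// Flush would close the chunk writer and write symbols, series, label indices,
// postings and meta.json (elided here).
func (w *streamedBlockWriter) Flush() error {
	fmt.Printf("finalizing index for %d series\n", len(w.indexEntries))
	return nil
}

func main() {
	w := &streamedBlockWriter{}
	input := []series{{labels: `{__name__="up"}`, chunks: [][]byte{{0x01, 0x02}}}}
	for _, s := range input {
		// Downsampling of the raw chunks happens here, one series at a time.
		if err := w.WriteSeries(s); err != nil {
			panic(err)
		}
	}
	if err := w.Flush(); err != nil {
		panic(err)
	}
}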

Verification

Downsampling the same source block with the previous version and the current one produces the same result.

Memory flame graph: (image attached)

cc @bwplotka

@xjewer force-pushed the feature_downsampling_instant_writer branch 3 times, most recently from 9aa8061 to e8c13e0 on September 20, 2018 01:04
@xjewer changed the title from "[WIP] compact: avoid memory leak while downsampling" to "compact: avoid memory leak while downsampling" on Sep 20, 2018
@bwplotka added this to In progress in v0.2.0 on Sep 21, 2018
@bwplotka moved this from In progress to Needs review in v0.2.0 on Sep 21, 2018
@bwplotka (Member) left a comment

Nice, really good work! Thanks! Some comments, mostly naming.

Let's also add a CHANGELOG entry.

It looks nice. The main worry is that we previously had this done by TSDB compact, so it was maintained outside of Thanos. Clearly the tsdb implementation was not streamed, so we had to write our modified version. But now we have:

  • a tsdb compactor of a random version (as we support multiple versions of Prometheus) compacting our non-compacted TSDB blocks
  • a tsdb compactor of a version we pin, writing the blocks compacted by the Thanos compactor
  • a custom streamed TSDB block writer for downsampling.
    That can be quite worrying, right?

I think ideally we could propose an improved streamed block writer to TSDB in the future and also use it for Thanos compaction itself.

Wonder if we are not missing writer tests then... Plus, it's not that tied to downsampling, so maybe we could put it in "github.com/improbable-eng/thanos/pkg/block"? (not sure - we currently use it for downsampling only)

Review comments on pkg/compact/downsample/downsample.go, pkg/compact/downsample/downsample_test.go and pkg/compact/downsample/writer.go (resolved).
@bwplotka (Member)

Is it rdy for review?

@xjewer (Contributor, Author) commented Oct 12, 2018

Is it rdy for review?

yep

But I left stream_block_writer in the downsample package. Not sure moving it is valuable, as our main goal is to move it to tsdb.

@xjewer (Contributor, Author) commented Oct 12, 2018

I guess this work should proceed for compaction as well. I've got errors at the compaction stage:

level=error ts=2018-10-10T19:35:05.60981672Z caller=compact.go:196 msg="critical error detected; halting" err="compaction failed: compaction: compact blocks [/var/thanos/data/compact/0@{prom=\"1\"}/01CPQ1R4M7CFCBJHE9904K00CR /var/thanos/data/compact/0@{prom=\"1\"}/01CQVTTB08YYA1SW1K03CKHMZE /var/thanos/data/compact/0@{prom=\"1\"}/01CQVVZ7PE4YKJB3WJ1HZER5DC /var/thanos/data/compact/0@{prom=\"1\"}/01CQVWAFTMZFQM3D147VK258Q0 /var/thanos/data/compact/0@{prom=\"1\"}/01CQVXX40S8J32WQTVPNBAM4MN /var/thanos/data/compact/0@{prom=\"1\"}/01CQVYJWKTN17QY1853SGKA35G]: write compaction: write chunks: cannot allocate memory"
$ kubectl exec -it thanos-compactor-0 thanos bucket ls -o json --objstore.config-file=/etc/thanos/bucket.yaml | jq -s 'def tt(t): t/1000 | todate; map(select(.minTime >= 1536213600000 and .maxTime <= 1536364800000 )) | map({ulid: .ulid, minTime: tt(.minTime), maxTime: tt(.maxTime), stats: .stats, thanos: {source: .thanos.source}})'
[
  {
    "ulid": "01CPQ1R4M7CFCBJHE9904K00CR",
    "minTime": "2018-09-06T06:00:00Z",
    "maxTime": "2018-09-06T08:00:00Z",
    "stats": {
      "numSamples": 715119814,
      "numSeries": 6173318,
      "numChunks": 6192734
    },
    "thanos": {
      "source": "sidecar"
    }
  },
  {
    "ulid": "01CQVTTB08YYA1SW1K03CKHMZE",
    "minTime": "2018-09-06T08:00:00Z",
    "maxTime": "2018-09-06T16:00:00Z",
    "stats": {
      "numSamples": 2862862948,
      "numSeries": 6727848,
      "numChunks": 24799606
    },
    "thanos": {
      "source": "compactor"
    }
  },
  {
    "ulid": "01CQVVZ7PE4YKJB3WJ1HZER5DC",
    "minTime": "2018-09-06T16:00:00Z",
    "maxTime": "2018-09-07T00:00:00Z",
    "stats": {
      "numSamples": 2851257287,
      "numSeries": 6781947,
      "numChunks": 24807578
    },
    "thanos": {
      "source": "compactor"
    }
  },
  {
    "ulid": "01CQVWAFTMZFQM3D147VK258Q0",
    "minTime": "2018-09-07T00:00:00Z",
    "maxTime": "2018-09-07T08:00:00Z",
    "stats": {
      "numSamples": 2847793580,
      "numSeries": 6554082,
      "numChunks": 24585629
    },
    "thanos": {
      "source": "compactor"
    }
  },
  {
    "ulid": "01CQVXX40S8J32WQTVPNBAM4MN",
    "minTime": "2018-09-07T08:00:00Z",
    "maxTime": "2018-09-07T16:00:00Z",
    "stats": {
      "numSamples": 2852939529,
      "numSeries": 6601240,
      "numChunks": 24645491
    },
    "thanos": {
      "source": "compactor"
    }
  },
  {
    "ulid": "01CQVYJWKTN17QY1853SGKA35G",
    "minTime": "2018-09-07T16:00:00Z",
    "maxTime": "2018-09-08T00:00:00Z",
    "stats": {
      "numSamples": 2854566446,
      "numSeries": 6711599,
      "numChunks": 24739338
    },
    "thanos": {
      "source": "compactor"
    }
  }
]

6 blocks: 5 from the compactor and 1 from the sidecar

$ kubectl exec -it thanos-compactor-0 thanos bucket ls -o json --objstore.config-file=/etc/thanos/bucket.yaml | jq -s 'map(select(.minTime >= 1536213600000 and .maxTime <= 1536364800000 )) | (map(.stats.numChunks) | add)'
$ 129770376

As you can see, the number of chunks is 129770376, all of which would have to be kept in memory in the current implementation.

@xjewer (Contributor, Author) commented Oct 23, 2018

I guess this work should proceed for compaction as well. I've got errors at the compaction stage: ...

As it turned out, the only remaining consumption is the postings, and it's not as bad as I thought.

(screenshot, 2018-10-23 18:24)

The wrong assumption I had was that k8s pod memory usage accounts the emptyDir volume as a memory resource (which is odd, as the medium=Memory option https://kubernetes.io/docs/concepts/storage/volumes/#emptydir was not enabled). At the same time, and more importantly, emptyDir mounts to the root fs, which in my case is much smaller than the storage volume and wasn't even enough to download the blocks from the bucket. See the graphs below.

Mount with emptyDir: (graph image)

Mount with hostPath: (graph image, 2018-10-23 18:40)

@dmitriy-lukyanchikov commented Nov 7, 2018

@bwplotka @xjewer Hello guys, I tried this code built on existing data, and it looks like after downsampling I lose some metrics. I don't know why, because the metadata shows the same number of metrics in the 5m downsampled block, but some metrics do not exist in the 5m block. I can give you the block of data that I use to replicate this issue and the query that I use. The original code works well. I can provide any information you need on this, because it would be very helpful if a huge block did not consume all the memory on a 256GB server.

@xjewer (Contributor, Author) commented Nov 12, 2018

I can give you the block of data that I use to replicate this issue and the query that I use. The original code works well.

Would be nice to have the block and more details.

@dmitriy-lukyanchikov commented Nov 12, 2018

The block is almost 20GB in size; where can I put it so you can download it?
The details are pretty simple: I ran your code built from the branch with the fix, and after that I ran a query for the metric server_labels, and it was not found at all over the whole time range of the block, but the metric job:node_cpu_usage_percentage:irate5m was found. I think you can start with this metric and the block that I will upload. I tested it twice and the result was the same.
The block can be found in the public S3 bucket s3://thanos-test-downsample-block, region us-west-2.
You can download it by running:

aws --region us-west-2 s3 --no-sign-request sync s3://thanos-test-downsample-block/01CTD38FVF9NHAQ1NN2K4HX5DJ /tmp/01CTD38FVF9NHAQ1NN2K4HX5DJ

I will delete this bucket in one week

@xjewer (Contributor, Author) commented Nov 13, 2018

Thanks, I downloaded this block and will see what the issue is.

@bwplotka (Member)

Nice finding with emptyDir! Why did you use that for the compactor though?

@xjewer force-pushed the feature_downsampling_instant_writer branch from 3546c3d to 8b1bafa on November 14, 2018 01:22
@xjewer (Contributor, Author) commented Nov 14, 2018

Thanks @dmitriy-lukyanchikov, found and fixed the bug.

Need to add more tests for downsampling, and for the stream writer in particular.

Why did you use that for the compactor though?

One of the reasons: k8s cleans that garbage up after the pod dies or exits.

@bwplotka added this to In progress in v0.3.0 via automation on Dec 11, 2018
@bwplotka removed this from Needs review in v0.2.0 on Dec 11, 2018
@milesbxf (Contributor)

We've been running this in production for over a month now, and it's hugely cut down our thanos-compact memory usage (from around 120GB to less than 20GB 🤯)

@bwplotka changed the title from "compact: avoid memory leak while downsampling" to "compact: Improved memory usage while downsampling" on Dec 14, 2018
@xjewer (Contributor, Author) commented Dec 19, 2018

Actually, I'm still seeing leaks at the downsampling stage. It affects us with ~0.5TB data blocks (253e9 data samples), but having such huge blocks is probably a rare case, so we can merge it for now 🤔

@bwplotka

@xjewer (Contributor, Author) commented Feb 5, 2019

Yep, I'm gonna add that commit in here and handle @domgreen's remarks.

@xjewer force-pushed the feature_downsampling_instant_writer branch from 8b1bafa to 8f06dfc on February 6, 2019 02:32
}
}()

sw.chunkWriter, err = chunks.NewWriter(filepath.Join(blockDir, block.ChunksDirname))
@xjewer (Contributor, Author):

I have an idea to pass the writers in as a dependency, but in that case NewStreamedBlockWriter wouldn't be responsible for the block's consistency. Any thoughts? @domgreen @bwplotka

Reply (Member):

Good point, was thinking about this as well.

So if we pass them as arguments, it will make testing easier; however, by doing it like this we make the stream writer responsible for all the writers, making our interface better and with deeper functionality, so I think it's fine for now. 👍
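
For illustration, a hypothetical sketch (names are stand-ins, not the actual constructor) of the two shapes discussed: building the writers inside the constructor versus injecting them as dependencies.

package main

import "path/filepath"

type chunkWriter interface{ Close() error }
type indexWriter interface{ Close() error }

type streamedBlockWriter struct {
	cw chunkWriter
	iw indexWriter
}

// Variant kept in the PR: the constructor opens its own chunk and index writers
// under blockDir, so it alone is responsible for producing a consistent block.
func newStreamedBlockWriter(blockDir string) (*streamedBlockWriter, error) {
	_ = filepath.Join(blockDir, "chunks") // the real writers would be opened here
	return &streamedBlockWriter{}, nil
}

// Alternative discussed above: inject the writers as dependencies; testing gets
// easier, but block consistency becomes a shared responsibility with the caller.
func newStreamedBlockWriterWith(cw chunkWriter, iw indexWriter) *streamedBlockWriter {
	return &streamedBlockWriter{cw: cw, iw: iw}
}

func main() {
	_, _ = newStreamedBlockWriter("/tmp/block")
	_ = newStreamedBlockWriterWith(nil, nil)
}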

@xjewer force-pushed the feature_downsampling_instant_writer branch 3 times, most recently from 22aa6f2 to 5e99ac3 on February 6, 2019 03:16
@bwplotka (Member) left a comment

This is really really good.

Some small suggestions only, and LGTM! Thanks @xjewer, really really good work on this 👍 Happy to finally get it in. ;p Sorry for the delay in review.

Review comments on .errcheck_excludes.txt, pkg/compact/downsample/downsample.go and pkg/compact/downsample/streamed_block_writer.go (resolved).
Add an instant writer implementation to shrink memory consumption during the downsampling stage.
Encoded chunks are written to the chunks blob files right away, as soon as each series has been handled.
The Flush method closes the chunk writer and syncs all symbols, series, labels, postings and meta data to files.
It still works in one thread, hence operates only on one core.

Estimated memory consumption is unlikely to exceed 1 GB, but it depends on the data set, label sizes and series density: chunk data size (512 MB) + encoded buffers + index data.

Fixes thanos-io#297
Add comments and close resources properly.
Use the proper postings index to fetch series data with label sets and chunks in the downsampling process.

One of the trade-offs is to preserve symbols from the raw blocks, as we have to write them before preserving the series.

The stream writer allows downsampling huge data blocks with no need to keep all series in RAM; it only needs to preserve label values and postings references.
@xjewer force-pushed the feature_downsampling_instant_writer branch 2 times, most recently from fb50a78 to 55d755a on February 6, 2019 17:39
@bwplotka (Member) left a comment

Essentially just the constructor, and LGTM.

Review comment on pkg/compact/downsample/streamed_block_writer.go (resolved).
@xjewer force-pushed the feature_downsampling_instant_writer branch from 55d755a to caf4d96 on February 6, 2019 18:04
@bwplotka (Member) left a comment

Perfect. LGTM! @domgreen ?

@bwplotka (Member) commented Feb 6, 2019

Got an offline 👍 from @domgreen.

@bwplotka bwplotka self-assigned this Feb 6, 2019
Reduced to using a public Flush method to finalize the index and meta files.
In case of error, the caller has to remove the block directory with the preserved
garbage inside.

Got rid of using tmp directories and renaming; the final block is synced on disk
before upload.
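
A caller-side sketch of that contract, with hypothetical names rather than the real Thanos functions: the block is written directly into its final directory (no tmp dir and rename), and on any error the caller removes the half-written directory.

package main

import "os"

// blockWriter is a stand-in for the streamed block writer.
type blockWriter struct{ dir string }

func newBlockWriter(dir string) (*blockWriter, error) {
	return &blockWriter{dir: dir}, os.MkdirAll(dir, 0o755)
}

// Flush finalizes the index and meta files; it is the single "commit" step.
func (w *blockWriter) Flush() error { return nil }

// downsampleBlock writes straight into blockDir; on any error the deferred
// cleanup removes the directory together with the preserved garbage inside.
func downsampleBlock(blockDir string) (err error) {
	w, err := newBlockWriter(blockDir)
	if err != nil {
		return err
	}
	defer func() {
		if err != nil {
			os.RemoveAll(blockDir)
		}
	}()
	// WriteSeries calls for every downsampled series would go here.
	return w.Flush()
}

func main() {
	if err := downsampleBlock(os.TempDir() + "/example-block"); err != nil {
		panic(err)
	}
}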
@xjewer force-pushed the feature_downsampling_instant_writer branch from caf4d96 to 12ffbe3 on February 6, 2019 18:47
v0.3.0 automation moved this from Needs review to Reviewer approved Feb 6, 2019
@bwplotka merged commit c5acb9c into thanos-io:master on Feb 6, 2019
v0.3.0 automation moved this from Reviewer approved to Done Feb 6, 2019
@bwplotka (Member) commented Feb 6, 2019

🎉

@salapat11

I'm using commit c5acb9c and still noticing OOMs. The block is 6.5GB. The compactor would not start until the memory was set to 15GB. It fails at the downsampling step.

"version": 1,
"ulid": "01D8XE7SZ01MHP5WRBTEVPPFC5",
"minTime": 1554336000000,
"maxTime": 1555545600000,
"stats": {
"numSamples": 201091519,
"numSeries": 21197285,
"numChunks": 22644937
},
"compaction": {
"level": 4,
"sources": [
"01D7K4MFMAGGM3NCAB082WV410",
"01D7KBGCEKYP7JD352NKENZE5H",
"01D7KJBY5HHBFCNA0PQSYKCN3Q",
"01D7KS7NB92S3GXTV06F1MTC98",
"01D7M03D6J3ZH12GMSCX0V72AX",
"01D7M6Z6YEKRCC6DAFTJ1RTF8A",
"01D7MDTWEXXMR1JSCGMTTRBXQM",
"01D7MMP2Y4GAAKJ3V56YG7D9JN",
"01D7MVHT7XV3K84TAMS7BQXXS2",
"01D7N2DGZESPM6966MSN3PJ4Z4",
"01D7N9987C6M9Z8NWFCS1TB9TM",
"01D7NG4ZPMWFDDEPPKVXER9AP3",
"01D7NQ0PRQ6JB7RQAA3AX282ZY",
"01D7NXWE0THT6G64SZNEJ1EEF6",
"01D7P4R58SDD1J4RPCQSFYXM3B",
"01D7PBKWQ505ZKYMS2TGVJX82F",
"01D7PJFM0PKAEWZV2J56S3BKTQ",
"01D7PSBB0V520K104GGWWP1KPM",
"01D7Q072QW4F011RSM4B323200",
"01D7Q72T5023BJDW2MVRBZSSE6",
"01D7QDYGRZ59YY4ZWCN340QYEZ",
"01D7QMT8BPEDSPJDJJW074KA5G",
"01D7QVNZF25YB8227W06FBYFJJ",
"01D7R2HPGK8K37EER11SX9W3X0",
"01D7R9DDRJ2S6CA3WNN94NMST5",
"01D7RG950K95CRRHH151QFDJVS",
"01D7RQ4W8HYJW3EHS9Q37S5N7G",
"01D7RY0KGMMJ3KDCF6GW8DSX1Y",
"01D7S4WAYKJ0QG1V3RSPVF58GT",
"01D7SBR20RQG6J8A7JYEARPYWV",
"01D7SJKS8H1FK2HMGD0HFXEN6A",
"01D7SSFGMGK9BA6TZ7WFAJSJJ5",
"01D7T0B7RSBMZK48YFH4G1PBAQ",
"01D7T76Z39WFA1ETVVXM5CK6S7",
"01D7TE2P8M1GX8V2SD6MG5XZSH",
"01D7TMYDNNZHGXGFHRPZGH08AT",
"01D7TVT4RKVB8MN9R4DTGBPNZK",
"01D7V2NW7KJVHCGK311BMATX8T",
"01D7V9HK92HRADJWEPSY47122R",
"01D7VGDAJPRSYHGHET65SV2T2G",
"01D7VQ91RPDBBZCBANVMTBH33Y",
"01D7VY4S0WCX26PPWF4D26TBC8",
"01D7W50GCTMAN7Z8SYY2NQGMNF",
"01D7WBW7G5J5AVNSNYQT8012B0",
"01D7WJQYV91CKXE7TQQJYSGA1Q",
"01D7WSKP0AT73VYB7J2WB5C81E",
"01D7X0FDDBNQ7D01CW2EGVDFGD",
"01D7X7B4QBCMTC53EVJQDZSGYT",
"01D7XE6W7RAN1006DFENBWT3A8",
"01D7XN2K5XYGF0W4EM7YV8964X",
"01D7XVYAEMRFNTMRC4D0B3PFH5",
"01D7Y2T1P1A4TDG0M7R5RNFEKS",
"01D7Y9NS0F8TWFR62PCGNWVYS1",
"01D7YGHGBXE4BEB0X1MYVJE7BD",
"01D7YQD7QKE3XYV0TVNME7BH0X",
"01D7YY8YYQ09H9J8JGBYD7QMCW",
"01D7Z54P5JEH3276RYEP7MVHTC",
"01D7ZC0D0V0VKEQ0V8MRXZS44X",
"01D7ZJW48QZ0SHFGB7B3YQZFSQ",
"01D7ZSQW5ZNMS0BMFX07P1WPJQ",
"01D800KJRJ40BCRP6RAHT5GFF2",
"01D807FAA9DG8DF6G2MCCS9Q3V",
"01D80EB1K69RXYJ4KQJTD63PT2",
"01D80N6RQ91VVMEG774XBJ6F7M",
"01D80W2G029HSP3E2VB9XNKWVS",
"01D812Y71TX1B9CR96VQ0TKBVT",
"01D819SZ22BDZV42G3E2BRQMHX",
"01D81GNTHDPCCSRBFAGZPPXY94",
"01D81QHHWCSTF6DA50KZBNQBB6",
"01D81YD41ETCY9PC44M3647BAK",
"01D8258VHS0VZQK2JA4ZQJC3KM",
"01D82C4JHDE0ZKQJG6JRBQ8J2B",
"01D82K09S5X3VK5C1NJKN9EW1W",
"01D82SW117FD8QJC8WFSQRZ1DH",
"01D830QRGMB2M9ZSEBM71960KH",
"01D837KFHCT6Y6MKG2S6JMHMQT",
"01D83EF706TTYRM8DAXH0JYNWV",
"01D83NAY1KBM52XMS8NAD4R4X6",
"01D83W6N9DFW6723VHYEM8ZKJP",
"01D8432HHEJA938A2CR6GWWX1J",
"01D849Y3SF6D6D3ZJB05TANCWZ",
"01D84GSV1KYVKRJD563HSCSMGG",
"01D84QNJKG13AEEFHHZNQ2PQBC",
"01D84YH9HEY44QTCEYP7H1AMPB",
"01D855D0SA2WC39PXSAN61B4WY",
"01D85C8R1CXTVWXH4GD3WD3T5W",
"01D85K4FZZ17V4BAVBZ0BX70FZ",
"01D85T06HAPSN4WPHTH7SFSD04",
"01D860VY32WPC4W6288KV3WER6",
"01D867QNDHCDY907H45SMCJN3N",
"01D86EKC96V930APQM2RNP27E7",
"01D86NF3HH8VP1VTK8003ARYKF",
"01D86WATVCPTQAR8C7FR3JTC0M",
"01D8736J1BWB2507ZRBDWB9DJJ",
"01D87A2ECSBYY6XDDNRMTNVQ6K",
"01D87GY0J7TJXKBD33C5CDCR8G",
"01D87QSQSEMKMK3R1ACDTF9TJK",
"01D87YNF1RMYMADR6A4HGZJXAS",
"01D885H69RY8QR5GA87QBBM6KG",
"01D88CCXHG65THXZCQ9Y77FHNX",
"01D88K8MSJHSMMHV3PQ0FFVWVW",
"01D88T4C1MPW7C14SNES9XG8XA",
"01D891039WE42AFA2MYC76BRAZ",
"01D897VTHYF3CNP4T66FV5N1TD",
"01D89EQHSWEZQCPKXYE2WB1N04",
"01D89NK91HHS6AERHS86FG57J1",
"01D89WF0H8BDKCV23VZMTB463W",
"01D8A3AQQEWEYNHRZDGPBB79K0",
"01D8AA6ESKGWMFYB0E3TZTAYG4",
"01D8AH26EEX1HEQHY76WKH5AKG",
"01D8AQXX9WH3EE93R9B7M1M2MN",
"01D8AYSMPQY22Y9XYMT0TKBWTX",
"01D8B5NBSH7V6Z3EFB444CDV56",
"01D8BCH36FJ92M4F7DGRFMHQMW",
"01D8BKCT9J5A4P1Y796JFNAJ2A",
"01D8BT8HHV1W2M9RY261Y9JA9K",
"01D8C148X8P0TXCMTBYK0JP9BG",
"01D8C8001P0ZNGC6YRHB2G8VD9",
"01D8CEVQF14JAE77KBW3J1QHGJ",
"01D8CNQEHRY3KKX9NZ6PZ2R0JT",
"01D8CWK63F09QE6TFM8H1NDG3S",
"01D8D3EX7YMS6BFFKJ07W4BSCM",
"01D8DAAMJ2G8JQZW5SGA0FPVR5",
"01D8DH6BHH8CB8VASGBN1SGQJV",
"01D8DR22SNH090DHF77VEND7NV",
"01D8DYXT1M0PH6TA54SQGBW0XP",
"01D8E5SHCC2SMJ723AZPDY6HXX",
"01D8ECN8HKFPD3YCC45BA4NZGG",
"01D8EKGZSJ3BYBQB8GW6DEPZ7G",
"01D8ETCQ64P6NS1C77Y1DMCFYH",
"01D8F18E9NP194ZSBN6YDSYMG5",
"01D8F845RKB20KFYF63CZVAR21",
"01D8FEZX0TTYE7WPKHZ65R9TKJ",
"01D8FNVM1R2GBR6X4YF1SPAGVB",
"01D8FWQB9MGHFQGBFC7JX1R1NR",
"01D8G3K2HGBC4409V7N1JRBYZY",
"01D8GAET134JGC9ZD0TDA155CX",
"01D8GHAH1JV3P6192K37X7S57E",
"01D8GR68Q6GHRH89E67KXVYDDC",
"01D8GZ1ZJ14ERAYC5Y6Q7W7YFD",
"01D8H5XQ73FDRSB4CAPFFM5GM2",
"01D8HCSE29S6AD03T13C9XQW52",
"01D8HKN59MFXYWH1C4TFQ9MFA8",
"01D8HTGWHQQZX6B3YAJ1PRBBXM",
"01D8J1CKVQTWTD8F9YNPPD4WQ7",
"01D8J88BADRFXV4N9MBN19Q2PN",
"01D8JF429K0MWQF1692KMQA0PZ",
"01D8JNZSQ9X7TVN3AFHD0JWE50",
"01D8JWVGTASV6EGVB788SK6ZYW",
"01D8K3Q82124CMBAMCH20SP2PE",
"01D8KAJZ9VBX9XDWSSCHKS9PC3",
"01D8KHEPZPD7K26FD48W39W7K8",
"01D8KRADSYSVQ9KX62RCMBPJZA",
"01D8KZ65BW6ZK07NFX5MCK5PSQ",
"01D8M61W9N1FHV60AJX99Q93HR",
"01D8MCXKV0CA9BTJ49K1PTQQ01",
"01D8MKSASFJJVBXC6KA0EW18BZ",
"01D8MTN21KX234W7CFD0J1KDEX",
"01D8N1GS9YZ9GFFT1NRGTKCT6F",
"01D8N8CGHVJT89NE24DQGDBV78",
"01D8NF87SX1S3CD2P5NRFH1CKP",
"01D8NP3Z1NBW7J6H2GFKXZ90C8",
"01D8NWZP9PCFFPH7KVJ4GX0BEE",
"01D8P3VE0XM2FXS4JZXJV8R4GQ",
"01D8PAQ57Q570NWYYASGG4TE67",
"01D8PHJWBDDQ58ZRC0YEK55VC1",
"01D8PREK9QSQWHH21KXFA0ZJ6V",
"01D8PZAFMRZWZY62ENNE0E5APJ"
],
"parents": [
{
"ulid": "01D7RBG9K4ZA1BVQV3RTF9315Q",
"minTime": 1554336000000,
"maxTime": 1554508800000
},
{
"ulid": "01D7XG9ACQG65WFVC8SYCBXFNW",
"minTime": 1554508800000,
"maxTime": 1554681600000
},
{
"ulid": "01D82N31EP3RWH160BXJ8ZHK2V",
"minTime": 1554681600000,
"maxTime": 1554854400000
},
{
"ulid": "01D87SX76F2XA9QVVTTMEPRKGK",
"minTime": 1554854400000,
"maxTime": 1555027200000
},
{
"ulid": "01D8CYNSBR8EH1DCNXFAQP8MZE",
"minTime": 1555027200000,
"maxTime": 1555200000000
},
{
"ulid": "01D8J3AXJW68NGSBNC7RHF4H57",
"minTime": 1555200000000,
"maxTime": 1555372800000
},
{
"ulid": "01D8Q8D60TJKP3NX9TW17MECBV",
"minTime": 1555372800000,
"maxTime": 1555545600000
}
]
},
"thanos": {
"labels": {
"label": "metrics"
},
"downsample": {
"resolution": 300000
},
"source": "compactor"
}

Downsampled block:

"version": 1,
"ulid": "01D92HFHSAQEB95J6D4F64292Q",
"minTime": 1554336000000,
"maxTime": 1555545600000,
"stats": {
"numSamples": 36234626,
"numSeries": 21197285,
"numChunks": 21250376
},
"compaction": {
"level": 4,
"sources": [
"01D7K4MFMAGGM3NCAB082WV410",
"01D7KBGCEKYP7JD352NKENZE5H",
"01D7KJBY5HHBFCNA0PQSYKCN3Q",
"01D7KS7NB92S3GXTV06F1MTC98",
"01D7M03D6J3ZH12GMSCX0V72AX",
"01D7M6Z6YEKRCC6DAFTJ1RTF8A",
"01D7MDTWEXXMR1JSCGMTTRBXQM",
"01D7MMP2Y4GAAKJ3V56YG7D9JN",
"01D7MVHT7XV3K84TAMS7BQXXS2",
"01D7N2DGZESPM6966MSN3PJ4Z4",
"01D7N9987C6M9Z8NWFCS1TB9TM",
"01D7NG4ZPMWFDDEPPKVXER9AP3",
"01D7NQ0PRQ6JB7RQAA3AX282ZY",
"01D7NXWE0THT6G64SZNEJ1EEF6",
"01D7P4R58SDD1J4RPCQSFYXM3B",
"01D7PBKWQ505ZKYMS2TGVJX82F",
"01D7PJFM0PKAEWZV2J56S3BKTQ",
"01D7PSBB0V520K104GGWWP1KPM",
"01D7Q072QW4F011RSM4B323200",
"01D7Q72T5023BJDW2MVRBZSSE6",
"01D7QDYGRZ59YY4ZWCN340QYEZ",
"01D7QMT8BPEDSPJDJJW074KA5G",
"01D7QVNZF25YB8227W06FBYFJJ",
"01D7R2HPGK8K37EER11SX9W3X0",
"01D7R9DDRJ2S6CA3WNN94NMST5",
"01D7RG950K95CRRHH151QFDJVS",
"01D7RQ4W8HYJW3EHS9Q37S5N7G",
"01D7RY0KGMMJ3KDCF6GW8DSX1Y",
"01D7S4WAYKJ0QG1V3RSPVF58GT",
"01D7SBR20RQG6J8A7JYEARPYWV",
"01D7SJKS8H1FK2HMGD0HFXEN6A",
"01D7SSFGMGK9BA6TZ7WFAJSJJ5",
"01D7T0B7RSBMZK48YFH4G1PBAQ",
"01D7T76Z39WFA1ETVVXM5CK6S7",
"01D7TE2P8M1GX8V2SD6MG5XZSH",
"01D7TMYDNNZHGXGFHRPZGH08AT",
"01D7TVT4RKVB8MN9R4DTGBPNZK",
"01D7V2NW7KJVHCGK311BMATX8T",
"01D7V9HK92HRADJWEPSY47122R",
"01D7VGDAJPRSYHGHET65SV2T2G",
"01D7VQ91RPDBBZCBANVMTBH33Y",
"01D7VY4S0WCX26PPWF4D26TBC8",
"01D7W50GCTMAN7Z8SYY2NQGMNF",
"01D7WBW7G5J5AVNSNYQT8012B0",
"01D7WJQYV91CKXE7TQQJYSGA1Q",
"01D7WSKP0AT73VYB7J2WB5C81E",
"01D7X0FDDBNQ7D01CW2EGVDFGD",
"01D7X7B4QBCMTC53EVJQDZSGYT",
"01D7XE6W7RAN1006DFENBWT3A8",
"01D7XN2K5XYGF0W4EM7YV8964X",
"01D7XVYAEMRFNTMRC4D0B3PFH5",
"01D7Y2T1P1A4TDG0M7R5RNFEKS",
"01D7Y9NS0F8TWFR62PCGNWVYS1",
"01D7YGHGBXE4BEB0X1MYVJE7BD",
"01D7YQD7QKE3XYV0TVNME7BH0X",
"01D7YY8YYQ09H9J8JGBYD7QMCW",
"01D7Z54P5JEH3276RYEP7MVHTC",
"01D7ZC0D0V0VKEQ0V8MRXZS44X",
"01D7ZJW48QZ0SHFGB7B3YQZFSQ",
"01D7ZSQW5ZNMS0BMFX07P1WPJQ",
"01D800KJRJ40BCRP6RAHT5GFF2",
"01D807FAA9DG8DF6G2MCCS9Q3V",
"01D80EB1K69RXYJ4KQJTD63PT2",
"01D80N6RQ91VVMEG774XBJ6F7M",
"01D80W2G029HSP3E2VB9XNKWVS",
"01D812Y71TX1B9CR96VQ0TKBVT",
"01D819SZ22BDZV42G3E2BRQMHX",
"01D81GNTHDPCCSRBFAGZPPXY94",
"01D81QHHWCSTF6DA50KZBNQBB6",
"01D81YD41ETCY9PC44M3647BAK",
"01D8258VHS0VZQK2JA4ZQJC3KM",
"01D82C4JHDE0ZKQJG6JRBQ8J2B",
"01D82K09S5X3VK5C1NJKN9EW1W",
"01D82SW117FD8QJC8WFSQRZ1DH",
"01D830QRGMB2M9ZSEBM71960KH",
"01D837KFHCT6Y6MKG2S6JMHMQT",
"01D83EF706TTYRM8DAXH0JYNWV",
"01D83NAY1KBM52XMS8NAD4R4X6",
"01D83W6N9DFW6723VHYEM8ZKJP",
"01D8432HHEJA938A2CR6GWWX1J",
"01D849Y3SF6D6D3ZJB05TANCWZ",
"01D84GSV1KYVKRJD563HSCSMGG",
"01D84QNJKG13AEEFHHZNQ2PQBC",
"01D84YH9HEY44QTCEYP7H1AMPB",
"01D855D0SA2WC39PXSAN61B4WY",
"01D85C8R1CXTVWXH4GD3WD3T5W",
"01D85K4FZZ17V4BAVBZ0BX70FZ",
"01D85T06HAPSN4WPHTH7SFSD04",
"01D860VY32WPC4W6288KV3WER6",
"01D867QNDHCDY907H45SMCJN3N",
"01D86EKC96V930APQM2RNP27E7",
"01D86NF3HH8VP1VTK8003ARYKF",
"01D86WATVCPTQAR8C7FR3JTC0M",
"01D8736J1BWB2507ZRBDWB9DJJ",
"01D87A2ECSBYY6XDDNRMTNVQ6K",
"01D87GY0J7TJXKBD33C5CDCR8G",
"01D87QSQSEMKMK3R1ACDTF9TJK",
"01D87YNF1RMYMADR6A4HGZJXAS",
"01D885H69RY8QR5GA87QBBM6KG",
"01D88CCXHG65THXZCQ9Y77FHNX",
"01D88K8MSJHSMMHV3PQ0FFVWVW",
"01D88T4C1MPW7C14SNES9XG8XA",
"01D891039WE42AFA2MYC76BRAZ",
"01D897VTHYF3CNP4T66FV5N1TD",
"01D89EQHSWEZQCPKXYE2WB1N04",
"01D89NK91HHS6AERHS86FG57J1",
"01D89WF0H8BDKCV23VZMTB463W",
"01D8A3AQQEWEYNHRZDGPBB79K0",
"01D8AA6ESKGWMFYB0E3TZTAYG4",
"01D8AH26EEX1HEQHY76WKH5AKG",
"01D8AQXX9WH3EE93R9B7M1M2MN",
"01D8AYSMPQY22Y9XYMT0TKBWTX",
"01D8B5NBSH7V6Z3EFB444CDV56",
"01D8BCH36FJ92M4F7DGRFMHQMW",
"01D8BKCT9J5A4P1Y796JFNAJ2A",
"01D8BT8HHV1W2M9RY261Y9JA9K",
"01D8C148X8P0TXCMTBYK0JP9BG",
"01D8C8001P0ZNGC6YRHB2G8VD9",
"01D8CEVQF14JAE77KBW3J1QHGJ",
"01D8CNQEHRY3KKX9NZ6PZ2R0JT",
"01D8CWK63F09QE6TFM8H1NDG3S",
"01D8D3EX7YMS6BFFKJ07W4BSCM",
"01D8DAAMJ2G8JQZW5SGA0FPVR5",
"01D8DH6BHH8CB8VASGBN1SGQJV",
"01D8DR22SNH090DHF77VEND7NV",
"01D8DYXT1M0PH6TA54SQGBW0XP",
"01D8E5SHCC2SMJ723AZPDY6HXX",
"01D8ECN8HKFPD3YCC45BA4NZGG",
"01D8EKGZSJ3BYBQB8GW6DEPZ7G",
"01D8ETCQ64P6NS1C77Y1DMCFYH",
"01D8F18E9NP194ZSBN6YDSYMG5",
"01D8F845RKB20KFYF63CZVAR21",
"01D8FEZX0TTYE7WPKHZ65R9TKJ",
"01D8FNVM1R2GBR6X4YF1SPAGVB",
"01D8FWQB9MGHFQGBFC7JX1R1NR",
"01D8G3K2HGBC4409V7N1JRBYZY",
"01D8GAET134JGC9ZD0TDA155CX",
"01D8GHAH1JV3P6192K37X7S57E",
"01D8GR68Q6GHRH89E67KXVYDDC",
"01D8GZ1ZJ14ERAYC5Y6Q7W7YFD",
"01D8H5XQ73FDRSB4CAPFFM5GM2",
"01D8HCSE29S6AD03T13C9XQW52",
"01D8HKN59MFXYWH1C4TFQ9MFA8",
"01D8HTGWHQQZX6B3YAJ1PRBBXM",
"01D8J1CKVQTWTD8F9YNPPD4WQ7",
"01D8J88BADRFXV4N9MBN19Q2PN",
"01D8JF429K0MWQF1692KMQA0PZ",
"01D8JNZSQ9X7TVN3AFHD0JWE50",
"01D8JWVGTASV6EGVB788SK6ZYW",
"01D8K3Q82124CMBAMCH20SP2PE",
"01D8KAJZ9VBX9XDWSSCHKS9PC3",
"01D8KHEPZPD7K26FD48W39W7K8",
"01D8KRADSYSVQ9KX62RCMBPJZA",
"01D8KZ65BW6ZK07NFX5MCK5PSQ",
"01D8M61W9N1FHV60AJX99Q93HR",
"01D8MCXKV0CA9BTJ49K1PTQQ01",
"01D8MKSASFJJVBXC6KA0EW18BZ",
"01D8MTN21KX234W7CFD0J1KDEX",
"01D8N1GS9YZ9GFFT1NRGTKCT6F",
"01D8N8CGHVJT89NE24DQGDBV78",
"01D8NF87SX1S3CD2P5NRFH1CKP",
"01D8NP3Z1NBW7J6H2GFKXZ90C8",
"01D8NWZP9PCFFPH7KVJ4GX0BEE",
"01D8P3VE0XM2FXS4JZXJV8R4GQ",
"01D8PAQ57Q570NWYYASGG4TE67",
"01D8PHJWBDDQ58ZRC0YEK55VC1",
"01D8PREK9QSQWHH21KXFA0ZJ6V",
"01D8PZAFMRZWZY62ENNE0E5APJ"
],
"parents": [
{
"ulid": "01D7RBG9K4ZA1BVQV3RTF9315Q",
"minTime": 1554336000000,
"maxTime": 1554508800000
},
{
"ulid": "01D7XG9ACQG65WFVC8SYCBXFNW",
"minTime": 1554508800000,
"maxTime": 1554681600000
},
{
"ulid": "01D82N31EP3RWH160BXJ8ZHK2V",
"minTime": 1554681600000,
"maxTime": 1554854400000
},
{
"ulid": "01D87SX76F2XA9QVVTTMEPRKGK",
"minTime": 1554854400000,
"maxTime": 1555027200000
},
{
"ulid": "01D8CYNSBR8EH1DCNXFAQP8MZE",
"minTime": 1555027200000,
"maxTime": 1555200000000
},
{
"ulid": "01D8J3AXJW68NGSBNC7RHF4H57",
"minTime": 1555200000000,
"maxTime": 1555372800000
},
{
"ulid": "01D8Q8D60TJKP3NX9TW17MECBV",
"minTime": 1555372800000,
"maxTime": 1555545600000
}
]
},
"thanos": {
"labels": {
"label": "metrics"
},
"downsample": {
"resolution": 3600000
},
"source": "compactor"
}

Is this expected? Over time, the block size may increase. Should the memory be increased to support bigger blocks?

@xjewer deleted the feature_downsampling_instant_writer branch on April 24, 2019 09:18
Projects: v0.3.0 (Done)
Development

Successfully merging this pull request may close these issues.

downsampling: Optimize downsample memory usage
7 participants