Closed
Thanos, Prometheus and Golang version used
thanos v0.1.0rc2
What happened
Thanos Store is consuming ~50 GB of memory during startup.
What you expected to happen
Thanos Store should not consume this much memory during startup.
Full logs to relevant components
store:
level=debug ts=2018-07-27T15:51:21.415788856Z caller=cluster.go:132 component=cluster msg="resolved peers to following addresses" peers=100.96.232.51:10900,100.99.70.149:10900,100.110.182.241:10900,100.126.12.148:10900
level=debug ts=2018-07-27T15:51:21.416254389Z caller=store.go:112 msg="initializing bucket store"
level=warn ts=2018-07-27T15:52:05.28837034Z caller=bucket.go:240 msg="loading block failed" id=01CKE41VDSJMSAJMN6N6K8SABE err="new bucket block: load index cache: download index file: copy object to file: write /var/thanos/store/01CKE41VDSJMSAJMN6N6K8SABE/index: cannot allocate memory"
level=warn ts=2018-07-27T15:52:05.293692332Z caller=bucket.go:240 msg="loading block failed" id=01CKE41VE4XXTN9N55YPCJSPP2 err="new bucket block: load index cache: download index file: copy object to file: write /var/thanos/store/01CKE41VE4XXTN9N55YPCJSPP2/index: cannot allocate memory"
Anything else we need to know
Some time after initialization, RAM usage goes back down to normal levels, around 8 GB.
Another thing that's happening is that my Thanos Compactor consumes far too much RAM as well; the last time it ran, it used up to 60 GB of memory.
I run store with these args:
containers:
- args:
- store
- --log.level=debug
- --tsdb.path=/var/thanos/store
- --s3.endpoint=s3.amazonaws.com
- --s3.access-key=xxx
- --s3.bucket=xxx
- --cluster.peers=thanos-peers.monitoring.svc.cluster.local:10900
- --index-cache-size=2GB
- --chunk-pool-size=8GB
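For context, this container currently runs with no Kubernetes memory limit, so startup spikes can grow until the node itself runs out of memory (which matches the "cannot allocate memory" errors above). A minimal sketch of the surrounding pod spec with a memory cap added, assuming the field values here (limit sizes, image name) are placeholders and not from my actual deployment:

```yaml
# Hypothetical excerpt of the thanos-store container spec.
# The resources block below is an illustration, not my current config;
# the limit would need to exceed index-cache-size + chunk-pool-size
# plus whatever the startup index downloads allocate on top.
containers:
- name: thanos-store
  image: improbable/thanos:v0.1.0-rc.2   # placeholder tag
  args:
  - store
  - --log.level=debug
  - --tsdb.path=/var/thanos/store
  - --index-cache-size=2GB
  - --chunk-pool-size=8GB
  resources:
    requests:
      memory: 10Gi
    limits:
      memory: 16Gi    # container is OOM-killed instead of exhausting the node
```

With a limit set, the kernel OOM-kills the container at the cap rather than failing arbitrary allocations node-wide, which makes the startup spike easier to observe and reproduce.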
Environment:
- OS (e.g. from /etc/os-release): kubernetes running on debian