Recommended settings for running a machine with 2GB ram #1836
Comments
See https://prometheus.io/docs/operating/storage/. I'd start with a lower -storage.local.memory-chunks value.
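The storage doc linked above centers on the 0.x local-storage flags. As a sketch only (the exact value suggested in this comment was lost in extraction, and both numbers below are assumptions for a 2 GiB host, not the maintainer's recommendation), the container could be started along these lines:

```shell
# Prometheus 0.x used single-dash flags. -storage.local.memory-chunks caps
# how many 1 KiB sample chunks are kept in memory; max-chunks-to-persist
# bounds the queue of chunks waiting to be written to disk.
# Values below are illustrative assumptions for a 2 GiB machine.
docker run -d --name prometheus -p 9090:9090 \
  prom/prometheus \
  -storage.local.memory-chunks=524288 \
  -storage.local.max-chunks-to-persist=262144
```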
Yeah, light load. You are only running out of memory because Prometheus tries to utilize ~4GiB by default. With the setting above, you should be all good.
Ok, that sounds reasonable to me. Will try it out. Thx for the support!
svenmueller closed this Jul 20, 2016
svenmueller reopened this Jul 21, 2016
With the settings, I expect Prometheus to fully utilize your memory. If you still run OOM, you can tweak the flag even lower. More context and tweaks are described at https://prometheus.io/docs/operating/storage/. Ideally, Prometheus would auto-tune memory usage, but that's a non-trivial problem; see #455.
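The arithmetic behind this advice can be sketched as follows. In the 0.x local storage each chunk holds 1 KiB of sample data, and the storage docs of that era gave a rule of thumb of budgeting roughly three times the configured chunk memory for the whole process; both numbers are approximations, not guarantees.

```python
CHUNK_BYTES = 1024  # one sample chunk in Prometheus 0.x local storage

def chunk_memory_bytes(memory_chunks: int) -> int:
    """Memory held by sample chunks alone for -storage.local.memory-chunks."""
    return memory_chunks * CHUNK_BYTES

def rough_total_rss_bytes(memory_chunks: int, overhead_factor: float = 3.0) -> float:
    """Rule-of-thumb total footprint (chunks plus indexes, GC headroom, etc.).
    The 3x factor is the old docs' rough guidance; treat it as an assumption."""
    return chunk_memory_bytes(memory_chunks) * overhead_factor

# The 0.20 default of 1048576 chunks means 1 GiB of chunk data, i.e. ~3 GiB
# total under the rule of thumb, which explains OOMs on a 2 GiB host.
print(rough_total_rss_bytes(1048576) / 2**30)

# Aiming chunk memory at ~1/3 of a 2 GiB host's RAM gives a starting value:
target_chunks = (2 * 2**30) // 3 // CHUNK_BYTES
print(target_chunks)
```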
In the meantime I was trying a lower memory-chunks setting.
Try lower values until you don't run into OOMs anymore. You can also perform heap profiling to find out where the memory is going; start with the heap profile exposed at /debug/pprof/heap.
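For Go services like Prometheus, heap profiling usually goes through the built-in pprof endpoints. A sketch, assuming the default listen address of localhost:9090 (older Go toolchains also want the path to the binary as the first argument):

```shell
# Fetch a heap profile from the running server and open the interactive
# pprof prompt, sorted by live (in-use) allocations:
go tool pprof -inuse_space http://localhost:9090/debug/pprof/heap
# At the prompt, `top` lists the biggest allocators and `web` renders a
# call graph (the latter needs graphviz installed).
```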
philicious commented Aug 1, 2016
@svenmueller I'm struggling with a similar issue. If you found a fix, please share it.
fluxrad referenced this issue Aug 11, 2016: Kubernetes: Memory usage continually increases. Process enters crash recovery loop. #1885 (closed)
brian-brazil closed this Oct 26, 2016
lock bot commented Mar 24, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.



svenmueller commented Jul 20, 2016
Hi,
We are using a machine with 2 CPUs/2 GB RAM (Ubuntu 14.04) where Prometheus and Alertmanager are running as Docker containers.
Currently the Prometheus application runs OOM at least once a day. The Prometheus server instance scrapes < 30 targets (every 15s) to collect the node machine metrics exported by node-exporter on the targets.
Which settings would you recommend for this kind of setup? Can the settings be tuned to get a stable Prometheus, or is the machine simply too small to handle this scenario?
Thx,
Sven
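A back-of-envelope check on the load described above, which is why it reads as light: only the target count and scrape interval come from the issue, while the per-target series count is an assumption (node-exporter of that era exposed on the order of several hundred series per host).

```python
# Figures from the issue text vs. assumed:
targets = 30             # "< 30 targets" (upper bound from the issue)
interval_s = 15          # scrape interval from the issue
series_per_target = 700  # assumed, not stated anywhere in the thread

# Ingestion rate: every series yields one sample per scrape.
samples_per_second = targets * series_per_target / interval_s
print(samples_per_second)  # 1400.0 under these assumptions
```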
Current cmd line flags for docker container
Stacktrace
fatal error: runtime: out of memory

runtime stack:
runtime.throw(0xfca050, 0x16)
	/usr/local/go/src/runtime/panic.go:547 +0x90
runtime.sysMap(0xc8864e0000, 0x100000, 0x0, 0x1528e98)
	/usr/local/go/src/runtime/mem_linux.go:206 +0x9b
runtime.(*mheap).sysAlloc(0x150e380, 0x100000, 0x1061ea670)
	/usr/local/go/src/runtime/malloc.go:429 +0x191
runtime.(*mheap).grow(0x150e380, 0x28, 0x0)
	/usr/local/go/src/runtime/mheap.go:651 +0x63
runtime.(*mheap).allocSpanLocked(0x150e380, 0x28, 0xc81ccd9400)
	/usr/local/go/src/runtime/mheap.go:553 +0x4f6
runtime.(*mheap).alloc_m(0x150e380, 0x28, 0x100000000, 0x150f8f0)
	/usr/local/go/src/runtime/mheap.go:437 +0x119
runtime.(*mheap).alloc.func1()

Runtime Information
Uptime: 2016-07-20 10:44:09.210337115 +0000 UTC
Build Information
Version: 0.20.0
Revision: aeab25c
Branch: master
BuildUser: root@77050118f904
BuildDate: 20160616-08:38:14
GoVersion: go1.6.2