Prometheus Memory increasing frequently #4299
Comments
1.8.0? Wow, why not use the latest 2.3?
While deploying Prometheus 2.3 I am getting the below error:
level=error ts=2018-06-22T09:52:18.439015663Z caller=manager.go:483 component="rule manager" msg="loading groups failed" err="yaml: unmarshal errors:\n line 7: cannot unmarshal !!str
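That "loading groups failed" error is typically what 2.x prints when it is handed rule files in the old 1.x text format: 2.x expects YAML rule groups (and, if I remember correctly, the 2.0-era promtool had an "update rules" subcommand to convert old files). A minimal sketch of the 2.x rule file layout, with an illustrative alert name and expression, not taken from this issue:

groups:
  - name: example
    rules:
      - alert: PrometheusHighMemory   # illustrative alert, not from this issue
        expr: process_resident_memory_bytes{job="prometheus"} > 1e9
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Prometheus resident memory above 1GB"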
It looks as if this is actually a question about usage and not development. No crash, no panic = support question. To make support questions, and all replies, easier to find, I suggest you move this to our user mailing list. If you haven't looked already, you might find your answer in the official docs and examples, or by searching in the users or devs groups. Once your questions have been answered, please add a link to the solution to help other Prometheans in trouble reach this from a search.
krasi-georgiev closed this Jun 22, 2018
kubectl logs alertmanager-deployment-0
level=info ts=2018-06-24T09:29:18.918038222Z caller=main.go:136 msg="Starting Alertmanager" version="(version=0.14.0, branch=HEAD, revision=30af4d051b37ce817ea7e35b56c57a0e2ec9dbb0)"
Still not a bug; please move to the user mailing list.
@krasi-georgiev: I have deployed Prometheus 2.3 but I am facing the same issue. Is there any way to restrict the memory utilization of Prometheus in 2.3?
@sanjay2916 you should definitely move your question to the user mailing list. Prometheus 2.x doesn't have any flag to control memory usage.
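Since 2.x has no heap flag, the practical cap in Kubernetes is the container memory limit, combined with keeping retention modest. A sketch of the relevant Deployment fragment; names and values here are illustrative, not taken from this issue:

containers:
  - name: prometheus
    image: prom/prometheus:v2.3.0
    args:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.retention=15d"  # shorter retention keeps the TSDB working set smaller
    resources:
      requests:
        memory: "512Mi"
      limits:
        memory: "1Gi"  # the kubelet OOM-kills the container above this, so size it with headroom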
@krasi-georgiev: thanks

sanjay2916 commented Jun 21, 2018
I have deployed Prometheus (1.8.0) with Alertmanager (0.5.0) and applied the below settings while deploying.
Prometheus memory keeps increasing and never decreases. I need to run Prometheus with a 512MB heap size and a 1024MB pod memory limit, at constant memory, without the Prometheus server crashing.
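For what it's worth, the 1.x storage engine does expose a heap target as a flag (unlike 2.x). A minimal sketch of the relevant container args for a 1.8 Deployment, with values matching the 512MB goal above; the rest of the Deployment spec is assumed:

args:
  - "-config.file=/etc/prometheus/prometheus.yml"
  - "-storage.local.target-heap-size=536870912"  # ~512MB heap target, a 1.6+ flag
  - "-storage.local.retention=360h"              # 15 days of local retention

If I remember the 1.x storage docs correctly, the heap target should be set to roughly two thirds of the real memory limit, since resident memory runs well above the Go heap.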
Prometheus logs:
kubectl logs prometheus-deployment-0 prometheus
time="2018-06-21T12:57:04Z" level=info msg="Starting prometheus (version=1.8.0, branch=HEAD, revision=3569eef8b1bc062bb5df43181b938277818f365b)" source="main.go:87"
time="2018-06-21T12:57:04Z" level=info msg="Build context (go=go1.9.1, user=root@bd4857492255, date=20171006-22:12:46)" source="main.go:88"
time="2018-06-21T12:57:04Z" level=info msg="Host details (Linux 4.14.48-coreos-r2 #1 SMP Thu Jun 14 08:23:03 UTC 2018 x86_64 prometheus-deployment-0 (none))" source="main.go:89"
time="2018-06-21T12:57:04Z" level=info msg="Loading configuration file /etc/prometheus/prometheus.yml" source="main.go:254"
time="2018-06-21T12:57:04Z" level=info msg="Listening on :9090" source="web.go:341"
time="2018-06-21T12:57:04Z" level=info msg="Loading series map and head chunks..." source="storage.go:428"
time="2018-06-21T12:57:04Z" level=info msg="0 series loaded." source="storage.go:439"
time="2018-06-21T12:57:04Z" level=info msg="Server is Ready to receive requests." source="main.go:230"
time="2018-06-21T12:57:04Z" level=info msg="Starting target manager..." source="targetmanager.go:63"
time="2018-06-21T12:57:04Z" level=info msg="Using pod service account via in-cluster config" source="kubernetes.go:105"
time="2018-06-21T12:57:04Z" level=info msg="Using pod service account via in-cluster config" source="kubernetes.go:105"
time="2018-06-21T12:57:04Z" level=info msg="Using pod service account via in-cluster config" source="kubernetes.go:105"
time="2018-06-21T13:02:04Z" level=info msg="Checkpointing in-memory metrics and chunks..." source="persistence.go:633"
time="2018-06-21T13:02:04Z" level=info msg="Done checkpointing in-memory metrics and chunks in 387.927908ms." source="persistence.go:665"