Strange performance issues #3189
sashgorokhov commented Sep 18, 2017

For the last few days I've noticed severe Prometheus performance problems. CPU sits at 100%, and memory and I/O usage have gone far beyond their limits (I set -storage.local.target-heap-size to 150MB). I've also noticed periodic gaps in metric values in Grafana. The first time, I thought it was some sort of bug, so I stopped Prometheus, removed its volume, and went a day or two without it. Yesterday I launched it again, and again I had to shut it down because of the same insane resource usage.

If you need any additional info, please let me know. You may close this issue, but please, I'd like to know what went wrong with my poor little Prometheus server.

P.S. Maybe the flag -storage.local.retention=604800s, which I added recently, has driven Prometheus insane?

Environment

Docker containers
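For anyone debugging similar symptoms: Prometheus 1.x exports its own in-memory series count and a persistence urgency score, which make problems like this measurable. A minimal diagnostic sketch, assuming a default 1.x setup reachable on localhost:9090:

```sh
# In-memory time series count (Prometheus 1.x self-metric); this is where a
# figure like "73k time series" would come from.
curl -s 'http://localhost:9090/api/v1/query?query=prometheus_local_storage_memory_series'

# Persistence urgency score in [0,1]; as it approaches 1, Prometheus throttles
# and eventually stops ingestion to stay under -storage.local.target-heap-size.
curl -s 'http://localhost:9090/api/v1/query?query=prometheus_local_storage_persistence_urgency_score'
```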
beorn7 commented

150MiB is way too little for the 73k time series you have in your setup. Prometheus stops ingestion because it has hit the memory limit you have given to it. Since this is not a bug but more a discussion about how to use Prometheus correctly in a certain scenario, it makes more sense to bring this to the prometheus-users mailing list rather than seeking support in a GitHub issue. On the mailing list, more people are available to potentially respond to your question, and the whole community can benefit from the answers provided.
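A minimal sketch of a roomier configuration along these lines (the prom/prometheus:v1.7.1 tag, the 1GiB heap target, and the paths are illustrative assumptions, not tuned recommendations):

```sh
# 1073741824 bytes = 1GiB heap target instead of 150MiB; size this to your
# actual series count and available RAM.
# 168h is the same 7 days as the 604800s retention in the report above.
docker run -d --name prometheus -p 9090:9090 \
  -v prometheus-data:/prometheus \
  prom/prometheus:v1.7.1 \
  -config.file=/etc/prometheus/prometheus.yml \
  -storage.local.path=/prometheus \
  -storage.local.target-heap-size=1073741824 \
  -storage.local.retention=168h
```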
beorn7 closed this Sep 19, 2017
sashgorokhov commented

@beorn7 I'll try it, thank you!
lock bot commented Mar 23, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.