Expensive recording rules cause high memory/malloc use in 2.0.0 #3450
Comments
Was the memory usage any different in Prometheus 1.x? I'd assume the allocations happen in the query layer, so 1.x and 2.x should be affected in the same way. And BTW, we have found …
Attaching the heap output for 1.8.1. I don't think it's much different, but it was being masked by the target-heap-size use.
I would expect it's the time frame rather than the function. Both of those should be constant memory, as there's just a handful of floats to be tracked.
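To illustrate the distinction above (a hypothetical sketch; the metric and rule names are invented, not taken from this issue): the memory cost of a rule evaluation is driven mainly by how many samples its range selector pulls in, while the aggregation function itself keeps only a few floats of state per series.

```yaml
groups:
  - name: example
    rules:
      # Hypothetical rules: both apply the same function, but the wider
      # range selector forces far more samples to be loaded per evaluation.
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
      - record: job:http_requests:rate1d
        expr: sum by (job) (rate(http_requests_total[1d]))
```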
So is this 2.0-specific or not? If not, we should remove the 2.0 tag, as it's then a general issue rather than a regression.
I also have issues with memory usage on 2.0, so I'm not sure it's unrelated.
It's possible that something in my infra caused this increase, but I can't explain it. I assume the only thing that could cause such growth would be a growing number of time series, but I'm not sure if that's what open head series shows. I have a few recording rules which are more or less expensive:
Either way, this was running just fine until yesterday. Now it runs out of memory within minutes.
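For context on what makes a recording rule expensive (a hedged, hypothetical sketch, not the reporter's actual rules): cost grows roughly with the number of series the selectors match times the samples per series in the range, since all matched samples are materialized in memory during evaluation.

```yaml
groups:
  - name: expensive_example
    rules:
      # Hypothetical: aggregating a high-cardinality metric means the query
      # engine loads every matched series before reducing the result.
      - record: instance_mode:node_cpu:rate5m
        expr: sum by (instance, mode) (rate(node_cpu_seconds_total[5m]))
```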
gouthamve added the priority/P2, component/rules, component/promql, and not-as-easy-as-it-looks labels on Jan 18, 2018
jacksontj referenced this issue on Jun 5, 2018: storage.Querier doesn't handle offsets efficiently #4224 (closed)
We've made quite a few performance improvements to PromQL since this was filed, so this should be a lot better now.
brian-brazil closed this on Jul 31, 2018
lock bot commented Mar 22, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
SuperQ commented Nov 9, 2017
What did you do?
Have some recording rules that require a large amount of data to evaluate.
What did you expect to see?
Moderate heap use/growth.
What did you see instead? Under which circumstances?
Large heap use/growth. This recording rule requires about 12k metrics, which is admittedly expensive, but the heap grows to 10-15GB.
Environment
System information:
insert output of `uname -srm` here
Prometheus version:
See attached pprof heap.svg.gz.
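For reference, a heap profile like the attached one can be collected from Prometheus' built-in pprof endpoint (a sketch assuming the default localhost:9090 listen address and an available Go toolchain; not the reporter's exact commands):

```shell
# Fetch a heap profile from the running Prometheus and render it as SVG
# (graphviz must be installed for SVG output).
go tool pprof -svg http://localhost:9090/debug/pprof/heap > heap.svg
gzip heap.svg   # yields heap.svg.gz as attached above
```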