[Feature request] config parameter (or option) to limit max acceptable samples/size/lines in single scrape #4342
Comments
You're looking for …
Doh! Apologies -- never spotted that!
KevinAMurray closed this Jul 3, 2018
lock bot commented Mar 22, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
lock bot locked and limited conversation to collaborators Mar 22, 2019
KevinAMurray commented Jul 3, 2018
Proposal
A config parameter, possibly global or even a command-line option, but ideally settable per target, that specifies the maximum size (time series, bytes, or lines would do) acceptable for a single scrape. If the target exceeds that value, the scrape is rejected (e.g. if the Content-Length is greater than a certain value, the scrape is treated as failed).
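For reference, Prometheus's scrape configuration does offer a per-job `sample_limit` that causes a scrape to be treated as failed if it returns more than that many samples (and more recent versions also add a `body_size_limit` on response bytes). A minimal sketch of what using it might look like (the job name and target here are hypothetical):

```yaml
scrape_configs:
  - job_name: "node"            # hypothetical job
    # Treat the scrape as failed if it returns more than 10000 samples,
    # instead of ingesting an unbounded amount of data.
    sample_limit: 10000
    static_configs:
      - targets: ["node-exporter:9100"]  # hypothetical target
```

With `sample_limit` set, an oversized exposition is dropped in its entirety and the target's `up` metric goes to 0 for that scrape, which limits the memory impact on the server.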
We have had OOM issues with scraping, caused by targets returning "too much" data. For a bit more resilience, it would be good to have Prometheus reject metrics where the volume returned is too large. (For some targets the time taken to generate the data causes a timeout, which has the same effect, but that is not always the case, especially if the timeout is deliberately set to a large value.)
Occasionally we can identify the issue via alerts on scrape_samples_scraped, but once the problem has been tripped, Prometheus gets into a crash loop.
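The alert on scrape_samples_scraped described above could be expressed as an alerting rule along these lines (the alert name, threshold, and duration are illustrative assumptions, not from the original report):

```yaml
groups:
  - name: scrape-size
    rules:
      - alert: ScrapeReturningTooManySamples
        # scrape_samples_scraped is a synthetic metric Prometheus records
        # after every scrape; alert when a target's sample count is abnormal.
        expr: scrape_samples_scraped > 10000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.instance }} returned an unusually large number of samples"
```

Such an alert only warns after the fact, which is why a hard per-scrape limit that rejects the data outright is the more robust protection against the crash loop.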