/metrics endpoint is way too slow under Wildfly 8 #246
Comments
Have you tried the latest version, currently in the master branch? Several perf improvements were made in master that haven't been released yet.
This is something to take up with Wildfly; we don't control how slow JMX endpoints are.
Just tried your suggestion and compiled a fresh version of the master branch: the results are better, down to 30 seconds!
Still not ideal, but it falls within an acceptable range for my team. Thanks a lot!
I'm having the same issue, where the usual scrape time is about 43 seconds even with the latest code. In my case, I already know the object names in advance, so I set them explicitly in the configuration.
This improves scrape time from 43s to only 0.028s.
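For reference, a minimal sketch of the kind of jmx_exporter configuration being described, assuming the whitelist option and using illustrative object names (the actual names would depend on the application):

```yaml
# jmx_exporter config sketch: scrape only the listed MBeans instead of
# querying every object name, which avoids the expensive full JMX query.
whitelistObjectNames:
  - "java.lang:type=Memory"        # illustrative object names, not from the report
  - "java.lang:type=Threading"
  - "jboss.as:subsystem=datasources,data-source=*,statistics=pool"
rules:
  - pattern: ".*"
```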
see prometheus#246 (comment) With this patch, the scrape time (`jmx_scrape_duration_seconds`) dropped from 15 seconds to 0.9s (using jmx_exporter 0.7). I can therefore avoid increasing `scrape_timeout` and `scrape_interval`. Signed-off-by: Frank Lin Piat <fpiat@klabs.be>
I have submitted a PR to add whitelistObjectNames, see #284. Thanks @n3v3rf411
This was reported earlier in #175, which was closed by adding a footnote to the documentation.
I'm running Wildfly instances in a few Docker containers and the /metrics endpoint takes way too long. From inside the container:
We are getting better results with Glassfish 3.1.2.2 or Wildfly 10. Is there any particular reason why this takes so long under Wildfly 8? Any tips on how to filter metrics to make the endpoint faster?
Otherwise, we'll have to configure Prometheus with a `scrape_interval` of at least 2 minutes!
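As a stopgap, that interval can be set per job in the Prometheus scrape config. A minimal sketch, with a placeholder job name and target:

```yaml
# prometheus.yml fragment: widen the interval for the slow endpoint only.
scrape_configs:
  - job_name: wildfly-jmx            # placeholder job name
    scrape_interval: 2m              # accommodates the slow /metrics endpoint
    scrape_timeout: 90s              # must not exceed scrape_interval
    static_configs:
      - targets: ['wildfly-host:9404']   # placeholder host:port
```

Filtering the scraped MBeans is still preferable, since a 2-minute interval makes the metrics far less useful for alerting.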