help request: prometheus collection indicator interface timeout #11274
Comments
What is your APISIX pod resource request & limit configuration? Is it enough?
Can you try adding some logging to check which step costs the most time?
My guess is that the amount of data is too large, which makes the data transmission time-consuming. What concerns me more is why the monitoring fluctuates. Is it because the shared_dict eviction policy is triggered, causing data loss? Do you have any better suggestions for dealing with the fluctuation, or should I try expanding the shared_dict?
Yes, you can try expanding the shared_dict.
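The suggestion above can be applied in APISIX's `config.yaml` by overriding the shared-dict size. This is a minimal sketch, assuming an APISIX 3.x layout where the Prometheus dict is named `prometheus-metrics` and defaults to 10m; check your version's `config-default.yaml` for the exact key:

```yaml
# config.yaml (merged over config-default.yaml)
nginx_config:
  http:
    lua_shared_dict:
      # Default is 10m; raising it reduces the chance of metric
      # entries being evicted between scrapes. 30m is an example
      # value, not a recommendation.
      prometheus-metrics: 30m
```

After changing this, APISIX must be reloaded/restarted for the new shared-dict size to take effect.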
Thank you for your reply. I would like to understand APISIX's shared_dict configuration: the default size of the Prometheus dict is 10 MB, but the monitoring data collected can be 2-3 times that default.
Description
When I use Prometheus to collect APISIX monitoring data, I find that requests to the /apisix/prometheus/metrics endpoint occasionally take a long time, causing the Grafana dashboards to show unstable data. What is the reason?
How should we optimize or resolve the slowness of this APISIX endpoint?
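One way to quantify the slowness described above is to time the metrics endpoint directly and watch how the response size and latency vary between scrapes. A minimal sketch, assuming the Prometheus exporter is exposed on `127.0.0.1:9091` (the address and port depend on your `plugin_attr.prometheus` settings):

```shell
# Measure latency and payload size of the metrics endpoint.
# %{time_total} and %{size_download} are standard curl write-out variables.
curl -o /dev/null -s \
  -w 'time_total: %{time_total}s  size: %{size_download} bytes\n' \
  http://127.0.0.1:9091/apisix/prometheus/metrics
```

Running this in a loop during a period of Grafana instability can show whether the latency spikes correlate with payload size (too many metrics/labels) or occur independently (e.g. worker contention on the shared dict).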
Environment
- APISIX version (run `apisix version`): 3.2.0
- Operating system (run `uname -a`): Linux localhost.localdomain 3.10.0-327.el7.x86_64 SMP Thu Oct 29 17:29:29 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux
- OpenResty / Nginx version (run `openresty -V` or `nginx -V`): openresty/1.21.4.1
- etcd version, if relevant (run `curl http://127.0.0.1:9090/v1/server_info`): 3.5.4
- LuaRocks version (run `luarocks --version`):