nginx_ingress_controller_requests metrics seems way off #3256
Thanks for the reply. I have tried increase(), but in that case the metrics seem too high (compared with the NLB metrics). I'm not too familiar with the Prometheus query language, so any help or suggestions are welcome.
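For reference, a per-minute total comparable to what an NLB reports might look like the sketch below. This is an assumption, not a query from the thread; the metric name and the `status` label are taken from the query quoted later in the issue.

```promql
# Sketch: total requests over the last minute, broken down by HTTP status.
# increase() extrapolates counter growth over the range window, so with a
# 30s scrape interval a [1m] window covers only ~2 samples and can be noisy.
sum(increase(nginx_ingress_controller_requests[1m])) by (status)
```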
Seems like what I'm trying to achieve is maybe not so simple.
The bug you link to is a deficiency in Prometheus, IMO, but probably not related to your issue. Your Grafana graph seems to average around 100 req/s, which would be 6000 req/m. Maybe this includes more requests than what the LB sees? When I correlate counted requests from the access log in Elasticsearch with …
NGINX Ingress controller version:
0.20.0
Kubernetes version:
1.10.5
It seems like the Prometheus statistics scraped from the ingress engine and the NLB in front of my K8s cluster do not agree on the load, and I happen to know that the NLB is displaying correct statistics. The old ingress engine version using VTS stats does agree with the NLB on the load.
This is my Grafana query:

```promql
round(sum(irate(nginx_ingress_controller_requests[1m])) by (status),1.0)
```
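One thing worth noting (my observation, not stated in the thread): `irate()` returns a per-second instantaneous rate, so its output must be multiplied by 60 before it is comparable with per-minute NLB request counts. A hedged sketch of such a variant, using the same metric and labels as the query above:

```promql
# Sketch: per-minute request rate by status.
# irate() is per-second, hence the * 60 before rounding.
round(sum(irate(nginx_ingress_controller_requests[1m])) by (status) * 60, 1.0)
```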
The real load is around 100 to 2000 requests per minute, as the NLB correctly states.
Please have a look at the images below.
![image](https://user-images.githubusercontent.com/9726307/47012148-d7e62b80-d143-11e8-82be-238f77ea8223.png)
NLB metrics
Ingress engine metrics
![image](https://user-images.githubusercontent.com/9726307/47012191-f4826380-d143-11e8-87d2-aee406256022.png)
Metrics are scraped and collected by Prometheus every 30 seconds.
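That scrape interval interacts with the query's range window: a `[1m]` range over a 30-second scrape interval covers only about two samples, the bare minimum `rate()` and `irate()` need, which tends to produce spiky results. A wider window (the `[5m]` below is an assumed value, not from the thread) is one common way to smooth this out:

```promql
# Sketch: average per-second request rate over 5 minutes, by status.
# A range window of at least 2x (ideally 4x) the scrape interval
# gives rate() enough samples for a stable result.
sum(rate(nginx_ingress_controller_requests[5m])) by (status)
```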