Improve HTTP method label handling in prometheus metrics
Description
What happened:
I noticed a problem with metrics that can lead to metric granularity explosion.
In the default config the ingress will accept any HTTP method, even a nonsense one. HTTP methods are exposed as values of the method label in metrics. This means that requests with random, unique HTTP methods ("AAA", "AAB", "AAC", and so on) each create a new time series, causing the number of exported metrics to explode, which can break monitoring or possibly even affect the controller itself (DoS).
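For illustration, here is a minimal, self-contained Go sketch of the mechanism (this is not the controller's code; the metric name demo_requests_total and the use of client_golang's testutil helper are my own):

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/testutil"
)

func main() {
	// A counter vector with an unbounded "method" label, mirroring the
	// shape of nginx_ingress_controller_requests.
	requests := prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "demo_requests_total", Help: "Requests by HTTP method."},
		[]string{"method"},
	)

	// Every distinct method string silently creates a brand-new time series.
	for _, m := range []string{"GET", "AAA", "AAB", "AAC"} {
		requests.WithLabelValues(m).Inc()
	}

	fmt.Println("series:", testutil.CollectAndCount(requests)) // series: 4
}
```

A client who controls the method string therefore controls how many series the exporter has to track.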
What you expected to happen:
Prometheus best practice is for label values to be bounded, so I expected non-standard HTTP requests to be handled in a way that doesn't affect metrics.
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
This was tested with version 1.8.0
Kubernetes version (use kubectl version):
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.14", GitCommit:"0018fa8af8ffaff22f36d3bd86289753ca61da81", GitTreeState:"clean", BuildDate:"2023-05-17T16:21:16Z", GoVersion:"go1.19.9", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Cloud provider or hardware configuration: AWS EC2
- OS (e.g. from /etc/os-release): Ubuntu 20.04
- Kernel (e.g. uname -a): 5.11.0-1021-aws
- Install tools: kubeadm
- Kubectl info: Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3", GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean", BuildDate:"2023-06-15T02:15:11Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/amd64"}
- How was the ingress-nginx-controller installed: We installed using helm template, with the output stored in a git repo:
helm template --values=- --namespace=ingress-nginx --repo https://kubernetes.github.io/ingress-nginx ingress-nginx ingress-nginx
- Current State of the controller:
The controller is fully functional and serves traffic without problems. The only visible issue is the very high number of metrics being exported.
- Current state of ingress object, if applicable:
All Ingress objects are fine and the controller handles requests correctly.
How to reproduce this issue:
- Deploy controller and add an ingress object.
- Confirm which HTTP methods are exposed as metrics using the following Prometheus query:
sum(nginx_ingress_controller_requests) by (method)
- Make a few HTTP requests with random HTTP methods (a scripted version that demonstrates the explosion at scale follows this list):
curl --request "AAA" https://example.com
curl --request "AAB" https://example.com
curl --request "AAC" https://example.com
curl --request "AAD" https://example.com
- Confirm that new methods now appear as label values:
sum(nginx_ingress_controller_requests{method=~"AA.*"}) by (method)
- The above label values persist until the controller is restarted.
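As referenced above, here is a hedged Go equivalent of the curl commands that generates many unique methods in one run; the target URL is a placeholder for any host served by the ingress, and the generated method names are arbitrary (but syntactically valid) HTTP tokens:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	const target = "https://example.com" // placeholder: any ingress-backed host

	// Each distinct method string becomes a new value of the "method"
	// label in the controller's metrics, i.e. a new time series.
	for i := 0; i < 1000; i++ {
		method := fmt.Sprintf("X%04d", i) // X0000, X0001, ... all valid method tokens
		req, err := http.NewRequest(method, target, nil)
		if err != nil {
			panic(err)
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			continue // transport error: the request may not have reached the controller
		}
		resp.Body.Close()
	}
}
```

After running this, the sum(...) by (method) query above should return on the order of a thousand extra series.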
Anything else we need to know:
The Kubernetes security team's assessment is that this bug doesn't require a CVE.
With regard to the best solution: we could list the valid HTTP methods in the controller code and report any method not matching the allowlist as method="invalid" or similar; a sketch of this approach follows.
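A minimal sketch of that allowlist idea, assuming the standard RFC 9110 method set; the function name normalizeMethod and the exact set are illustrative, not the controller's actual code:

```go
package main

import "fmt"

// Illustrative allowlist of standard HTTP methods (RFC 9110); the real
// controller would define its own canonical set, e.g. including WebDAV verbs.
var allowedMethods = map[string]struct{}{
	"GET": {}, "HEAD": {}, "POST": {}, "PUT": {}, "PATCH": {},
	"DELETE": {}, "CONNECT": {}, "OPTIONS": {}, "TRACE": {},
}

// normalizeMethod collapses any method outside the allowlist into a single
// bounded label value, so metric cardinality stays fixed no matter what
// clients send.
func normalizeMethod(m string) string {
	if _, ok := allowedMethods[m]; ok {
		return m
	}
	return "invalid"
}

func main() {
	for _, m := range []string{"GET", "AAA", "TRACE", "AAB"} {
		fmt.Printf("%s -> %s\n", m, normalizeMethod(m))
	}
}
```

Applying the normalization at the point where the label value is assigned keeps the fix local and leaves request handling itself unchanged.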
Activity
k8s-ci-robot commented on Jul 17, 2023
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
github-actions commented on Aug 17, 2023
This issue is stale, but we won't close it automatically; just bear in mind that the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out on #ingress-nginx-dev on Kubernetes Slack.