Still default status code when defining limit-req-status-code #2250

Closed
akoenig opened this issue Mar 23, 2018 · 11 comments

Comments

@akoenig commented Mar 23, 2018

NGINX Ingress controller version:

0.12.0

Kubernetes version (use kubectl version):

v1.8.6

Environment:

  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:

When defining limit-req-status-code: "429" in the ConfigMap and deploying the DaemonSet afterwards, I still receive a 503 status code when rate limiting kicks in.

What you expected to happen:

The HTTP response status code should be 429 (in this case).

How to reproduce it:

Take this ConfigMap ...

apiVersion: v1
data:
  limit-req-status-code: "429"
  proxy-connect-timeout: "15"
  proxy-read-timeout: "600"
  proxy-send-timeout: "600"
  hsts-include-subdomains: "false"
  body-size: "64m"
  server-name-hash-bucket-size: "256"
kind: ConfigMap
metadata:
  name: nginx-cm
  namespace: ingress

... deploy the DaemonSet and apply an Ingress resource with this annotation: nginx.ingress.kubernetes.io/limit-connections: "1".
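
For reference, a minimal sketch of such an Ingress resource (the name and backend service are placeholders; the apiVersion reflects what Ingress used around Kubernetes 1.8):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress                 # placeholder name
  namespace: ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-connections: "1"
spec:
  rules:
  - host: my-host.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service  # placeholder backend service
          servicePort: 80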

Afterwards, use a tool like vegeta and perform a rate limit test:

echo "GET https://my-host.com/" | vegeta attack -duration=10s | tee results.bin | vegeta report

The output will be:

Requests      [total, rate]            500, 50.10
Duration      [total, attack, wait]    10.409704726s, 9.979999s, 429.705726ms
Latencies     [mean, 50, 95, 99, max]  132.304924ms, 70.759949ms, 402.001642ms, 574.450653ms, 789.396767ms
Bytes In      [total, mean]            106500, 213.00
Bytes Out     [total, mean]            1500, 3.00
Success       [ratio]                  0.00%
Status Codes  [code:count]             503:500  
Error Set:
503 Service Unavailable

The Error Set indicates that the configuration hasn't been applied.

@akoenig (Author) commented Apr 6, 2018

Usually, I'm not a person who sends "any update on this?" comments, but this is killing us in production. So forgive me for being persistent here 🙂 Is there any data I could provide for debugging?

@aledbf (Member) commented Apr 6, 2018

@akoenig I am sorry for the delay. I will take a look at this today.

@akoenig (Author) commented Apr 6, 2018

@aledbf No worries :) Thanks for investigating this problem. 👍

@aledbf (Member) commented Apr 6, 2018

@akoenig how are you testing this? You need to trigger the tests against a large file. Please check https://forum.nginx.org/read.php?2,277820,277825#msg-277825
(NGINX is too fast with small requests to trigger that connection limit 😃)

@aledbf (Member) commented Apr 6, 2018

If you change the annotation to limit-rps: 1, you will see 429 as the error code.
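
For illustration, the change amounts to swapping the annotation on the same (placeholder) Ingress from the reproduction steps; limit-rps maps to nginx's limit_req, which is the directive the limit-req-status-code setting applies to:

metadata:
  annotations:
    # request-rate limiting via nginx limit_req; rejected requests
    # return the configured limit-req-status-code (429 in this case)
    nginx.ingress.kubernetes.io/limit-rps: "1"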

@akoenig (Author) commented Apr 6, 2018

@aledbf Okay, I see. Yeah nginx is a beast 😆 – Hm, wouldn't limit-rps: 1 limit to one connection per second? What I would like to have is max 20 requests per second from one IP address.

@aledbf (Member) commented Apr 6, 2018

limit to one connection per second?

From the nginx docs:

Not all connections are counted. A connection is counted only if it has a request processed by the server and the whole request header has already been read.

limit_conn != limit_req
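
To make the distinction concrete, a sketch of the two annotations side by side (values are illustrative; the comments note which nginx directive each one maps to):

metadata:
  annotations:
    # limit-connections -> nginx limit_conn: caps concurrent connections
    # per client IP; a connection only counts once the request header is read
    # nginx.ingress.kubernetes.io/limit-connections: "1"

    # limit-rps -> nginx limit_req: caps requests per second per client IP
    # and returns the configured limit-req-status-code when exceeded
    nginx.ingress.kubernetes.io/limit-rps: "20"

So for the stated goal of at most 20 requests per second from a single IP address, limit-rps rather than limit-connections appears to be the fitting annotation.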

@akoenig (Author) commented Apr 6, 2018

Ah, I see. Thanks for the heads up! 🙂 – Now I receive a 429. Cool! Just wondering if the rate limiting only works for GET requests. Hitting a rate limit with a POST verb leads to a 404 🙁

@aledbf (Member) commented Apr 6, 2018

Hitting a rate limit with a POST verb leads to a 404 🙁

That's weird (the limits are agnostic to HTTP verbs)

@aledbf (Member) commented Apr 6, 2018

@akoenig I cannot reproduce your issue:

while true; do curl -v http://$(minikube ip):31321/ -H 'Host: foo.bar' -d '' ;done
.....
*   Trying 192.168.39.232...
* TCP_NODELAY set
* Connected to 192.168.39.232 (192.168.39.232) port 31321 (#0)
> POST / HTTP/1.1
> Host: foo.bar
> User-Agent: curl/7.55.1
> Accept: */*
> Content-Length: 0
> Content-Type: application/x-www-form-urlencoded
> 
< HTTP/1.1 429 Too Many Requests
< Server: nginx/1.13.9
< Date: Fri, 06 Apr 2018 17:41:09 GMT
< Content-Type: text/html
< Content-Length: 185
< Connection: keep-alive
* HTTP error before end of send, stop sending
< 
<html>
<head><title>429 Too Many Requests</title></head>
<body bgcolor="white">
<center><h1>429 Too Many Requests</h1></center>
<hr><center>nginx/1.13.9</center>
</body>
</html>
.....

@aledbf (Member) commented Apr 25, 2018

Closing. Please reopen if you still have this issue (and include steps to reproduce it).

@aledbf closed this as completed Apr 25, 2018