keepalive requests are slow #478

Closed
yawnt opened this issue Sep 17, 2013 · 20 comments
@yawnt
Contributor

yawnt commented Sep 17, 2013

see #477 (comment)

@Rush
Contributor

Rush commented Sep 17, 2013

and some failing (probably due to the same issue)

@yawnt
Contributor Author

yawnt commented Sep 17, 2013

i'm not sure if it's related to setting maxSockets (so that the global agent can create more connections)

@yawnt
Contributor Author

yawnt commented Sep 17, 2013

yeah, it's a problem related to connection pooling.. if i disable the agent completely i get 530 req/sec, which is definitely more acceptable..

i'll have to look into the way agents are set up.. maybe there's a bug there..

thanks for reporting!

yawnt added a commit that referenced this issue Sep 17, 2013
@yawnt
Contributor Author

yawnt commented Sep 17, 2013

i've pushed a fix that removes the connection pool.. i'm using wrk for benchmarking (i trust ab very little) and here's what i got

with the default agent it's real slow

┌(yawnt@quetzalcoatl) (node 0.10.18)
╰─(~)> wrk -c 100 -r 100 http://127.0.0.1:8000/
Making 100 requests to http://127.0.0.1:8000/
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    59.10ms   68.88ms 184.79ms   72.73%
    Req/Sec     0.00      0.00     0.00    100.00%
  100 requests in 839.71ms, 23.34KB read
Requests/sec:    119.09

after the patch (please note that v8's JIT still has to warm up, subsequent requests perform around 800+ req/sec)

┌(yawnt@quetzalcoatl) (node 0.10.18)
╰─(~)> wrk -c 100 -r 100 http://127.0.0.1:8000/
Making 100 requests to http://127.0.0.1:8000/
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.00us    0.00us   0.00us  100.00%
    Req/Sec     0.00      0.00     0.00    100.00%
  100 requests in 179.37ms, 21.97KB read
Requests/sec:    557.50

ab behaves very strangely.. one run it measures 500 req/sec, then when i run it again it hangs (at random).. i checked and the connection isn't even received by the node server (if you have any clue about this, please do share :D )

@Rush
Contributor

Rush commented Sep 17, 2013

I think you should first test with -t1 and -c1, which is one thread and a single connection... and there wrk is strangely, terribly slow for me. No idea why... I think ab2 shows good numbers with single-thread, single-connection testing. Furthermore, wrk does not show whether it got valid responses...

@Rush
Contributor

Rush commented Sep 17, 2013

As proof that ab2 is not at fault, please try the following:
# ab2 -c 1 -k -n 10000 http://127.0.0.1:9000/
which tests keep-alive without proxying. You should see a huge number of req/s; for me it is 4600/sec.

@yawnt
Contributor Author

yawnt commented Sep 17, 2013

mm, it doesn't look slow to me.. what is weird is that ab hangs, in the sense that the node server stops receiving requests.. and that happens randomly

┌(yawnt@quetzalcoatl) (node 0.10.18) (git|caronte)
╰─(~/code/caronte)> wrk -t 1 -c 1 -r 100 http://127.0.0.1:8000/
Making 100 requests to http://127.0.0.1:8000/
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.03ms    0.00us   1.03ms  100.00%
    Req/Sec     0.00      0.00     0.00    100.00%
  100 requests in 141.76ms, 21.97KB read
Requests/sec:    705.43

@Rush
Contributor

Rush commented Sep 17, 2013

Well, as I pointed out, it does not hang if it connects directly to the http server. With wrk (the -r option does not work for me) and the latest commit "[fix] pooled connections" afc4d09 I get only about 25 req/s - I have triple-checked that I have the right source and I see your latest commit...

> wrk -t 1 -c 1 http://127.0.0.1:9000
Running 10s test @ http://127.0.0.1:9000
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    40.24ms    2.69ms  69.19ms   99.15%
    Req/Sec    24.89      1.20    25.00     99.15%
  250 requests in 10.00s, 50.24KB read
Requests/sec:     25.00
Transfer/sec:      5.02KB

Compared to connecting directly to the http server:

> wrk -t 1 -c 1 http://127.0.0.1:8000
Running 10s test @ http://127.0.0.1:8000
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.32ms    1.12ms  11.91ms   98.20%
    Req/Sec     0.87k   234.62     1.33k    70.54%
  8209 requests in 10.00s, 1.76MB read
Requests/sec:    820.85
Transfer/sec:    180.36KB

Maybe the pooling is still on?

@Rush
Contributor

Rush commented Sep 17, 2013

I wonder if this is connected: nodejs/node-v0.x-archive#4769 - it was closed only 2 months ago, so perhaps 0.10 still has problems with keep-alive.

@yawnt
Contributor Author

yawnt commented Sep 17, 2013

i'm gonna look into that.. but i tested with both siege

┌(yawnt@quetzalcoatl) (node 0.10.18)
╰─(~)> siege -c 500 --time=1m http://localhost:8000
....
Transactions:     50764 hits <-- this is around 860 req/sec if math isn't failing me
Availability:       100.00 %
Elapsed time:   59.44 secs
...

and weighttp

┌(yawnt@quetzalcoatl) (node 0.10.18)
╰─(~)> weighttp -n 100 -c 1 -k "http://127.0.0.1:8000/"
weighttp - a lightweight and simple webserver benchmarking tool

starting benchmark...
spawning thread #1: 1 concurrent requests, 100 total requests
progress:  10% done
progress:  20% done
progress:  30% done
progress:  40% done
progress:  50% done
progress:  60% done
progress:  70% done
progress:  80% done
progress:  90% done
progress: 100% done

finished in 0 sec, 144 millisec and 731 microsec, 690 req/s, 104 kbyte/s
requests: 100 total, 100 started, 100 done, 16 succeeded, 84 failed, 0 errored
status codes: 100 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 15528 bytes total, 13400 bytes http, 2128 bytes data

@Rush
Contributor

Rush commented Sep 17, 2013

I have just tried with node 0.11 (installed via Node Version Manager). The behaviour in ab2 is different, as in much worse. With wrk, still 25 req/s.

@Rush
Contributor

Rush commented Sep 17, 2013

Also, there is no keep-alive in the headers wrk sends.

@yawnt
Contributor Author

yawnt commented Sep 17, 2013

i'm making progress.. slowly.. it seems to be a bug related to how the CRLF is sent (something that should be node's responsibility).. more likely it's ab being nasty

http://stackoverflow.com/questions/4345602/problem-with-apachebench-ab-and-transfer-encoding-chunked

apparently it treats the chunk length as a header

┌(yawnt@quetzalcoatl) (node 0.10.18)
╰─(~)> ab -v 4 -n 2 -c 1 -k 'http://127.0.0.1:8000/'                                                  8 ↵
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)...INFO: GET header == 
---
GET / HTTP/1.0
Connection: Keep-Alive
Host: 127.0.0.1:8000
User-Agent: ApacheBench/2.3
Accept: */*


---
LOG: header received:
HTTP/1.1 200 OK
content-type: text/plain
date: Tue, 17 Sep 2013 12:59:29 GMT
connection: Keep-Alive
transfer-encoding: chunked


LOG: Response code = 200
LOG: header received:
4
TEST

LOG: header received:
4
TEST
0


WARNING: Response code not 2xx (500)
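The "4" / "TEST" / "0" lines that ab logs as "header received" above are actually HTTP/1.1 chunked framing: each chunk is a hex size, a CRLF, the data, a CRLF, terminated by a zero-size chunk. A minimal decoder sketch (a hypothetical helper, not code from the project) makes that framing explicit:

```javascript
// Decode an HTTP/1.1 chunked body: hex size, CRLF, data, CRLF,
// repeated until a zero-size chunk. ab sent HTTP/1.0, so it has no
// idea about this framing and misreads the chunk sizes as headers.
function decodeChunked(raw) {
  var body = '', rest = raw;
  for (;;) {
    var nl = rest.indexOf('\r\n');
    var size = parseInt(rest.slice(0, nl), 16);
    if (size === 0) return body;
    body += rest.slice(nl + 2, nl + 2 + size);
    rest = rest.slice(nl + 2 + size + 2); // skip chunk data + trailing CRLF
  }
}

// decodeChunked('4\r\nTEST\r\n0\r\n\r\n') → 'TEST'
```

The wire bytes in ab's log ("4", "TEST", "0") are exactly the input this decoder expects.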

@Rush
Contributor

Rush commented Sep 17, 2013

Either ab is nasty or simply stricter about the HTTP standard. I, on the other hand, forced the proxy -> server requests to be "connection: close" and kept client -> proxy as connection: keep-alive, but it did not help, which means you are on the right track. What strikes me as odd: shouldn't this issue also occur when connecting directly to the HTTP server without the proxy? It leads me to believe that it has to be a bug in the proxy.

@yawnt
Contributor Author

yawnt commented Sep 17, 2013

no, because when you query a node.js server with ab it sends an HTTP/1.0 request (which defaults to connection: close and doesn't use chunked encoding).. on the other hand, when i create the request through http.request() it uses connection: keep-alive with HTTP/1.1.. please see nodejs/node-v0.x-archive#940
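The version matters because chunked transfer-encoding only exists in HTTP/1.1; an HTTP/1.0 client has no way to parse the chunk framing at all. A hypothetical helper expressing that rule:

```javascript
// Chunked transfer-encoding is an HTTP/1.1 feature. HTTP/1.0 clients
// (like ab in its default mode) cannot parse chunk framing, so a
// proxy answering them would have to buffer the body and send a
// Content-Length instead.
function canUseChunked(httpVersion) {
  var parts = httpVersion.split('.');
  var major = Number(parts[0]), minor = Number(parts[1]);
  return major > 1 || (major === 1 && minor >= 1);
}

// canUseChunked('1.0') → false; canUseChunked('1.1') → true
```

This is why ab against a plain node server works (node answers the 1.0 request without chunking) while ab against the proxy breaks (the proxied response is a 1.1 chunked stream).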

@yawnt
Contributor Author

yawnt commented Sep 17, 2013

both siege and weighttp use keep-alive and HTTP 1.1 and they seem to work just fine

@Rush
Contributor

Rush commented Sep 17, 2013

A workaround - just to get it working - would be to always set proxyRes.headers.connection = 'close' in web-outgoing.js.
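Sketched out (web-outgoing.js is the project file Rush names, but this helper itself is just an illustration of the idea), the workaround would rewrite the proxied response headers before they go back to the client:

```javascript
// Force every proxied response to close its connection, so HTTP/1.0
// clients like ab never try to reuse a socket whose framing they
// can't parse. The cost: it also kills legitimate keep-alive for
// HTTP/1.1 clients, which is why yawnt rejects it.
function forceConnectionClose(headers) {
  headers.connection = 'close';
  return headers;
}
```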

@yawnt
Contributor Author

yawnt commented Sep 17, 2013

i can't do that.. it would kill every keep-alive connection ever.. just use HTTP/1.1 (which means siege or weighttp to test perf)

thanks for reporting this tho :)

@yawnt yawnt closed this as completed Sep 17, 2013
@Rush
Contributor

Rush commented Sep 17, 2013

I see, so it's simply not worth fixing the use case solely for ab.

@yawnt
Contributor Author

yawnt commented Sep 17, 2013

i'm willing to implement that if you can find a way to send http/1.0 requests via http.request().. if i do it manually (removing transfer-encoding: chunked and setting content-length.. and i tried), it's a huge performance hit
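The manual downgrade yawnt describes would look roughly like this (a hypothetical sketch): the whole body must be buffered before a Content-Length can replace the chunked framing, which is exactly where the performance hit comes from:

```javascript
// Replace chunked framing with an explicit Content-Length. This only
// works once the entire body is in memory, so every response has to
// be buffered before a single byte reaches the client - no streaming.
function downgradeToHttp10Headers(headers, fullBody) {
  var out = {};
  for (var k in headers) out[k] = headers[k];
  delete out['transfer-encoding'];
  out['content-length'] = Buffer.byteLength(fullBody);
  return out;
}
```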
