confusing results when run with -H 'Connection: Close' for non-keepAlive http benchmark #5

Open
xfeep opened this issue Jan 24, 2015 · 12 comments



xfeep commented Jan 24, 2015

Thanks for this excellent tool! We can use it to get more accurate latency measurements.
But when we try to use wrk2 for a non-keep-alive HTTP benchmark, the results are confusing, e.g.

wrk2 -c 32  -t 16 -d 60s -R 60000 -H 'Connection: Close' http://127.0.0.1:8082
Running 1m test @ http://127.0.0.1:8082
  16 threads and 32 connections
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     -nanus    -nanus   0.00us    0.00%
    Req/Sec     0.00      0.00     0.00    100.00%
  0 requests in 1.00m, 2.05GB read
  Socket errors: connect 0, read 1675550, write 0, timeout 0
Requests/sec:      0.00
Transfer/sec:     35.01MB

In this real example, the server is a Jetty web server listening on port 8082.
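The numbers above already hint at the cause: 0 successful requests yet 2.05GB read means responses are arriving but every connection teardown is being counted as a read error. When a server honors `Connection: close`, a read returning 0 bytes (EOF) after a complete response is normal, not a failure. A minimal Python sketch of that life-cycle (a hypothetical one-shot server on an ephemeral port, standing in for Jetty):

```python
import socket
import threading

RESPONSE = (b"HTTP/1.1 200 OK\r\n"
            b"Connection: close\r\n"
            b"Content-Length: 2\r\n\r\nok")

def serve_once(server_sock):
    # Accept one connection, send a response, then close immediately,
    # mimicking a server that honors 'Connection: close'.
    conn, _ = server_sock.accept()
    conn.recv(4096)          # read the request (contents ignored here)
    conn.sendall(RESPONSE)
    conn.close()             # server-initiated close: client sees EOF

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.1\r\nHost: x\r\nConnection: close\r\n\r\n")

data = b""
while True:
    chunk = client.recv(4096)
    if not chunk:            # EOF: normal here, NOT a read error,
        break                # because a full response was received first
    data += chunk
client.close()

assert data.startswith(b"HTTP/1.1 200")
```

A client that counts every EOF as a read error would report this exchange as a failure even though the full response body was delivered, which matches the "0 requests, 2.05GB read" output above.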


xfeep commented Jan 25, 2015

We also tried a small number of connections and requests:

wrk2 -c 16  -t 16 -d 60s -R 16 -H 'Connection: Close' http://127.0.0.1:8082

The result is unchanged.


giltene commented Jan 25, 2015

Looks like lots of read errors, and zero successful requests. Since there are no successful requests, there is no latency information...

What does the original wrk (as opposed to wrk2) produce for the same thing (when used without the -R flag)?

For comparison, I get this on my mac against the built-in apache server:

Lumpy.local-21% wrk -c 16 -t 16 -d 60s -R 16 -H 'Connection: Close' http://127.0.0.1:80/index.html
Running 1m test @ http://127.0.0.1:80/index.html
16 threads and 16 connections
Thread calibration: mean lat.: 5.317ms, rate sampling interval: 14ms
Thread calibration: mean lat.: 5.414ms, rate sampling interval: 14ms
Thread calibration: mean lat.: 5.878ms, rate sampling interval: 15ms
Thread calibration: mean lat.: 5.830ms, rate sampling interval: 15ms
Thread calibration: mean lat.: 5.738ms, rate sampling interval: 14ms
Thread calibration: mean lat.: 5.083ms, rate sampling interval: 13ms
Thread calibration: mean lat.: 5.370ms, rate sampling interval: 14ms
Thread calibration: mean lat.: 4.378ms, rate sampling interval: 11ms
Thread calibration: mean lat.: 4.105ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 4.178ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 4.758ms, rate sampling interval: 12ms
Thread calibration: mean lat.: 3.883ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 3.874ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 4.266ms, rate sampling interval: 11ms
Thread calibration: mean lat.: 4.303ms, rate sampling interval: 11ms
Thread calibration: mean lat.: 3.712ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.13ms    1.00ms   7.90ms   74.56%
    Req/Sec     1.12      9.59    111.00    98.62%
  976 requests in 1.00m, 347.89KB read
Requests/sec:     16.27
Transfer/sec:      5.80KB


xfeep commented Jan 26, 2015

Maybe it is a bug in the original wrk or in Jetty 7, because even with only 1 connection we still get no successful requests.

 wrk -c 1  -t 1 -d 10s  -H 'Connection: Close' http://127.0.0.1:8082
Running 10s test @ http://127.0.0.1:8082
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.00us    0.00us   0.00us    -nan%
    Req/Sec     0.00      0.00     0.00      -nan%
  0 requests in 10.00s, 50.31MB read
  Socket errors: connect 0, read 40117, write 0, timeout 0
Requests/sec:      0.00
Transfer/sec:      5.03MB

When we use curl, it is OK:

curl -v -H 'Connection: Close' http://127.0.0.1:8082
* About to connect() to 127.0.0.1 port 8082 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8082 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8082
> Accept: */*
> Connection: Close
> 
< HTTP/1.1 200 OK
< Date: Sun, 25 Jan 2015 02:29:33 GMT
< Content-Type: text/html;charset=ISO-8859-1
< Connection: close
< Server: Jetty(7.6.13.v20130916)
> ..........................

When we use weighttp without the keep-alive option -k, or when we add the header 'Connection: Close', all requests also fail.


xfeep commented Jan 26, 2015

And when we use ab, it is OK:

$ ab  -c 1  -n 1000  -H 'Connection: close'   http://127.0.0.1:8082/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        Jetty(7.6.13.v20130916)
Server Hostname:        127.0.0.1
Server Port:            8082

Document Path:          /
Document Length:        1163 bytes

Concurrency Level:      1
Time taken for tests:   0.329 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      1318000 bytes
HTML transferred:       1163000 bytes
Requests per second:    3036.90 [#/sec] (mean)
Time per request:       0.329 [ms] (mean)
Time per request:       0.329 [ms] (mean, across all concurrent requests)
Transfer rate:          3908.82 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     0    0   0.1      0       2
Waiting:        0    0   0.1      0       2
Total:          0    0   0.1      0       2

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      0
  98%      0
  99%      0
 100%      2 (longest request)


giltene commented Jan 26, 2015

Is the 1-connection (non-working) run above from wrk or wrk2? (Hard to tell from the command line, because the wrk2 binary is still named "wrk".)

If [the original] wrk doesn't work right with your Jetty setup, I'd open the issue with wrk (https://github.com/wg/wrk). I don't really know that much about wrk's detailed inner workings, and wrk2 is focused purely on the latency measurement part (and the associated rate-limiting need). If it is determined to be a problem in wrk and gets fixed there, I'd happily merge the fix back here.

I'd try the same command line against other web servers before posting, though, as (for example) it seems to work against the built-in Apache on my Mac. And as you note above, your Jetty server works with other load generators, so this may be a wrk+Jetty-specific issue.


xfeep commented Jan 26, 2015

That run was wrk, which does not need the -R option (wrk2 does need it).
In short, with the -H 'Connection: close' option:

  • wrk/wrk2 + jetty fails
  • weighttp + jetty fails
  • ab + jetty ok
  • curl + jetty ok
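The split in this table fits the connection life-cycle: ab and curl open a fresh connection per request, while wrk/wrk2 and weighttp reuse connections and must notice the server-side close and reconnect. A Python sketch of the ab-style behavior (a hypothetical local close-after-response server, standing in for Jetty), where per-request reconnection makes the server's closes harmless:

```python
import socket
import threading

def close_after_response_server(server_sock, n):
    # Serve n connections; answer each request, then close the
    # connection, as a server does when asked for 'Connection: close'.
    for _ in range(n):
        conn, _ = server_sock.accept()
        conn.recv(4096)
        conn.sendall(b"HTTP/1.1 200 OK\r\nConnection: close\r\n"
                     b"Content-Length: 0\r\n\r\n")
        conn.close()

N = 20
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(8)
port = server.getsockname()[1]
threading.Thread(target=close_after_response_server,
                 args=(server, N), daemon=True).start()

ok = 0
for _ in range(N):
    # ab-style: one connection per request, so the server's close
    # simply ends that request instead of breaking a reused socket.
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"GET / HTTP/1.1\r\nHost: x\r\nConnection: close\r\n\r\n")
        reply = b""
        while (chunk := c.recv(4096)):
            reply += chunk
        if reply.startswith(b"HTTP/1.1 200"):
            ok += 1

assert ok == N  # every request succeeds despite per-request closes
```

A load generator that instead keeps sending on the same socket after the server has closed it will see its writes and reads fail, which is consistent with the wrk/weighttp failures reported here.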


xfeep commented Jan 30, 2015

Hi @giltene ,

wrk can be fixed with this patch

I have tried it on wrk2 and it works!


giltene commented Jan 30, 2015

Cool! I assume @wg will commit something into wrk for this soon. I'll apply the same once he does...


xfeep commented Feb 16, 2015

Hi @giltene, the latest wrk source has merged this patch in commit wg/wrk@522ec60.


dfdx commented Jun 14, 2016

Has this issue been fixed? I can see that the corresponding line in net.c has been changed, but I experience the same issue with the latest master of wrk2: I get very good performance without Connection: Close, but with it almost all requests fail.

@janmejay

@xfeep I noticed this, plus close-wait connection accumulation under load (leading to a lower rate as established connections are depleted), with a build off master (as of a few days back). #33 solves that problem; it would be very useful if you could merge it locally (or fetch master from the fork) and see if you still experience the same issue.

@janmejay

I just tried this with an arbitrarily picked website (sourceware.org) and it seems to work correctly (I verified with curl -v that the connection was otherwise left intact, and that Connection: close was indeed respected by the website).

The scenario I fixed was similar: the server was closing connections after writing the response (an opinionated choice made for load-balancing reasons). It didn't need the 'close' header, but the effect was essentially the same as far as the connection life-cycle goes.
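Putting the life-cycle discussion in one place: on EOF a load generator should check whether it already holds a complete response, and it must always close its own end after a server-initiated close, or the socket lingers in CLOSE_WAIT and such sockets accumulate under load. The following is an illustrative Python model of that decision, not wrk2's actual code or the #33 patch:

```python
# Sketch of the EOF classification a load generator needs when the
# server closes connections after each response. Illustrative only.

def on_eof(response_complete: bool) -> tuple:
    """Decide what an EOF (recv() returning b'') means.

    Returns (outcome, next_action). The client must close its own end
    after a server-initiated close; otherwise the socket sits in
    CLOSE_WAIT and, under load, such half-closed sockets pile up.
    """
    if response_complete:
        # Server honored 'Connection: close' after a full response:
        # count a success and reconnect for the next request.
        return ("success", "close_and_reconnect")
    # Connection dropped mid-response: a genuine read error.
    return ("read_error", "close_and_reconnect")

def buggy_on_eof(response_complete: bool) -> tuple:
    # The behavior this issue describes, in the same terms:
    # every EOF is counted as a read error regardless of parser state.
    return ("read_error", "count_error")
```

With this split, the 1,675,550 "read errors" in the first report would presumably be reclassified as completed requests, since each one was a full response followed by a server-side close.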
