
[feature-request] Add option to disable TCP_NODELAY for connection socket #141

Closed
itrofimow opened this issue Oct 15, 2022 · 2 comments

itrofimow (Contributor) commented Oct 15, 2022

Removing 3 lines of code works wonders when running the TechEmpower pipelined plaintext benchmark on a 24-core VM (16 worker threads, 4 ev threads, 6 threads for wrk):
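
For context, the change under discussion boils down to whether the server sets TCP_NODELAY on each accepted connection. A minimal sketch of that call, using plain POSIX sockets rather than the framework's actual code:

```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Enable (or disable) TCP_NODELAY, i.e. turn Nagle's algorithm off (or on),
// for an accepted connection fd. Returns false if the option could not be set.
bool SetTcpNoDelay(int fd, bool enable) {
  const int flag = enable ? 1 : 0;
  return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) == 0;
}
```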

with TCP_NODELAY:

Concurrency: 1024 for plaintext
 wrk -H 'Host: tfb-server' -H 'Accept: text/plain,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 15 -c 1024 --timeout 8 -t 24 http://tfb-server:8090/plaintext -s pipeline.lua -- 16
---------------------------------------------------------
Running 15s test @ http://tfb-server:8090/plaintext
  24 threads and 1024 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    23.35ms   18.20ms 164.49ms   77.32%
    Req/Sec    20.61k     3.52k   78.08k    81.24%
  Latency Distribution
     50%   19.16ms
     75%   31.32ms
     90%   45.71ms
     99%   90.12ms
  7417080 requests in 15.09s, 2.19GB read
Requests/sec: 491395.99
Transfer/sec:    148.56MB

without TCP_NODELAY:

Concurrency: 1024 for plaintext
 wrk -H 'Host: tfb-server' -H 'Accept: text/plain,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 15 -c 1024 --timeout 8 -t 24 http://tfb-server:8090/plaintext -s pipeline.lua -- 16
---------------------------------------------------------
Running 15s test @ http://tfb-server:8090/plaintext
  24 threads and 1024 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    20.91ms   16.76ms 192.81ms   74.69%
    Req/Sec    24.92k     3.27k   93.68k    88.27%
  Latency Distribution
     50%   15.95ms
     75%   28.38ms
     90%   46.78ms
     99%   71.08ms
  8957662 requests in 15.09s, 2.64GB read
Requests/sec: 593597.99
Transfer/sec:    179.45MB

I do understand that the case is somewhat specific, but hey, it's a ~20% speedup (593,598 vs 491,396 req/s)!

P.S. It would be great if someone could double-check me here.
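
As for the requested option itself, one possible shape (the option name and ListenerConfig struct below are illustrative assumptions, not the actual userver configuration or API) would be a per-listener flag controlling whether TCP_NODELAY is applied:

```cpp
// Hypothetical wiring only: the option name and struct are assumptions made
// for illustration, not part of the real userver API.
struct ListenerConfig {
  bool tcp_nodelay{true};  // proposed knob; default keeps today's behaviour
};

void ConfigureAcceptedSocket(int fd, const ListenerConfig& config) {
  if (config.tcp_nodelay) {
    SetTcpNoDelay(fd, /*enable=*/true);  // helper from the sketch above
  }
}
```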

itrofimow (Contributor, Author) commented Oct 15, 2022

flames.zip

Some flamegraphs from samples/hello_service with and without TCP_NODELAY (beware: ~4 MB each, 1 MB .zip).

@itrofimow itrofimow changed the title [feature-request] Add option to disable TCP_NODELAY for connections socket [feature-request] Add option to disable TCP_NODELAY for connection socket Oct 15, 2022
itrofimow (Contributor, Author)

Playing with it some more, I tend to believe that response pipelining in this specific benchmark would render Nagle's algorithm useless, so closing this for now.
