
perf: fairer ops scheduling #3128

Merged
merged 2 commits into denoland:master from bartlomieju:perf-fair_scheduler on Oct 15, 2019

Conversation

@bartlomieju
Contributor

bartlomieju commented Oct 15, 2019

With eager polling introduced in #3046 we got a nice req/sec increase, but at the same time latency suffered heavily.

As suggested by @piscisaureus, we should bail out of eager polling once every N polls to be fairer to other futures, so this PR is a PoC for that mechanism.

Basically, after every 50 eager polls we skip eager polling once and let pending_ops be polled. The threshold has to be tuned experimentally; I chose 50 because it gives <1ms latency without much req/sec impact.
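A minimal sketch of the idea (the names `EagerThrottle` and `EAGER_LIMIT` are illustrative, not the actual deno_core code): keep a per-isolate counter, poll eagerly while it is below the threshold, and otherwise defer the op to pending_ops once.

```rust
// Illustrative sketch of the eager-poll throttle described above;
// the struct and constant names are hypothetical.
const EAGER_LIMIT: u32 = 50;

struct EagerThrottle {
    eager_count: u32,
}

impl EagerThrottle {
    fn new() -> Self {
        EagerThrottle { eager_count: 0 }
    }

    /// Decide whether the next op may be polled eagerly. Once the counter
    /// reaches EAGER_LIMIT we skip eager polling once, giving pending_ops
    /// (and other futures) a chance to run.
    fn poll_eagerly(&mut self) -> bool {
        if self.eager_count < EAGER_LIMIT {
            self.eager_count += 1;
            true
        } else {
            self.eager_count = 0;
            false
        }
    }
}

fn main() {
    let mut t = EagerThrottle::new();
    // Of 51 consecutive ops, 50 are polled eagerly and the 51st is deferred.
    let eager = (0..51).filter(|_| t.poll_eagerly()).count();
    assert_eq!(eager, 50);
    println!("eager polls: {}", eager);
}
```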

Benchmark
$ cargo run --release --example deno_core_http_bench

master

$ third_party/wrk/mac/wrk -d 10s --latency http://127.0.0.1:4544
Running 10s test @ http://127.0.0.1:4544
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   560.87us    2.41ms  45.77ms   95.34%
    Req/Sec    51.42k     4.75k   60.37k    77.72%
  Latency Distribution
     50%   78.00us
     75%   87.00us
     90%  219.00us
     99%   12.66ms
  1033676 requests in 10.10s, 50.28MB read
Requests/sec: 102342.41
Transfer/sec:      4.98MB

this PR

$ third_party/wrk/mac/wrk -d 10s --latency http://127.0.0.1:4544
Running 10s test @ http://127.0.0.1:4544
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   145.89us  186.67us   2.59ms   89.59%
    Req/Sec    48.09k     4.21k   53.19k    75.25%
  Latency Distribution
     50%   73.00us
     75%  129.00us
     90%  348.00us
     99%    0.95ms
  965903 requests in 10.10s, 46.98MB read
Requests/sec:  95634.67
Transfer/sec:      4.65MB
@bartlomieju

Contributor Author

bartlomieju commented Oct 15, 2019

Ultimately we should set our sights on switching to Tokio 0.2 as soon as it comes out. It looks like its new scheduler rocks.

@ry ry requested a review from piscisaureus Oct 15, 2019
}
Op::Sync(buf) => Op::Sync(buf),
} else {
isolate.eager_count = 0;

@piscisaureus

piscisaureus Oct 15, 2019

Collaborator

I think resetting this to 0 should be done when we poll the pending_ops stream. Otherwise the 50th op will not be polled eagerly, but the 51st will.

@bartlomieju

bartlomieju Oct 15, 2019

Author Contributor

Thanks, you're right! It's even better latency now 🎉

$ third_party/wrk/mac/wrk -d 10s --latency http://127.0.0.1:4544
Running 10s test @ http://127.0.0.1:4544
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    97.23us   40.92us   2.09ms   92.81%
    Req/Sec    50.06k     3.38k   52.60k    89.11%
  Latency Distribution
     50%   87.00us
     75%  101.00us
     90%  109.00us
     99%  258.00us
  1005713 requests in 10.10s, 48.92MB read
Requests/sec:  99578.13
Transfer/sec:      4.84MB
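The off-by-one that the review above points out can be seen in a tiny sketch (hypothetical code, not the actual deno_core implementation): with the reset in the `else` branch, the counter drops back to 0 the moment an op is deferred, so the very next op is eager again, rather than the reset waiting until pending_ops is actually polled.

```rust
// Hypothetical reset-in-else variant, with a small limit to keep the
// trace short: op 4 is deferred but op 5 is immediately eager again.
const LIMIT: u32 = 3;

fn poll_reset_in_else(count: &mut u32) -> bool {
    if *count < LIMIT {
        *count += 1;
        true
    } else {
        // Reset happens here even though pending_ops was not polled yet.
        *count = 0;
        false
    }
}

fn main() {
    let mut c = 0;
    let trace: Vec<bool> = (0..5).map(|_| poll_reset_in_else(&mut c)).collect();
    // ops 1-3 eager, op 4 deferred, op 5 eager again: the off-by-one.
    assert_eq!(trace, vec![true, true, true, false, true]);
    println!("{:?}", trace);
}
```

Moving the reset to the point where pending_ops is polled keeps deferring ops until the pending queue has actually had its turn.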
Collaborator

piscisaureus left a comment

👍🏻

@piscisaureus piscisaureus merged commit 54db12c into denoland:master Oct 15, 2019
10 checks passed
  test macOS-10.14
  test_std macOS-10.14
  test windows-2016
  test_std windows-2016
  test ubuntu-16.04
  test_debug ubuntu-16.04
  test_std ubuntu-16.04
  bench ubuntu-16.04
  lint ubuntu-16.04
  license/cla Contributor License Agreement is signed.
@bartlomieju bartlomieju deleted the bartlomieju:perf-fair_scheduler branch Oct 15, 2019