WIP: Max latency #1975

Merged: 6 commits into denoland:master on Mar 25, 2019
Conversation

bernard-lin
Contributor

This is a PR for #1963. Is there any way to check if this works without building to master?

@CLAassistant

CLAassistant commented Mar 20, 2019

CLA assistant check
All committers have signed the CLA.

@ry
Member

ry commented Mar 20, 2019

Thanks! There's an error in CI:

http_benchmark testing RUST hyper.
Listening on http://127.0.0.1:4544
third_party/wrk/linux/wrk -d 10s http://127.0.0.1:4544/
Running 10s test @ http://127.0.0.1:4544/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   596.15us    6.48ms 117.12ms   99.03%
    Req/Sec    48.39k     1.14k   57.79k    84.65%
  972350 requests in 10.10s, 81.60MB read
Requests/sec:  96274.62
Transfer/sec:      8.08MB
Traceback (most recent call last):
  File "./tools/benchmark.py", line 224, in <module>
    main(sys.argv)
  File "./tools/benchmark.py", line 211, in main
    new_data["req_per_sec"] = stats["req_per_sec"]
KeyError: 'req_per_sec'

tools/util.py Outdated

        line)
    if stats['max_latency'] is None:
        stats['max_latency'] = extract_number(
            r'Latency(?:\s+(\d+).\d+[a-z]+){3}', line)
Member

Note that the latency can be expressed in different units: "s" (seconds), "ms" (milliseconds), "us" (microseconds).
Before returning the value, you should figure out the unit and convert to milliseconds.
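A minimal sketch of the kind of conversion being described, assuming the helper is handed the raw latency token from the wrk output (e.g. "117.12ms"); the function name and unit table are illustrative, not the merged implementation:

import re

def latency_to_ms(raw):
    # Split the numeric part from the unit suffix, e.g. "596.15us" -> (596.15, "us").
    match = re.match(r'([0-9.]+)([a-z]+)', raw)
    value, unit = float(match.group(1)), match.group(2)
    # Normalize everything to milliseconds.
    factors = {'us': 1e-3, 'ms': 1.0, 's': 1e3}
    return value * factors[unit]

# Example: latency_to_ms("117.12ms") == 117.12, latency_to_ms("596.15us") == 0.59615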

Contributor Author

ah good point. noted, thanks!

@ry
Member

ry commented Mar 20, 2019

@bernard-lin Thanks! You can test locally by running "tools/benchmark.py", then running "tools/http_server.py" and visiting http://localhost:4545/website/benchmarks.html

@kerm1it
Contributor

kerm1it commented Mar 20, 2019

@bernard-lin, you should add req_per_sec and max_latency keys to the dict returned in http_benchmark.py:

def http_benchmark(deno_exe, hyper_hello_exe, core_http_bench_exe):
    r = {}
    # TODO Rename to "deno_tcp"
    r["deno"] = deno_http_benchmark(deno_exe)
    r["deno_net_http"] = deno_net_http_benchmark(deno_exe)
    r["deno_core_single"] = deno_core_single(core_http_bench_exe)
    r["deno_core_multi"] = deno_core_multi(core_http_bench_exe)
    r["node"] = node_http_benchmark()
    r["node_tcp"] = node_tcp_benchmark()
    r["hyper"] = hyper_http_benchmark(hyper_hello_exe)
    return r

Example:

def http_benchmark(deno_exe, hyper_hello_exe, core_http_bench_exe):
    r = {}
    m = {}
    # TODO Rename to "deno_tcp"
    denoHttp = deno_http_benchmark(deno_exe)
    denoNetHttp = deno_net_http_benchmark(deno_exe)
    denoCoreSingle = deno_core_single(core_http_bench_exe)
    denoCoreMulti = deno_core_multi(core_http_bench_exe)
    nodeHttp = node_http_benchmark()
    nodeTcp = node_tcp_benchmark()
    hyperHttp = hyper_http_benchmark(hyper_hello_exe)

    r["deno"] = denoHttp["req_per_sec"]
    r["deno_net_http"] = denoNetHttp["req_per_sec"]
    r["deno_core_single"] = denoCoreSingle["req_per_sec"]
    r["deno_core_multi"] = denoCoreMulti["req_per_sec"]
    r["node"] = nodeHttp["req_per_sec"]
    r["node_tcp"] = nodeTcp["req_per_sec"]
    r["hyper"] = hyperHttp["req_per_sec"]

    m["deno"] = denoHttp["max_latency"]
    m["deno_net_http"] = denoNetHttp["max_latency"]
    m["deno_core_single"] = denoCoreSingle["max_latency"]
    m["deno_core_multi"] = denoCoreMulti["max_latency"]
    m["node"] = nodeHttp["max_latency"]
    m["node_tcp"] = nodeTcp["max_latency"]
    m["hyper"] = hyperHttp["max_latency"]

    return { "req_per_sec": r, "max_latency": m }

@bernard-lin
Contributor Author

bernard-lin commented Mar 20, 2019

Thanks @kermit-xuan! I'm just realizing now that I missed that. I'm not too familiar with Python - would that result in calling each benchmark function (e.g. deno_http_benchmark(deno_exe)) twice? Or would it use the same result?

@kerm1it
Contributor

kerm1it commented Mar 20, 2019

You're right, each benchmark should be called once and its result reused.

@@ -82,8 +82,9 @@ def parse_unit_test_output_test():
def parse_wrk_output_test():
print "Testing util.parse_wrk_output_test()..."
f = open(os.path.join(util.root_path, "tools/testdata/wrk1.txt"))
Member
It would be cool to test another example where the output uses units other than "ms".

Maybe you can use this as wrk2.txt:

Running 10s test @ http://127.0.0.1:4544/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   402.90us    1.15ms  1.25us   94.86%
    Req/Sec    26.86k     2.01k   31.81k    78.71%
  539721 requests in 10.10s, 26.25MB read
Requests/sec:  53435.75
Transfer/sec:      2.60MB
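If it helps, a hedged sketch of the corresponding assertions for parse_wrk_output_test, mirroring the existing wrk1.txt test; the expected values below (53435 requests/sec, and 1.25us max latency converted to 0.00125 ms) are derived from the suggested output above and are assumptions, not the assertions that were actually merged:

f2 = open(os.path.join(util.root_path, "tools/testdata/wrk2.txt"))
stats2 = util.parse_wrk_output(f2.read())
assert stats2['req_per_sec'] == 53435
assert stats2['max_latency'] == 0.00125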

@ry
Member

ry commented Mar 23, 2019

Any updates on this? I'd love to land this feature.

@bernard-lin
Contributor Author

will finish this tonight!

@bernard-lin
Contributor Author

@ry - ready for another look


for key, value in dic.items():
r[key] = value["req_per_sec"]
m[key] = value["max_latency"]
Contributor

It feels weird to munge the return types here rather than do:

new_data["req_per_sec"] = {k: v["req_per_sec"] for k, v in stats.items()}
new_data["max_latency"] = {k: v["max_latency"] for k, v in stats.items()}

in benchmark.py.

hyperHttp = hyper_http_benchmark(hyper_hello_exe)

dic = {
"deno": denoHttp,
Contributor

You don't need to create these temp variables; why not build the dict directly:

return {
  "deno": deno_http_benchmark(deno_exe)
  ...
}

that seems a little cleaner.
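For illustration, a hedged sketch of what the suggested refactor could look like, combining the benchmark names from the earlier snippet with the dict comprehensions proposed above; the exact merged code may differ:

def http_benchmark(deno_exe, hyper_hello_exe, core_http_bench_exe):
    # Each value is the stats dict produced by the individual benchmark,
    # e.g. {"req_per_sec": ..., "max_latency": ...}.
    return {
        "deno": deno_http_benchmark(deno_exe),  # TODO Rename to "deno_tcp"
        "deno_net_http": deno_net_http_benchmark(deno_exe),
        "deno_core_single": deno_core_single(core_http_bench_exe),
        "deno_core_multi": deno_core_multi(core_http_bench_exe),
        "node": node_http_benchmark(),
        "node_tcp": node_tcp_benchmark(),
        "hyper": hyper_http_benchmark(hyper_hello_exe),
    }

Then, in benchmark.py:

stats = http_benchmark(deno_exe, hyper_hello_exe, core_http_bench_exe)
new_data["req_per_sec"] = {k: v["req_per_sec"] for k, v in stats.items()}
new_data["max_latency"] = {k: v["max_latency"] for k, v in stats.items()}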

Contributor Author

thanks! it does look much cleaner

tools/util.py Outdated
@@ -358,6 +358,20 @@ def extract_number(pattern, string):
    return int(matches[0])


def extract_latency(pattern, string):
Member

Please add a comment or rename the function to indicate that the returned value is in milliseconds.
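A hypothetical illustration of this request; the name extract_max_latency_ms and the body below are assumptions, not the code that was merged:

def extract_max_latency_ms(pattern, string):
    # Returns the max latency matched by `pattern`, converted to milliseconds.
    # Assumes the pattern captures the full latency token including its unit,
    # e.g. "117.12ms", and reuses the hypothetical latency_to_ms helper
    # sketched earlier to normalize it.
    matches = re.findall(pattern, string)
    if len(matches) != 1:
        return None
    return latency_to_ms(matches[0])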

f3 = open(os.path.join(util.root_path, "tools/testdata/wrk3.txt"))
stats3 = util.parse_wrk_output(f3.read())
assert stats3['req_per_sec'] == 96037
assert stats3['max_latency'] == 1630.0
Member

Cool, nice work

@ry ry left a comment (Member)

LGTM - thanks - let’s try it

@ry ry merged commit 3cc90d9 into denoland:master Mar 25, 2019
@bernard-lin
Contributor Author

Sounds good. Looking forward to your talk on Thurs!
