WIP: Max latency #1975
Conversation
Thanks! There's an error in CI.

tools/util.py (Outdated)

    if stats['max_latency'] is None:
        stats['max_latency'] = extract_number(
            r'Latency(?:\s+(\d+).\d+[a-z]+){3}', line)
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Note that the latency can be expressed in different units: "s" (seconds), "ms" (milliseconds), "us" (microseconds).
Before returning the value, you should figure out the unit and convert to milliseconds.
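A conversion along those lines could look like the following sketch; the helper name and the unit table are illustrative, not the PR's actual API:

```python
import re

# Illustrative sketch (not the PR's actual helper): parse a wrk latency
# token such as "402.90us", "1.15ms", or "10.10s" and return the value
# converted to milliseconds.
UNIT_TO_MS = {"us": 0.001, "ms": 1.0, "s": 1000.0}

def latency_to_ms(token):
    match = re.match(r"(\d+(?:\.\d+)?)([a-z]+)", token)
    value, unit = float(match.group(1)), match.group(2)
    return value * UNIT_TO_MS[unit]
```

For example, `latency_to_ms("1.15ms")` returns 1.15, while `"10.10s"` comes back as roughly 10100.0 ms.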
Ah, good point. Noted, thanks!
@bernard-lin Thanks! You can test locally by running "tools/benchmark.py"; then, if you run "tools/http_server.py" and visit http://localhost:4545/website/benchmarks.html, you can see the results.
@bernard-lin, you should also update this function:

    def http_benchmark(deno_exe, hyper_hello_exe, core_http_bench_exe):
        r = {}
        # TODO Rename to "deno_tcp"
        r["deno"] = deno_http_benchmark(deno_exe)
        r["deno_net_http"] = deno_net_http_benchmark(deno_exe)
        r["deno_core_single"] = deno_core_single(core_http_bench_exe)
        r["deno_core_multi"] = deno_core_multi(core_http_bench_exe)
        r["node"] = node_http_benchmark()
        r["node_tcp"] = node_tcp_benchmark()
        r["hyper"] = hyper_http_benchmark(hyper_hello_exe)
        return r

Example:

    def http_benchmark(deno_exe, hyper_hello_exe, core_http_bench_exe):
        r = {}
        m = {}
        # TODO Rename to "deno_tcp"
        denoHttp = deno_http_benchmark(deno_exe)
        denoNetHttp = deno_net_http_benchmark(deno_exe)
        denoCoreSingle = deno_core_single(core_http_bench_exe)
        denoCoreMulti = deno_core_multi(core_http_bench_exe)
        nodeHttp = node_http_benchmark()
        nodeTcp = node_tcp_benchmark()
        hyperHttp = hyper_http_benchmark(hyper_hello_exe)
        r["deno"] = denoHttp["req_per_sec"]
        r["deno_net_http"] = denoNetHttp["req_per_sec"]
        r["deno_core_single"] = denoCoreSingle["req_per_sec"]
        r["deno_core_multi"] = denoCoreMulti["req_per_sec"]
        r["node"] = nodeHttp["req_per_sec"]
        r["node_tcp"] = nodeTcp["req_per_sec"]
        r["hyper"] = hyperHttp["req_per_sec"]
        m["deno"] = denoHttp["max_latency"]
        m["deno_net_http"] = denoNetHttp["max_latency"]
        m["deno_core_single"] = denoCoreSingle["max_latency"]
        m["deno_core_multi"] = denoCoreMulti["max_latency"]
        m["node"] = nodeHttp["max_latency"]
        m["node_tcp"] = nodeTcp["max_latency"]
        m["hyper"] = hyperHttp["max_latency"]
        return {"req_per_sec": r, "max_latency": m}
Thanks @kermit-xuan! I'm just realizing now that I missed that. I'm not too familiar with Python; would that result in calling each benchmark function (e.g. deno_http_benchmark(deno_exe)) twice, or would it reuse the same result?

You are right; it should call each function once and reuse the result.
    @@ -82,8 +82,9 @@ def parse_unit_test_output_test():
    def parse_wrk_output_test():
        print "Testing util.parse_wrk_output_test()..."
        f = open(os.path.join(util.root_path, "tools/testdata/wrk1.txt"))
It would be cool to test another example where the output uses units other than "ms".
Maybe you can use this as wrk2.txt:

    Running 10s test @ http://127.0.0.1:4544/
      2 threads and 10 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   402.90us    1.15ms   1.25us   94.86%
        Req/Sec    26.86k     2.01k   31.81k    78.71%
      539721 requests in 10.10s, 26.25MB read
    Requests/sec:  53435.75
    Transfer/sec:      2.60MB
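For reference, the Max column can be pulled out of that table with a regex along these lines (a sketch, not the PR's actual parsing code):

```python
import re

WRK_SAMPLE = """Running 10s test @ http://127.0.0.1:4544/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   402.90us    1.15ms   1.25us   94.86%
    Req/Sec    26.86k     2.01k   31.81k    78.71%
"""

# Sketch: find the "Latency" row of the Thread Stats table and return
# its third field, which is the Max column (unit suffix still attached).
def max_latency_token(wrk_output):
    for line in wrk_output.splitlines():
        match = re.match(r"\s*Latency\s+(\S+)\s+(\S+)\s+(\S+)", line)
        if match:
            return match.group(3)
    return None
```

On the sample above this returns the string "1.25us", which would then still need the unit conversion discussed earlier.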
Any updates on this? I'd love to land this feature.

Will finish this tonight!
@ry - ready for another look |
tools/http_benchmark.py (Outdated)

    for key, value in dic.items():
        r[key] = value["req_per_sec"]
        m[key] = value["max_latency"]
It feels weird to munge the return types here rather than do:

    new_data["req_per_sec"] = {k: v["req_per_sec"] for k, v in stats.items()}
    new_data["max_latency"] = {k: v["max_latency"] for k, v in stats.items()}

in benchmark.py.
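For illustration, that reshaping behaves like this (the numbers below are made up; the real stats come from the benchmark runs):

```python
# Made-up per-target stats in the shape http_benchmark() would return;
# the values are illustrative only.
stats = {
    "deno": {"req_per_sec": 25000, "max_latency": 1.6},
    "node": {"req_per_sec": 34000, "max_latency": 2.3},
}

# The reshaping suggested for benchmark.py: one dict comprehension per
# metric, keyed by benchmark target.
new_data = {}
new_data["req_per_sec"] = {k: v["req_per_sec"] for k, v in stats.items()}
new_data["max_latency"] = {k: v["max_latency"] for k, v in stats.items()}
```

This keeps http_benchmark() returning one plain dict per target and moves the metric-by-metric grouping to the one place that needs it.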
tools/http_benchmark.py (Outdated)

    hyperHttp = hyper_http_benchmark(hyper_hello_exe)

    dic = {
        "deno": denoHttp,
You don't need to create these temp variables; why not use a dict literal:

    return {
        "deno": deno_http_benchmark(deno_exe)
        ...
    }

That seems a little cleaner.
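Put together with the earlier point about calling each benchmark only once, that style looks roughly like this (stub benchmark functions with made-up numbers stand in for the real ones, which shell out to wrk):

```python
# Stand-in stubs with made-up numbers; the real functions run wrk and
# parse its output.
def deno_http_benchmark(deno_exe):
    return {"req_per_sec": 25000, "max_latency": 1.6}

def node_http_benchmark():
    return {"req_per_sec": 34000, "max_latency": 2.3}

# Dict literal: each benchmark function is called exactly once, and the
# whole stats dict for each target is kept intact for the caller.
def http_benchmark(deno_exe):
    return {
        "deno": deno_http_benchmark(deno_exe),
        "node": node_http_benchmark(),
    }
```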
Thanks! It does look much cleaner.
tools/util.py (Outdated)

    @@ -358,6 +358,20 @@ def extract_number(pattern, string):
        return int(matches[0])


    def extract_latency(pattern, string):
Please add a comment or rename the function to indicate that the returned value is in milliseconds.
    f3 = open(os.path.join(util.root_path, "tools/testdata/wrk3.txt"))
    stats3 = util.parse_wrk_output(f3.read())
    assert stats3['req_per_sec'] == 96037
    assert stats3['max_latency'] == 1630.0
Cool, nice work
LGTM - thanks - let’s try it
Sounds good. Looking forward to your talk on Thurs!
This is a PR for #1963. Is there any way to check whether this works without building against master?