Performance benchmark, profiling and optimization #89

Closed
iamqizhao opened this issue Mar 2, 2015 · 6 comments

@iamqizhao (Contributor)

We can start with a microbenchmark, but eventually we need a benchmark where the client and server run as separate processes.
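
For concreteness, a minimal sketch of what a first microbenchmark could look like with Go's standard testing package; doUnaryCall here is a hypothetical placeholder for issuing one unary RPC against an in-process server, not an actual grpc-go API:

```go
package bench

import "testing"

// doUnaryCall is a hypothetical stand-in for a single unary RPC round trip,
// e.g. through a generated client stub talking to a server on a loopback listener.
func doUnaryCall() error {
	// ... call the client stub here ...
	return nil
}

// BenchmarkUnaryCall reports calls per second and allocations per call,
// issuing RPCs from GOMAXPROCS goroutines in parallel.
func BenchmarkUnaryCall(b *testing.B) {
	b.ReportAllocs()
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			if err := doUnaryCall(); err != nil {
				b.Error(err)
				return
			}
		}
	})
}
```

Run with `go test -bench=UnaryCall`; a multi-process benchmark would need a separate harness outside the testing package.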

@codahale commented Mar 8, 2015

I did some initial benchmarking of a gRPC service here, including profiling: https://gist.github.com/codahale/b3db28bfa3f7dd59d048.

It’s slower than net/http, and @kenkeiter and I think it’s probably due to IO contention. net/http uses bufio to buffer writes; grpc-go, on the other hand, uses a bare net.Conn.
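
For illustration, a hedged sketch of the difference being described: wrapping the connection in a bufio.Writer so that many small frame writes are coalesced into fewer syscalls, with one Flush closing the batch. The writeFrames helper and the 32 KiB buffer size are assumptions for the sketch, not grpc-go's actual transport code:

```go
package sketch

import (
	"bufio"
	"net"
)

// writeFrames illustrates buffered writes: instead of issuing one syscall per
// small frame on the bare net.Conn, frames are copied into a 32 KiB buffer
// and written out in larger chunks, with a single Flush at the end of the batch.
func writeFrames(conn net.Conn, frames [][]byte) error {
	w := bufio.NewWriterSize(conn, 32*1024)
	for _, f := range frames {
		if _, err := w.Write(f); err != nil {
			return err
		}
	}
	return w.Flush()
}
```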

@iamqizhao (Contributor, Author)

Thanks for benchmarking this. Per my reply to #108, this is because we have not done any IO batching so far; that is the focus of performance optimization over the next couple of months.

It would be highly appreciated if you could open a pull request wrapping up what you have done, to kick off the benchmark work on GitHub.
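
As a rough illustration of what IO batching could mean here (this is only a sketch of one common pattern, not necessarily the scheme grpc-go later adopted): a single writer goroutine drains frames queued by concurrent streams into a bufio.Writer and flushes only when the queue momentarily empties, so many small frames share one write to the connection.

```go
package sketch

import "bufio"

// writeLoop drains frames enqueued by concurrent streams into a buffered
// writer, flushing only when no more frames are immediately available.
func writeLoop(w *bufio.Writer, frames <-chan []byte) error {
	for f := range frames {
		if _, err := w.Write(f); err != nil {
			return err
		}
	drain:
		// Keep copying frames into the buffer while more are already queued.
		for {
			select {
			case more, ok := <-frames:
				if !ok {
					break drain
				}
				if _, err := w.Write(more); err != nil {
					return err
				}
			default:
				break drain
			}
		}
		// Flush once the queue is momentarily empty, so frames from many
		// concurrent RPCs share a single syscall.
		if err := w.Flush(); err != nil {
			return err
		}
	}
	return w.Flush()
}
```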

@codahale commented Mar 9, 2015

It’s not much, but the code’s here: https://github.com/codahale/grpc-example.

I just spun that up on two m3.2xlarge instances.

@iamqizhao (Contributor, Author)

Okay, I am going to try to push out a basic benchmark framework this week, so that all contributors can experiment with various performance optimization ideas on common ground.
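
For reference, the per-interval numbers in the next comment ("N: X op/sec @ p99=Yms") are the shape of output a closed-loop load generator would produce if it ramped up concurrency over time, say one extra worker every 10 seconds, and reported each interval's throughput and 99th-percentile latency. A rough sketch, assuming a hypothetical doCall placeholder for a single RPC (this is not the framework that was eventually checked in):

```go
package main

import (
	"log"
	"sort"
	"sync"
	"time"
)

// doCall stands in for one unary RPC round trip.
func doCall() { time.Sleep(100 * time.Microsecond) }

func main() {
	var (
		mu        sync.Mutex
		latencies []time.Duration
	)
	record := func(d time.Duration) {
		mu.Lock()
		latencies = append(latencies, d)
		mu.Unlock()
	}
	worker := func() {
		for {
			start := time.Now()
			doCall()
			record(time.Since(start))
		}
	}

	for round := 1; ; round++ {
		go worker() // one more concurrent worker each round
		time.Sleep(10 * time.Second)

		// Swap out this interval's samples and summarize them.
		mu.Lock()
		sample := latencies
		latencies = nil
		mu.Unlock()

		sort.Slice(sample, func(i, j int) bool { return sample[i] < sample[j] })
		p99 := sample[len(sample)*99/100]
		opsPerSec := float64(len(sample)) / 10.0
		log.Printf("%d: %f op/sec @ p99=%fms",
			round, opsPerSec, float64(p99)/float64(time.Millisecond))
	}
}
```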

@iamqizhao (Contributor, Author)

I made some improvements on the client side (not checked in yet) and already got a significant improvement:

http client:
2015/03/11 15:39:40 1: 5736.642904 op/sec @ p99=0.336000ms
2015/03/11 15:39:50 2: 11015.068698 op/sec @ p99=0.321000ms
2015/03/11 15:40:00 3: 17261.079718 op/sec @ p99=0.424000ms
2015/03/11 15:40:10 4: 20806.615173 op/sec @ p99=0.674000ms
2015/03/11 15:40:20 5: 917.775790 op/sec @ p99=7.762000ms

improved grpc client:
2015/03/11 15:28:20 1: 3474.533592 op/sec @ p99=0.453000ms
2015/03/11 15:28:30 2: 8111.900305 op/sec @ p99=0.657000ms
2015/03/11 15:28:40 3: 14023.490286 op/sec @ p99=0.663000ms
2015/03/11 15:28:50 4: 21179.714876 op/sec @ p99=0.617000ms
2015/03/11 15:29:00 5: 27564.232005 op/sec @ p99=0.612000ms
2015/03/11 15:29:10 6: 33875.344691 op/sec @ p99=0.603000ms
2015/03/11 15:29:20 7: 39543.547520 op/sec @ p99=0.616000ms
2015/03/11 15:29:30 8: 43567.770711 op/sec @ p99=0.668000ms
2015/03/11 15:29:40 9: 46393.655755 op/sec @ p99=0.711000ms
2015/03/11 15:29:50 10: 47250.902113 op/sec @ p99=0.796000ms
2015/03/11 15:30:00 11: 47733.268011 op/sec @ p99=0.961000ms
2015/03/11 15:30:10 12: 47531.488503 op/sec @ p99=1.130000ms
2015/03/11 15:30:20 13: 50293.363756 op/sec @ p99=0.938000ms
2015/03/11 15:30:30 14: 50614.134504 op/sec @ p99=1.006000ms
2015/03/11 15:30:40 15: 50902.922158 op/sec @ p99=1.061000ms
2015/03/11 15:30:50 16: 51342.060561 op/sec @ p99=1.103000ms
2015/03/11 15:31:00 17: 51172.659114 op/sec @ p99=1.200000ms
2015/03/11 15:31:10 18: 51340.048872 op/sec @ p99=1.230000ms
2015/03/11 15:31:20 19: 51575.161160 op/sec @ p99=1.277000ms
2015/03/11 15:31:30 20: 51478.429739 op/sec @ p99=1.376000ms
2015/03/11 15:31:40 21: 49980.940379 op/sec @ p99=1.738000ms
2015/03/11 15:31:50 22: 51071.717198 op/sec @ p99=1.681000ms
2015/03/11 15:32:00 23: 52140.282287 op/sec @ p99=1.553000ms
2015/03/11 15:32:10 24: 52124.439668 op/sec @ p99=1.593000ms
2015/03/11 15:32:20 25: 52222.698417 op/sec @ p99=1.646000ms
2015/03/11 15:32:30 26: 52365.708092 op/sec @ p99=1.795000ms
2015/03/11 15:32:40 27: 52629.151019 op/sec @ p99=1.771000ms
2015/03/11 15:32:50 28: 52815.300753 op/sec @ p99=1.857000ms
2015/03/11 15:33:00 29: 53304.403154 op/sec @ p99=1.839000ms
2015/03/11 15:33:10 30: 53081.078234 op/sec @ p99=1.986000ms
2015/03/11 15:33:20 31: 53571.344040 op/sec @ p99=1.945000ms
2015/03/11 15:33:30 32: 53348.888919 op/sec @ p99=2.008000ms
2015/03/11 15:33:40 33: 53429.124904 op/sec @ p99=2.147000ms
2015/03/11 15:33:50 34: 53687.364050 op/sec @ p99=2.196000ms
2015/03/11 15:34:00 35: 54029.536526 op/sec @ p99=2.184000ms
2015/03/11 15:34:10 36: 53837.964570 op/sec @ p99=2.264000ms
2015/03/11 15:34:20 37: 53558.114539 op/sec @ p99=2.358000ms
2015/03/11 15:34:31 38: 54276.048533 op/sec @ p99=2.395000ms
2015/03/11 15:34:41 39: 54714.463152 op/sec @ p99=2.424000ms
2015/03/11 15:34:51 40: 54343.651778 op/sec @ p99=2.501000ms
2015/03/11 15:35:01 41: 54334.642340 op/sec @ p99=2.557000ms
2015/03/11 15:35:11 42: 52279.954783 op/sec @ p99=3.224000ms
2015/03/11 15:35:21 43: 54623.572538 op/sec @ p99=2.759000ms
2015/03/11 15:35:31 44: 55033.028392 op/sec @ p99=2.729000ms
2015/03/11 15:35:41 45: 55097.426098 op/sec @ p99=2.797000ms
2015/03/11 15:35:51 46: 54736.133552 op/sec @ p99=2.973000ms
2015/03/11 15:36:01 47: 54809.898913 op/sec @ p99=2.926000ms
2015/03/11 15:36:11 48: 55465.691515 op/sec @ p99=3.009000ms
2015/03/11 15:36:21 49: 55142.363729 op/sec @ p99=3.053000ms
2015/03/11 15:36:31 50: 54523.644725 op/sec @ p99=3.341000ms
2015/03/11 15:36:41 51: 51818.090431 op/sec @ p99=3.874000ms
2015/03/11 15:36:51 52: 55958.880606 op/sec @ p99=3.230000ms
2015/03/11 15:37:02 53: 55770.858429 op/sec @ p99=3.363000ms
2015/03/11 15:37:12 54: 55643.909388 op/sec @ p99=3.520000ms
2015/03/11 15:37:22 55: 53500.566081 op/sec @ p99=3.894000ms
2015/03/11 15:37:32 56: 55789.631428 op/sec @ p99=3.722000ms
2015/03/11 15:37:42 57: 56027.047875 op/sec @ p99=3.556000ms
2015/03/11 15:37:52 58: 56038.120343 op/sec @ p99=3.638000ms
2015/03/11 15:38:02 59: 56292.927614 op/sec @ p99=3.793000ms
2015/03/11 15:38:12 60: 55018.049146 op/sec @ p99=4.073000ms
2015/03/11 15:38:22 61: 55992.929617 op/sec @ p99=4.024000ms
2015/03/11 15:38:32 62: 55822.072635 op/sec @ p99=4.004000ms
2015/03/11 15:38:42 63: 50557.821204 op/sec @ p99=5.170000ms

I am going to:
i) investigate the p99 latency increase when concurrency is 1, 2, and 3;
ii) improve server-side IO.
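
As a side note on item (i), one easy way to profile the running benchmark is to expose the standard net/http/pprof endpoints in the binary; the listen address below is arbitrary and this is just a sketch, not part of the benchmark framework itself:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on DefaultServeMux
	"runtime"
)

func main() {
	// Sample blocking events so lock and IO contention show up in profiles.
	runtime.SetBlockProfileRate(1)

	// While the load generator runs, grab a CPU profile with e.g.:
	//   go tool pprof http://localhost:6060/debug/pprof/profile
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```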

@hsaliak commented Sep 12, 2016

@iamqizhao, can we close this issue, given that we now have such a framework in place?

lock bot locked as resolved and limited conversation to collaborators on Sep 26, 2018