
Conversation

zhangkun83 (Contributor) commented Nov 7, 2019

This would reduce the number of direct buffer allocations, especially under light traffic, and should mitigate internal issue b/143075435.

The change is currently optional and only takes effect if the system property "io.grpc.netty.useCustomAllocator" is set to "true" (case-insensitive).
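
For anyone who wants to try the option, here is a minimal sketch of opting in via that system property. It is assumed the property must be set before grpc-netty first selects its allocator, and the channel target below is just a placeholder:

```java
import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;

public final class CustomAllocatorExample {
  public static void main(String[] args) {
    // Equivalent to passing -Dio.grpc.netty.useCustomAllocator=true on the JVM
    // command line. Setting it before any channel is built ensures grpc-netty
    // sees the value when it first picks its ByteBufAllocator.
    System.setProperty("io.grpc.netty.useCustomAllocator", "true");

    // "localhost:50051" is a placeholder target for illustration only.
    ManagedChannel channel = NettyChannelBuilder.forTarget("localhost:50051")
        .usePlaintext()
        .build();

    // ... issue RPCs as usual ...

    channel.shutdownNow();
  }
}
```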

Internal benchmark results (median of 5 runs) don't show any significant change:

                          Before (STDEV)           After (STDEV)
grpc-java-java-multi-qps-integrity_only
Actual QPS               717,848 (7,445)         715,061 (2,122) 
QPS per Client CPU        23,768   (799)          23,842   (295)

grpc-java-java-multi-throughput-integrity_only
Actual QPS                35,631   (204)          35,298    (25) 
QPS per Client CPU         3,362    (56)           3,316    (18)

grpc-java-java-single-latency-integrity_only
Median latency (us)          130  (1.82)             125  (5.36)

grpc-java-java-single-throughput-integrity_only
Actual QPS                    593 (5.14)             587  (3.76)
QPS per Client CPU            502 (4.51)             494  (6.92)
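
For context on the chunk-size reduction from 16MB to 2MB (per the PR title): Netty's pooled allocator sizes its chunks as pageSize << maxOrder, so the default 8 KiB pages with maxOrder 11 give 16 MiB chunks, while maxOrder 8 gives 2 MiB. Below is a rough sketch of such an allocator, not necessarily the exact construction used in this change:

```java
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;

final class SmallChunkAllocator {
  // Chunk size = pageSize << maxOrder. With the default 8 KiB page size,
  // maxOrder 11 yields 16 MiB chunks and maxOrder 8 yields 2 MiB chunks.
  // Arena counts reuse Netty's defaults; the PR's actual values may differ.
  static final ByteBufAllocator ALLOCATOR = new PooledByteBufAllocator(
      /* preferDirect= */ true,
      PooledByteBufAllocator.defaultNumHeapArena(),
      PooledByteBufAllocator.defaultNumDirectArena(),
      PooledByteBufAllocator.defaultPageSize(),  // typically 8192
      /* maxOrder= */ 8);                        // 8192 << 8 = 2 MiB

  private SmallChunkAllocator() {}
}
```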

@zhangkun83 marked this pull request as ready for review November 8, 2019 21:53
@zhangkun83 requested a review from ejona86 November 8, 2019 21:53
@zhangkun83 changed the title from "netty: lower netty allocator chunk size from 16MB to 1MB" to "netty: lower netty allocator chunk size from 16MB to 2MB" Nov 14, 2019
@zhangkun83 changed the title from "netty: lower netty allocator chunk size from 16MB to 2MB" to "netty: provide an option to lower netty allocator chunk size from 16MB to 2MB" Nov 14, 2019
@zhangkun83 merged commit 89cd643 into grpc:master Nov 14, 2019
@zhangkun83 deleted the smaller_direct_chunk_size branch November 14, 2019 23:50
ericgribkoff pushed a commit to ericgribkoff/grpc-java that referenced this pull request Dec 6, 2019
netty: provide an option to lower netty allocator chunk size from 16MB to 2MB (grpc#6407)

lock bot locked as resolved and limited conversation to collaborators Feb 13, 2020