Set network receive predictor size to 32kb #23284
Conversation
Previously we calculated Netty's receive predictor size for HTTP and transport traffic based on available memory and worker nodes. This resulted in a receive predictor size between 64kb and 512kb. In our benchmarks this led to increased GC pressure. With this commit we set Netty's receive predictor size to 32kb. This value is a sweet spot between heap memory waste (-> GC pressure) and the effect on request metrics (achieved throughput and latency numbers). Closes elastic#23185
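A minimal sketch of what "setting the receive predictor size" means in Netty 4 terms, assuming a plain ServerBootstrap rather than the actual Elasticsearch transport code: the RCVBUF_ALLOCATOR channel option controls how large a buffer Netty allocates for each read, and a FixedRecvByteBufAllocator pins that size to a constant instead of letting it adapt.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.FixedRecvByteBufAllocator;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class FixedReceivePredictorSketch {
    public static void main(String[] args) {
        ServerBootstrap bootstrap = new ServerBootstrap()
                .group(new NioEventLoopGroup())
                .channel(NioServerSocketChannel.class)
                // Allocate exactly 32kb per read instead of letting an
                // adaptive predictor grow buffers toward 512kb: smaller,
                // uniform allocations mean less short-lived heap garbage.
                .childOption(ChannelOption.RCVBUF_ALLOCATOR,
                        new FixedRecvByteBufAllocator(32 * 1024));
        // A real server would now add a ChannelInitializer and bind a port.
    }
}
```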
@danielmitterdorfer should we also backport to 5.2? |
@clintongormley Sure, it's a pretty isolated change. |
@elasticmachine please test it |
It impacts network requests, so I don't think this should be considered an isolated change. My preference would be for this to bake in master/5.x for a little while before pushing anywhere else. |
LGTM.
I meant it's pretty isolated code-wise. ;) But sure, it has global effects (that's why I ran so many benchmarks before implementing the change). I'm fine with your suggestion to only merge to master first. I'll wait a few days with the backport. @clintongormley Is it ok if I remove all 5.x-related labels in the meantime and add them back once I cherry-pick into the 5.x, 5.3 and 5.2 branches? Thank you for the review @jasontedor! |
^ I removed the 5.x labels while letting it bake on master. I'll re-add them once I cherry-pick into 5.x, 5.3 and 5.2. |
@danielmitterdorfer I do think it's okay to take it to 5.x, that will give it more places to bake. 😄 |
This reverts commit 2a2f3b7.
This reverts commit e1eca74.
@danielmitterdorfer I think this change has baked sufficiently. I'm comfortable with this being integrated into the 5.3 branch now. |
As discussed in #23185 (comment) we need to investigate further before backporting. |
Previously we calculated Netty's receive predictor size for HTTP and transport
traffic based on available memory and worker nodes. This resulted in a receive
predictor size between 64kb and 512kb. In our benchmarks this led to increased
GC pressure.
With this commit we set Netty's receive predictor size to 32kb. This value is
a sweet spot between heap memory waste (-> GC pressure) and the effect on
request metrics (achieved throughput and latency numbers).
Closes #23185
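For contrast, a purely hypothetical reconstruction of the heuristic the commit message describes, deriving the size from available memory and worker count and clamping it to the reported 64kb-512kb range; the formula and the name legacyReceivePredictorSize are illustrative, not the removed Elasticsearch code.

```java
// Hypothetical: the PR states only the inputs (memory, workers) and the
// observed output range (64kb-512kb), not the exact formula that was removed.
static int legacyReceivePredictorSize(long availableHeapBytes, int workerCount) {
    // Illustrative split of a heap fraction across workers.
    long perWorker = (availableHeapBytes / 64) / Math.max(1, workerCount);
    // Clamp to the range the commit message reports.
    return (int) Math.max(64 * 1024, Math.min(512 * 1024, perWorker));
}
```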