
Set network receive predictor size to 32kb #23284

Merged

Conversation

@danielmitterdorfer (Member) commented Feb 21, 2017

Previously we calculated Netty's receive predictor size for HTTP and transport
traffic based on available memory and worker nodes. This resulted in a receive
predictor size between 64kb and 512kb. In our benchmarks this led to increased
GC pressure.

With this commit we set Netty's receive predictor size to 32kb. This value strikes
a sweet spot between heap memory waste (i.e. GC pressure) and the effect on request
metrics (achieved throughput and latency numbers).

Closes #23185
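For context, a fixed receive predictor of this kind corresponds to Netty 4's `FixedRecvByteBufAllocator` configured via `ChannelOption.RCVBUF_ALLOCATOR`. Below is a minimal, illustrative sketch of wiring a fixed 32kb receive buffer allocator into a server bootstrap; this is not the actual Elasticsearch code, and the empty channel initializer and port are placeholders.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.FixedRecvByteBufAllocator;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class FixedReceivePredictorExample {
    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup bossGroup = new NioEventLoopGroup(1);
        NioEventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                .group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                // Fixed 32kb receive predictor instead of a size derived from heap size and worker count.
                .childOption(ChannelOption.RCVBUF_ALLOCATOR, new FixedRecvByteBufAllocator(32 * 1024))
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        // Placeholder: real pipeline handlers (HTTP/transport codecs) would go here.
                    }
                });
            // Placeholder port; blocks until the server channel is closed.
            bootstrap.bind(9200).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
```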

@clintongormley (Contributor)

@danielmitterdorfer should we also backport to 5.2?

@danielmitterdorfer (Member, Author)

@clintongormley Sure, it's a pretty isolated change.

@danielmitterdorfer (Member, Author)

@elasticmachine please test it

@jasontedor (Member)

> Sure, it's a pretty isolated change.

It impacts network requests; I don't think this should be considered an isolated change.

My preference would be for this to bake in master/5.x for a little while before pushing anywhere else.

@jasontedor (Member) left a review comment:

LGTM.

@danielmitterdorfer (Member, Author)

> It impacts network requests; I don't think this should be considered an isolated change.

I meant it's pretty isolated code-wise. ;) But sure, it has global effects (that's why I ran so many benchmarks before implementing the change). I'm fine with your suggestion to only merge to master first; I'll wait a few days with the backport. @clintongormley Is it ok if I remove all 5.x-related labels in the meantime and add them back once I cherry-pick into 5.x, 5.3 and 5.2?

Thank you for the review @jasontedor!

@danielmitterdorfer (Member, Author)

^ I removed the 5.x labels while letting it bake on master. I'll re-add them once I cherry-pick into 5.x, 5.3 and 5.2.

@jasontedor (Member)

@danielmitterdorfer I do think it's okay to take it to 5.x; that will give it more places to bake. 😄

danielmitterdorfer added a commit that referenced this pull request Feb 21, 2017
droberts195 added a commit that referenced this pull request Feb 21, 2017
danielmitterdorfer added a commit that referenced this pull request Feb 22, 2017
Unlike master, 5.x contains both Netty 3 and Netty 4 as modules, and they share
some properties such as the receive predictor. With #23284 we made these
properties unshared, but we need to make them shared again in the backport.

Relates #23284
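For illustration only, a single byte-size setting shared by both Netty modules could look like the sketch below, using Elasticsearch's `Setting.byteSizeSetting`; the setting key and class name here are hypothetical and not the exact names used in the 5.x backport.

```java
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

public class SharedReceivePredictorSetting {
    // Hypothetical shared key; both the Netty 3 and Netty 4 modules would read this same setting.
    public static final Setting<ByteSizeValue> RECEIVE_PREDICTOR_SIZE =
        Setting.byteSizeSetting(
            "transport.netty.receive_predictor_size",
            new ByteSizeValue(32, ByteSizeUnit.KB),
            Property.NodeScope);

    public static void main(String[] args) {
        // With no explicit value configured, the 32kb default applies.
        System.out.println(RECEIVE_PREDICTOR_SIZE.get(Settings.EMPTY));
    }
}
```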
@jasontedor (Member)

@danielmitterdorfer I think this change has baked sufficiently. I'm comfortable with this being integrated into the 5.3 branch now.

@danielmitterdorfer (Member, Author)

As discussed in #23185 (comment), we need to investigate further before backporting.

danielmitterdorfer added a commit that referenced this pull request Mar 29, 2017
danielmitterdorfer added a commit that referenced this pull request Mar 29, 2017
cbuescher pushed a commit to cbuescher/elasticsearch that referenced this pull request Oct 2, 2023
cbuescher pushed a commit to cbuescher/elasticsearch that referenced this pull request Oct 2, 2023