Reuse server send buffers #6608
Conversation
Under significant load, the server concurrently allocates a large number of buffers to serialize and compress responses before sending them over the wire. This creates substantial GC pressure and can cause large spikes in memory allocations. Tests and benchmark results are pending.
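The core idea is to reuse serialization buffers instead of allocating a fresh one per response. Here is a minimal sketch of that pattern using a plain sync.Pool; this is not the exact pool type or the function names in this PR, and serializeResponse is hypothetical:

```go
package sendbuf

import (
	"bytes"
	"sync"
)

// bufPool hands out reusable byte buffers so each response
// serialization does not allocate a fresh slice.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// serializeResponse (hypothetical) borrows a buffer, writes the encoded
// message into it, and returns the bytes plus a release function the
// caller invokes once the data has actually been written to the wire.
func serializeResponse(msg []byte) (out []byte, release func()) {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	buf.Write(msg) // stand-in for real proto encoding / compression
	return buf.Bytes(), func() { bufPool.Put(buf) }
}
```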
@@ -152,6 +152,8 @@ type dataFrame struct {
	// onEachWrite is called every time
	// a part of d is written out.
	onEachWrite func()
	// onSent is called once all the bt
this comment is incomplete?
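For context, a rough sketch of how a completion hook like onSent could be consumed on the transport side to recycle the frame's buffer; the dataFrame fields mirror the diff above, but writeFrame and the loop below are illustrative, not the actual loopyWriter code:

```go
package framesketch

import "io"

// dataFrame mirrors the fields in the diff above; onSent is assumed to
// fire once all bytes of the frame have been handed to the writer.
type dataFrame struct {
	d           []byte
	onEachWrite func()
	onSent      func()
}

// writeFrame (hypothetical) drains the frame and fires onSent exactly
// once when nothing is left to send, letting the owner reclaim d.
func writeFrame(w io.Writer, f *dataFrame) error {
	for len(f.d) > 0 {
		n, err := w.Write(f.d)
		if f.onEachWrite != nil {
			f.onEachWrite()
		}
		f.d = f.d[n:]
		if err != nil {
			return err
		}
	}
	if f.onSent != nil {
		f.onSent()
	}
	return nil
}
```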
// prepareMsg returns the hdr, payload and data
// using the compressors passed or using the
// passed preparedmsg
this comment could be updated
	return err
}

var compData []byte
if shouldCompress(cp, comp) {
could save the result of shouldCompress and reuse in the following lines
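The suggestion, roughly, looks like the following self-contained sketch; the predicate and helper names here are stand-ins, not the actual server.go code:

```go
package compresssketch

// pickPayload illustrates the suggestion: evaluate the compression
// decision once, keep it in a local, and branch on that local in the
// lines below instead of calling the predicate again.
func pickPayload(data []byte, compressorSet bool) ([]byte, bool) {
	doCompress := compressorSet // "should compress" decision, computed once
	var compData []byte
	if doCompress {
		compData = append([]byte(nil), data...) // stand-in for real compression
	}
	if doCompress { // second use of the saved result, no repeated check
		return compData, true
	}
	return data, false
}
```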
}
hdr, payload := msgHeader(data, compData)
// TODO(dfawley): should we be checking len(data) instead?
if len(payload) > s.opts.maxSendMessageSize {
	s.opts.sendBufferPool.Put(dataBuf)
shouldn't this be Put(data) instead? since data is the new buffer after encoding at line 1135.
	return status.Errorf(codes.ResourceExhausted, "grpc: trying to send message larger than max (%d vs. %d)", len(payload), s.opts.maxSendMessageSize)
}
opts.OnSent = func() {
	s.opts.sendBufferPool.Put(dataBuf)
same here, shouldn't this be Put(data)?
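To illustrate the ownership concern behind both comments, here is a sketch with a plain sync.Pool (send, write, and the variable names are hypothetical): if encoding has to grow the borrowed buffer, data ends up backed by a different array than the one dataBuf originally pointed at, and returning the old one recycles memory the response no longer uses.

```go
package poolsketch

import "sync"

var pool = sync.Pool{
	New: func() any { b := make([]byte, 0, 1024); return &b },
}

// send (hypothetical) borrows a buffer, encodes into it, and makes sure
// the slice that actually holds the encoded bytes goes back to the pool.
// If the append below outgrows the borrowed capacity, data is backed by
// a new, larger array, so that is the one worth recycling.
func send(msg []byte, write func([]byte)) {
	dataBuf := pool.Get().(*[]byte)
	data := append((*dataBuf)[:0], msg...) // may allocate a larger backing array
	write(data)
	*dataBuf = data // keep whichever backing array encoding produced
	pool.Put(dataBuf)
}
```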
@PapaCharlie, it seems like your GitHub account has an issue with the CLA. Please fix it using this Help Article, as mentioned in the comment above.
@PapaCharlie Also, please consider filing an issue with us before sending the PR. Changes like this require more investigation and proof, which needs to happen in an issue.
Hey @arvindbr8, I didn't get a chance to link these issues when I opened the PR, but here you go:

- v1.58.0 CPU profile
- v1.58.0 Heap profile (alloc_space)
- Fork CPU profile
- Fork Heap profile (alloc_space)

If you want me to repro this using simpler code (and/or a benchmark) I can give it a shot; it shouldn't be difficult, I just had these profile results handy. As you can see, the allocation overhead of not reusing the send buffers means the server spends a lot of its time collecting them instead of doing meaningful work. By reusing the send buffers, performance increases drastically.
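If it helps, a standalone micro-benchmark along these lines (not the PR's benchmark, just a rough allocation comparison in a `_test.go` file, run with `go test -bench . -benchmem`) shows the difference between allocating per message and reusing pooled buffers:

```go
package poolsketch

import (
	"sync"
	"testing"
)

// sink prevents the compiler from optimizing the allocations away.
var sink []byte

var benchPool = sync.Pool{
	New: func() any { b := make([]byte, 0, 4096); return &b },
}

// BenchmarkAllocPerMessage allocates a fresh buffer for every message,
// mimicking the current behavior.
func BenchmarkAllocPerMessage(b *testing.B) {
	for i := 0; i < b.N; i++ {
		buf := make([]byte, 4096)
		sink = buf
	}
}

// BenchmarkPooledBuffers reuses buffers through a sync.Pool, mimicking
// the behavior this PR proposes; compare allocs/op between the two.
func BenchmarkPooledBuffers(b *testing.B) {
	for i := 0; i < b.N; i++ {
		buf := benchPool.Get().(*[]byte)
		sink = (*buf)[:0]
		benchPool.Put(buf)
	}
}
```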