Memory wasted in Netty channel read ByteBuf #5460
Comments
@lshmouse what Netty version are you using?
As far as I can see, they're using …
Can you guys point me to the code where they bootstrap the server?
@lshmouse usually Netty uses a …
@normanmaurer But according to the code of AdaptiveRecvByteBufAllocator, the buffer size is shrunk only when the actual read size is less than a quarter of the current buffer size two times in a row. In the worst case, the memory used may be 8 times the memory actually needed.
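The shrink rule described above can be sketched in plain Java. This is a simplified model of the behavior, not Netty's actual implementation (the real `AdaptiveRecvByteBufAllocator` walks a precomputed size table rather than halving directly); all names here are illustrative:

```java
// Simplified model of an adaptive receive-buffer guesser.
// Illustrates the "shrink only after two consecutive small reads"
// rule discussed above; Netty's real allocator uses a size table.
public class AdaptiveGuess {
    private final int min;
    private final int max;
    private int guess;
    private int smallReads; // consecutive reads below guess / 4

    public AdaptiveGuess(int min, int initial, int max) {
        this.min = min;
        this.max = max;
        this.guess = initial;
    }

    public int guess() { return guess; }

    public void record(int actualBytes) {
        if (actualBytes >= guess) {
            // Read filled the buffer: grow immediately.
            guess = Math.min(guess * 2, max);
            smallReads = 0;
        } else if (actualBytes < guess / 4) {
            // Shrink only after two consecutive small reads.
            if (++smallReads >= 2) {
                guess = Math.max(guess / 2, min);
                smallReads = 0;
            }
        } else {
            smallReads = 0;
        }
    }

    public static void main(String[] args) {
        AdaptiveGuess g = new AdaptiveGuess(64, 16384, 65536);
        g.record(2048);                 // first small read: no change yet
        System.out.println(g.guess()); // still 16384
        g.record(2048);                 // second small read in a row: shrink
        System.out.println(g.guess()); // 8192
    }
}
```

With steady 2K reads the guess only ratchets down one step per pair of reads, which is why a stream of small frames can sit in buffers many times larger than the data for a while.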
@normanmaurer In Spark we may receive GBs of data and keep it in memory, so this is a big problem. Now we can only consolidate the … Is there a way to let the handlers give a hint to the …? Thanks.
@Apache9 You could write your own RecvByteBufAllocator for this?
@normanmaurer I do not think I can implement the above logic with the current …
@Apache9 see RecvByteBufAllocator.guess() and RecvByteBufAllocator.allocate() ... these are what Netty uses to allocate the buffer. Can you keep a reference from your handler to your custom …
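The idea suggested above (a handler-driven guess) could be modeled roughly like this. This is a plain-Java sketch with illustrative names only; a real version would implement Netty's `RecvByteBufAllocator` interface and be installed on the channel:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a guesser the application handler can steer directly.
// A frame decoder that has parsed a length prefix can hint the
// exact size of the next read, avoiding oversized buffers.
public class HintedGuess {
    private final AtomicInteger nextSize;

    public HintedGuess(int initialSize) {
        this.nextSize = new AtomicInteger(initialSize);
    }

    // Would back RecvByteBufAllocator.Handle.guess() in real Netty.
    public int guess() { return nextSize.get(); }

    // Called by the handler once it knows the next frame length.
    public void hint(int expectedBytes) { nextSize.set(expectedBytes); }

    public static void main(String[] args) {
        HintedGuess g = new HintedGuess(16 * 1024);
        System.out.println(g.guess()); // 16384: default before any hint
        g.hint(2048);                  // handler expects a 2K frame next
        System.out.println(g.guess()); // 2048
    }
}
```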
I will close this for now. Please re-open if you still think there is something that needs to be changed in Netty.
This looks like the same problem, and it's fixed by #9555.
When testing the memory usage of Spark shuffle, which uses Netty for data communication, we found that the memory used by Netty is usually more than 2x the size of the data transferred. The reason is that the ByteBuf fired by the channel is not fully used. For example, when the capacity of the ByteBuf is 16K but the readable bytes are only 2K, 14K of memory is wasted.
If the handler reuses the ByteBuf, the memory used by RPC cannot be controlled accurately.
See: https://github.com/eBay/Spark/blob/master/network/common/src/main/java/org/apache/spark/network/util/TransportFrameDecoder.java#L61
As a newbie to Netty, I don't know if this behavior is expected and we need to handle this situation in the handler, or if there is an option to avoid this waste of memory.
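One option that exists in Netty for capping the per-read allocation is to replace the default adaptive allocator with a fixed-size one on the bootstrap. A minimal configuration sketch (the 2K size here is purely illustrative, not a recommendation):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.FixedRecvByteBufAllocator;

// Allocate a fixed 2K receive buffer per read instead of letting
// AdaptiveRecvByteBufAllocator guess; small frames then waste at
// most the difference between 2K and the frame size.
ServerBootstrap b = new ServerBootstrap();
b.childOption(ChannelOption.RCVBUF_ALLOCATOR,
              new FixedRecvByteBufAllocator(2 * 1024));
```

The trade-off is more read system calls for large transfers, since each read can fill at most the fixed size.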