bufferPool put() should be re-sliced to full BufferSize #3
After more investigation and a better understanding of how sync.Pool works, it seems the issue is that buffers put back into the bufferPool need to be re-sliced to the full BufferSize before being returned to the pool.
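A minimal sketch of that fix, assuming a pool along the lines described in the issue (the names `bufferPool`, `BufferSize`, and the helper `putBuffer` are illustrative, not the project's actual code): re-slice the buffer back to its full capacity before calling Put, so the next Get always sees a full-length buffer.

```go
package main

import (
	"fmt"
	"sync"
)

// BufferSize is the full capacity each pooled buffer is created with
// (8192, matching the size mentioned in the issue).
const BufferSize = 8192

var bufferPool = sync.Pool{
	New: func() interface{} { return make([]byte, BufferSize) },
}

// putBuffer restores the slice to its full capacity before returning
// it to the pool, so a shrunk view never gets recycled.
func putBuffer(buf []byte) {
	bufferPool.Put(buf[:cap(buf)])
}

func main() {
	buf := bufferPool.Get().([]byte)
	buf = buf[:803] // simulate receiving an 803-byte packet
	putBuffer(buf)

	next := bufferPool.Get().([]byte)
	fmt.Println(len(next)) // full BufferSize again, not 803
}
```

Note that re-slicing with `buf[:cap(buf)]` only restores the length; the underlying array and capacity never changed, so this costs nothing.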
korymiller1489 changed the title from "bufferPool runs out of space, even with small packets" to "bufferPool put() should be re-sliced to full BufferSize" on Jun 24, 2020
yeah, this is indeed a bug. Maybe we could send the size along in the structs that go through the channels, slice to use only the valid data, and then re-slice before putting the buffer back into the pool...
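The approach suggested in that comment could look something like the following sketch (the `packet` struct and channel wiring are hypothetical, assuming the pool setup from the issue): carry the valid byte count alongside the buffer, so consumers slice `buf[:n]` for reading instead of shrinking the pooled buffer itself.

```go
package main

import (
	"fmt"
	"sync"
)

const BufferSize = 8192 // assumed pool buffer capacity, per the issue

var bufferPool = sync.Pool{
	New: func() interface{} { return make([]byte, BufferSize) },
}

// packet pairs a pooled buffer with the number of valid bytes in it.
// Readers use buf[:n]; the buffer itself keeps its full length.
type packet struct {
	buf []byte
	n   int
}

func main() {
	ch := make(chan packet, 1)

	// producer: fill a pooled buffer and record the payload length
	buf := bufferPool.Get().([]byte)
	n := copy(buf, []byte("hello"))
	ch <- packet{buf: buf, n: n}

	// consumer: use only the valid bytes, then return the
	// full-capacity buffer to the pool
	p := <-ch
	fmt.Printf("%s\n", p.buf[:p.n]) // prints "hello"
	bufferPool.Put(p.buf[:cap(p.buf)])
}
```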
test #4 pls
Hi,
We've noticed the following issue:
During a series of packets, the bufferPool appears to run out of space: at points, the only space remaining in the pool is the size of the previous packet. In that situation, if the next packet is larger, an error occurs: "A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself."
This issue occurs even with a large BufferSize (8192 bytes) and a small packet size (~800 bytes).
Our test case:
Send 10 packets at 803 bytes size
Send 10 packets at 802 bytes size
Send 10 packets at 805 bytes size (the errors occur here; some packets may get through, but not all of them)
This pattern reproduces the error even with a smaller number of packets. We consistently see the error in the middle of a packet stream, at the point where the next packet is larger than the last.
We modified the code to allocate a new buffer for each packet instead, and the issue appears to be resolved, so it seems to be related to the sync.Pool.
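The mechanism behind this behavior can be sketched as follows. A deterministic one-slot stand-in is used here instead of a real `sync.Pool` (which makes no guarantee about which item Get returns), but the effect is the same: if a slice shrunk to the packet length is put back, the next Get hands out a buffer only as long as the previous packet.

```go
package main

import "fmt"

// One-slot stand-in for the pool, so the demonstration is
// deterministic. The bug does not depend on sync.Pool itself, only on
// recycling a shrunk slice.
var slot = make([]byte, 8192)

func get() []byte  { return slot }
func put(b []byte) { slot = b }

func main() {
	buf := get()
	buf = buf[:803] // an 803-byte packet arrives
	put(buf)        // bug: the shrunk slice is returned as-is

	next := get()
	fmt.Println(len(next)) // 803: too small for the next 805-byte packet
}
```

This matches the reported symptom: the available space is exactly the size of the previous packet, so the stream fails precisely when a larger packet follows a smaller one.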