Server / client should implement some sanity checks to prevent unbounded memory growth #103
Possibly related to #99. I actually believe this is due to:

```go
length := binary.BigEndian.Uint32(lengthPrefix)
b := make([]byte, length)
n, err := io.ReadFull(r, b)
```
Yeah, that would make sense if there was a mismatch between server and client (one was little-endian and the other big-endian). I guess there's one clever thing we could do: reject any client that looks to be little-endian during version negotiation. The lengths should always be the same (for v1.0.0, anyway), so we can check for a specific length. This would at least make it easier to avoid trouble with mismatched dev versions.
I'm also thinking about potential bad actors here too. You're right though, this shouldn't be an issue in the future.
@dburkart This also comes up if someone opens a TCP connection and sends data that isn't our protocol. Do you think it makes sense to add a magic first byte to the version message (or to messages in general) so we can immediately disregard malformed requests?
Moving to 0.1.4 for investigation.
Which resources are not being cleaned up? I checked and it looks like we clean up the
I think it is more likely the symptoms come from what I mentioned over chat: the first bytes being used as the length of the buffer cause a massive consumption of memory if the data isn't in the proper format.
Ahh right. Let me take a look at that.
Fixed the main issue here by backporting the big-endian change to release/0.1.x. Will use this issue to track some defensive programming around message sizes and the like. |
Potential client disconnect memory leak where we need to clean up resources explicitly