
Configure message buffer #82

Closed · mehaase opened this issue Nov 9, 2018 · 4 comments
Labels: enhancement (New feature or request)

mehaase commented Nov 9, 2018

This library uses a Trio channel to "buffer" incoming messages. The channel size is hard-coded at 0, which means that in practice there is no buffering at all: whenever a message is received, the reader task blocks until another task calls get_message(). The result is that the connection won't process other events like ping or close while a message is pending.
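A minimal sketch of the effect, using a bare Trio channel rather than the library's actual internals:

```python
import trio

async def main():
    # Capacity 0: send() does not complete until another task calls receive().
    send_channel, receive_channel = trio.open_memory_channel(0)

    async def reader():
        # Stands in for the connection's reader task: while it is parked on
        # send(), it cannot service pings or close frames.
        await send_channel.send("incoming message")
        print("message handed off")

    async with trio.open_nursery() as nursery:
        nursery.start_soon(reader)
        await trio.sleep(1)  # the reader is blocked for this entire second
        print(await receive_channel.receive())  # plays the role of get_message()

trio.run(main)
```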

We need to allow the user to configure the message queue as follows:

  • The message queue size (i.e. the Trio channel size).
  • The maximum message size.[1]

Both of these should have sensible defaults (I'm totally open to bikeshedding here), and either can be disabled by passing math.inf. When the message queue is full, the reader task will block, just as it currently does. When a received message exceeds the maximum message size, the message will be discarded and the connection will be closed with code 1009 (MESSAGE_TOO_BIG). The maximum size will not be enforced on outgoing messages.

[1] Maximum message size is difficult to enforce for "text" messages, which are delivered as unicode strings. I opened python-hyper/wsproto#86 to improve support in wsproto. For now, "message size" will be defined as the result of len(event.data), which is easy but inaccurate.
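For a sense of what this could look like from the caller's side, here is a hypothetical sketch; the keyword names message_queue_size and max_message_size follow the proposal above but are not settled API:

```python
from trio_websocket import open_websocket_url  # keyword names below are assumed

async def consume(url):
    async with open_websocket_url(
        url,
        message_queue_size=1,      # Trio channel capacity; math.inf would disable the limit
        max_message_size=2 ** 20,  # discard and close with 1009 above 1 MiB
    ) as ws:
        while True:
            message = await ws.get_message()
            print(len(message))    # per footnote [1], len() is the "size" for now
```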

mehaase self-assigned this Nov 9, 2018
belm0 commented Nov 9, 2018

+1

I'm leaning towards queue size 1 as a default, but before settling on a number we should understand the mystery of whether end-to-end blocking is really happening at size 0 (see the discussion in #74).

mehaase commented Nov 16, 2018

Referring to aaugustin/websockets again: buffer configuration was first added in one PR, and the default values were tweaked in a later one. It seems the logic in that library was to pick a memory budget (128 MiB) and select buffer parameters to match.

  • Their default message size is 1 MiB before decoding, which can grow to as much as 4 MiB as a Python str after decoding (see the sketch after this list). This seems reasonable to me.
  • Their default queue size is 32.
  • In my experience using aaugustin/websockets, I found myself using smaller queue sizes to reduce latency between receiving a message and processing it. Your suggestion of 1 sounds good, unless somebody provides a better argument for some other default.
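The 4x figure comes from CPython's string storage: if a message contains even one non-BMP code point, every code point in the str is stored in 4 bytes. A quick illustration (assumes CPython's flexible string representation):

```python
import sys

# ~1 MiB of UTF-8 on the wire: ASCII plus a single astral-plane emoji.
payload = "a" * (2**20 - 4) + "\N{GRINNING FACE}"

print(len(payload.encode("utf-8")))  # 1048576 bytes before decoding
# The single non-BMP code point forces CPython to store *every* code point
# in 4 bytes, so the decoded str occupies roughly 4 MiB:
print(sys.getsizeof(payload))        # ~4 * 2**20 bytes
```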

mehaase added a commit that referenced this issue Nov 16, 2018
mehaase added a commit that referenced this issue Nov 16, 2018
njsmith commented Nov 16, 2018

Yeah, their idea of bounding memory use to 128 MiB is fine for avoiding memory DoS, but that seems huge to me when it comes to latency. Most networks already have too much buffering, you don't need to worry about adding more just for buffering's sake.
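For reference, the 128 MiB bound falls straight out of the two defaults quoted above:

```python
# Worst case buffered in the queue under aaugustin/websockets' defaults:
queue_size = 32               # messages
max_decoded_size = 4 * 2**20  # ~4 MiB per decoded text message
print(queue_size * max_decoded_size // 2**20, "MiB")  # 128 MiB
```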

mehaase added a commit that referenced this issue Nov 17, 2018
mehaase added the enhancement label Nov 17, 2018
mehaase commented Nov 17, 2018

Closed via #92.

mehaase closed this as completed Nov 17, 2018