No flow control on read side #105
I had an issue this week where a slow client's memory usage grew without bound, so I put together a small client/server pair to reproduce it.
The server sends a lot of data really fast, and the client is intentionally slow in order to highlight the issue. After running for about 10-20 seconds, the client is using about 1G of memory.
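The failure mode is easy to reproduce without any sockets: with an unbounded queue, nothing ever pushes back on a fast producer. A minimal sketch (the names here are illustrative, not websockets' API):

```python
import asyncio

async def demo():
    # Unbounded queue, standing in for the buffer of incoming frames.
    queue = asyncio.Queue()

    async def fast_producer():
        # Stands in for the server sending data really fast.
        for i in range(10_000):
            await queue.put(b"x" * 1024)  # never blocks: the queue is unbounded
            if i % 1_000 == 0:
                await asyncio.sleep(0)  # give the consumer a chance to run

    async def slow_consumer():
        # Stands in for the intentionally slow client application.
        while True:
            await queue.get()
            await asyncio.sleep(1)  # far slower than the producer

    consumer = asyncio.ensure_future(slow_consumer())
    await fast_producer()
    backlog = queue.qsize()  # nearly all 10,000 messages are still buffered
    consumer.cancel()
    return backlog

print(asyncio.run(demo()))
```

The backlog, and with it the memory, grows with whatever the producer sends, because nothing ties the read rate to the consumption rate.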
Profiling the client shows that the memory is held by incoming messages queued faster than the application consumes them.
It looks like flow control was implemented on the writer side some time back, but there is nothing equivalent for the reader side.
I'm not an expert in TCP flow control nor in this codebase, so I may be missing something.
Yes, that's an issue.
It looks like flow control should be implemented with pause_reading() and resume_reading() on the transport.
The idea would be: pause reading when the incoming message queue fills up, and resume once the application has drained it.
For efficiency, we may need high/low water marks to avoid calling pause_reading() and resume_reading() too often.
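A sketch of what read-side flow control with water marks could look like. ReadFlowControl and the thresholds are illustrative; only pause_reading()/resume_reading() are asyncio's actual transport API:

```python
class ReadFlowControl:
    """Pause the transport above a high water mark, resume below a low one."""

    def __init__(self, transport, low=16 * 1024, high=64 * 1024):
        self.transport = transport
        self.low = low        # resume reading once buffered bytes drop below this
        self.high = high      # pause reading once buffered bytes reach this
        self.buffered = 0
        self.paused = False

    def data_received(self, data):
        # Called by the protocol for every chunk read off the socket.
        self.buffered += len(data)
        if not self.paused and self.buffered >= self.high:
            self.transport.pause_reading()  # TCP backpressure builds upstream
            self.paused = True

    def data_consumed(self, nbytes):
        # Called whenever the application fetches a message from the queue.
        self.buffered -= nbytes
        if self.paused and self.buffered <= self.low:
            self.transport.resume_reading()
            self.paused = False


# Illustrative usage with a stub transport:
class StubTransport:
    def __init__(self):
        self.calls = []
    def pause_reading(self):
        self.calls.append("pause")
    def resume_reading(self):
        self.calls.append("resume")

fc = ReadFlowControl(StubTransport())
fc.data_received(b"x" * 64 * 1024)   # hits the high water mark
fc.data_consumed(60 * 1024)          # drops below the low water mark
print(fc.transport.calls)            # ['pause', 'resume']
```

The gap between the two marks is what keeps the pause/resume calls infrequent: the transport only toggles when the buffer swings across the whole band, not on every message.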
I don't think the fix needs to be complicated: bounding the incoming message queue with a max_queue argument seems to be enough.
It's a very small patch, and if max_queue defaults to 0, it's a backwards-compatible change. My initial testing indicates that this does improve flow control. I'll do some more testing tonight and send a patch if I think it's working well.
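The max_queue semantics map directly onto asyncio.Queue, where maxsize=0 already means unbounded. A sketch of the backpressure effect (the count_queued function is illustrative, not the patch itself):

```python
import asyncio

async def count_queued(max_queue):
    # maxsize=0 keeps the old, unbounded behaviour; any other value makes
    # put() block once the queue is full. In the protocol, that blocked
    # put() suspends the reader coroutine, which stops reading from the
    # transport and ultimately lets TCP flow control throttle the server.
    queue = asyncio.Queue(maxsize=max_queue)
    for i in range(100):
        try:
            await asyncio.wait_for(queue.put(i), timeout=0.05)
        except asyncio.TimeoutError:
            return i  # the producer blocked here: backpressure at work
    return 100

print(asyncio.run(count_queued(0)))    # 100 -- unbounded, nothing blocks
print(asyncio.run(count_queued(16)))   # 16  -- producer blocks when full
```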
Ah yes, that looks much better!
I'd rather make the queue bounded by default.
Even though you can always take a server offline if you write faster than it can read, in this case it manifests as excessive memory consumption, which isn't the most graceful way to handle the scenario. So I think this problem could be viewed as a DoS vector. That's why I don't want to default to an unbounded queue. Security by default and all that :-)
I'm also wondering whether the queue length should be hardcoded to 1, so that websockets doesn't acknowledge a ping until the application has fetched the previous message. Currently websockets acknowledges pings automatically even if some messages are queued and perhaps won't ever be seen by the application.
That said, one could argue that pings are a protocol level feature and that, as long as all earlier frames have been received and parsed, it's fine to acknowledge subsequent pings. Also, even if the application has fetched a message from the queue, an arbitrarily long async process may take place and the frame may not be fully processed until some later time.
I'll sleep on that question. It just changes whether the queue length is configurable or hardcoded to 1.
One last thing -- it's quite hard to write tests for websockets, partly because testing async stuff is hard, partly because the tests aren't as well structured and documented as they could be. Feel free to leave that part up to me. If you feel like updating the documentation (generated from docstrings) and adding a changelog entry, that's appreciated :-)