Slow client can cause send queue to grow indefinitely #477
Comments
Basic flow control tools for reading are already present. I had not specifically considered a high watermark (presumably a handler that would fire when the buffer rose above a certain threshold), but it wouldn't be too difficult to add at the same time, and it would make the API a little cleaner (no need to explicitly poll with every send).
Thanks for your quick reply. Yes, I noticed the flow control mechanism for reading while digging through the sources and felt that the writing side was a little left out. A handler that fires when a high watermark is hit, and another when the buffer drops back below a specific level, sounds good to me 😄
Hi, I would like to fix this issue by adding a handler that fires when the send queue becomes empty.
I'd accept a PR for a handler that fires when the send queue drops below a particular value.
I have found that websocketpp has no built-in handling for the situation where a client is too slow to process messages sent by the websocketpp server.
I have an application that generates quite a lot of real-time statistics, and I noticed that when certain clients were connected, memory consumption would go through the roof (over a gigabyte in 30 minutes) because m_send_queue in websocketpp/impl/connection_impl.hpp would grow without bound.
I have now added protection in my app by checking the buffer size with get_buffered_amount() and disconnecting the client if it grows too large.
However, I think some form of flow control should be built into the library, like ZeroMQ's high-water mark or some other mechanism.
What do you think?