
Flow control #143

Closed
anderspitman opened this issue Aug 31, 2018 · 9 comments

@anderspitman

Does websocket-stream provide any backpressure, or at least is there any way to close down the stream from the receiving side to tell it to stop sending? My sender is making my receiver run out of memory.

@mcollina
Collaborator

mcollina commented Aug 31, 2018 via email

@dhirajtech86

dhirajtech86 commented Feb 3, 2019

Guys, any update on this? I am facing the same issue, @anderspitman. My scenario: the sender socket client is written in C#. The receiver client is in Node, written using ws and websocket-stream. Then there is a middle server that pipes the connection between the sender and the receiver; this server is also written in Node with ws and websocket-stream. When I pipe in this middle server, the data coming from my C# client arrives at a very fast pace while the receiver consumes it slowly, and the middle server's memory consumption keeps growing until it crashes after some time.

I have implemented pause and resume on the receiver client using events, and that works on the receiver's side, but the middle server's memory keeps growing.

I also tried to use the raw socket of the WebSocket provided by ws for pause and resume, but it seems my pause and resume have no impact on the incoming data.

Please help me here and let me know if my scenario is not clear to you. I will explain more.

FYI, I am sending a 5 GB file over this connection.
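
Roughly, the middle server is doing something like this (heavily simplified sketch; the port and URL here are just placeholders, not my real setup):

```js
// Heavily simplified middle server: pipe the sender's websocket-stream
// connection into another websocket-stream connection towards the receiver.
// The port and the receiver URL are placeholders for illustration only.
const websocketStream = require('websocket-stream')

websocketStream.createServer({ port: 8080 }, (senderStream) => {
  const receiverStream = websocketStream('ws://receiver.example:9090')
  senderStream.pipe(receiverStream)
})
```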

@anderspitman
Author

@mcollina sorry, I somehow missed that you had responded. I ended up needing a language-independent solution, but thanks for your efforts.

@dhirajtech86 I've spent quite a bit of time working on this. Unfortunately the WebSocket protocol doesn't include any built-in flow control, so you have to do backpressure at the application level using something like websocket-stream. However, like you, I wanted to be able to use languages other than JS on the backend. From my research, I'm not aware of any existing language-independent solution. Reactive Streams seems to be the closest, but it appears that the non-Java implementations have basically been abandoned.

I've started working on a very simple specification (and reference implementations in JS, Rust, and Go), called omnistreams. You can check it out here. It's still early but the JavaScript implementation has been working well for me and will be going into production for iobio soon. I'd love to start getting external feedback, and I'd be willing to help out with a C# implementation. The protocol is extremely simple and designed to be implemented easily.

@dhirajtech86

Thanks for replying this fast, @anderspitman. I am not an expert on the topic, but according to my understanding we can manually pause the underlying TCP socket, which will then trigger TCP's backpressure mechanism.

And on the other side, when the sending client detects that the TCP send buffer is full, it will not send more data.

In my scenario the sender is reading data in chunks and sending it. So if a write is not flushed, it will stop reading more data and will pause until the data is flushed.

Am I missing something here?
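
Concretely, the kind of thing I have in mind is roughly this (a sketch only; it assumes a plain ws server where the raw net.Socket is reachable via the upgrade request, and `forwardToReceiver` is just a placeholder for my forwarding logic):

```js
// Sketch: pause/resume the underlying TCP socket on the middle server so
// TCP's own backpressure would (in theory) push back on the sender.
const WebSocket = require('ws')

const wss = new WebSocket.Server({ port: 8080 }) // placeholder port

wss.on('connection', (ws, req) => {
  const tcpSocket = req.socket // the raw net.Socket under this connection

  ws.on('message', (data) => {
    // Stop reading from the TCP socket while this chunk is being handled...
    tcpSocket.pause()
    forwardToReceiver(data, () => {
      // ...and resume once it has been handed off downstream.
      tcpSocket.resume()
    })
  })
})

// Placeholder for whatever actually forwards the chunk to the receiver.
function forwardToReceiver (data, done) { setImmediate(done) }
```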

@anderspitman
Author

anderspitman commented Feb 3, 2019

@dhirajtech86 I'm not an expert either, but you're correct that TCP has backpressure; you don't even have to do it manually. The problem, from what I can tell, is that since WebSockets is an event-based protocol, it fires the message events as soon as they come in and expects your application to take care of it from there. So if your application can't keep up, the WebSocket buffer is just going to keep filling. But at that point the TCP stack is no longer aware of the messages; it just knows that it handed them off to the WebSockets stack, so TCP doesn't do any backpressuring. The situation where you run into trouble is when your application is unable to keep up with your network speed. The only solution I've found is application-level flow control. A more ideal solution might be for WebSockets to automatically detect when your application is behind on processing message events and pause the TCP stream, but that doesn't currently exist.
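
To make "application-level flow control" a bit more concrete, here is a minimal sketch of one possible ack-based scheme. This is not omnistreams itself, just the general idea; the window size, URL, and 'ack' message are made up for the example:

```js
// Sketch of ack-based flow control on the sending side (illustrative only).
// WINDOW, the receiver URL, and the 'ack' message format are assumptions.
const WebSocket = require('ws')

const ws = new WebSocket('ws://receiver.example:9090') // placeholder URL
const WINDOW = 4   // max unacknowledged chunks in flight
let inFlight = 0
const queue = []

function sendChunk (chunk) {
  if (inFlight < WINDOW) {
    inFlight++
    ws.send(chunk)
  } else {
    queue.push(chunk) // hold it until the receiver acknowledges something
  }
}

ws.on('message', (msg) => {
  if (msg.toString() === 'ack') {
    inFlight--
    if (queue.length > 0) sendChunk(queue.shift())
  }
})
```

The receiver side would only send 'ack' once it has actually finished processing a chunk, so the sender can never get more than WINDOW chunks ahead no matter how fast the network is.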

@mcollina
Collaborator

mcollina commented Feb 4, 2019

If Node.js is the target receiver, I would recommend using http2 instead.

This library is just a wrapper on top of https://www.npmjs.com/package/ws. So you might want to open an issue there to check how to handle the flow-control situation on the receiver side. We can handle it on the sending side because of the callback to send().
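
As an illustration of the sending side, something along these lines is possible because the send() callback fires once the data has been written out (the URL and file path below are just placeholders):

```js
// Sketch: sender-side backpressure using ws's send() callback.
// Pause the source stream until each chunk has been flushed.
const WebSocket = require('ws')
const fs = require('fs')

const ws = new WebSocket('ws://middle.example:8080')       // placeholder URL
const fileStream = fs.createReadStream('./big-file.bin')   // placeholder path

ws.on('open', () => {
  fileStream.on('data', (chunk) => {
    fileStream.pause()
    ws.send(chunk, (err) => {
      if (err) return fileStream.destroy(err)
      fileStream.resume() // only read more once this chunk is flushed
    })
  })
})
```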

@anderspitman
Author

@mcollina unfortunately the browser is a primary target for my case. I see that you're using bufferedAmount with a timeout. When I was looking into this, my research indicated that bufferedAmount works very differently between browsers and can't really be relied on for flow control. Have you found it to work well? It doesn't really matter in my case because, as I said, it still doesn't save you if your TCP stack is faster than the receiving application.
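
For reference, the bufferedAmount pattern I'm referring to looks roughly like this in the browser (the threshold and retry interval here are arbitrary numbers):

```js
// Browser-side sketch of the bufferedAmount-with-a-timeout pattern:
// before sending the next chunk, wait until the socket's internal send
// buffer has drained below some arbitrary threshold.
const ws = new WebSocket('wss://example.com/stream') // placeholder URL
const HIGH_WATER_MARK = 1024 * 1024 // 1 MiB, arbitrary

function sendWhenDrained (chunk) {
  if (ws.bufferedAmount > HIGH_WATER_MARK) {
    // Still too much queued locally -- check again shortly.
    setTimeout(() => sendWhenDrained(chunk), 100)
  } else {
    ws.send(chunk)
  }
}
```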

Either way we can close this. Thanks again for your help.

@anderspitman
Author

@dhirajtech86 I just discovered rsocket today. They already have JS and .NET support and should be able to do everything you need.

@dhirajtech86

@anderspitman Thanks, will have a look into it.
