
Expose a way to flush writes in Websocket_lwt_unix #132

Closed

BraulioVM opened this issue Dec 30, 2022 · 5 comments

Comments

@BraulioVM

I am writing a websocket client-server pair and found that when my server crashed (due to my own mistakes) the client would keep "writing" as if the server was still receiving data. By using strace, I could see that the write syscalls on the client were returning EPIPE, yet no exceptions were being raised anywhere that my code could catch and handle.

After some debugging, I have found that it's due to Lwt_io writes being buffered: a call to Lwt_io.write will (most of the time) not write to the socket immediately. The buffered bytes are flushed "in the background", so a write failure is not reported through the promise returned by Lwt_io.write. In order to be able to handle the write error, I'd like to be able to explicitly flush the underlying Lwt_io output channel. I guess I would just add a:

```ocaml
val flush : conn -> unit Lwt.t
```

to the Websocket_lwt_unix module. Do you think this is a reasonable change? I'm willing to make the contribution if so.
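
For concreteness, here is a rough sketch of the usage I have in mind. Everything except the proposed flush is an assumption about the existing Websocket.Frame / Websocket_lwt_unix API, not a verified excerpt of it:

```ocaml
(* Hypothetical usage sketch: the frame constructor and write call are
   assumptions about the API; only [flush] is the proposed addition. *)
open Lwt.Syntax

let send_and_flush conn content =
  (* Build a frame and hand it to the (buffered) writer. *)
  let frame = Websocket.Frame.create ~content () in
  let* () = Websocket_lwt_unix.write conn frame in
  (* Force the buffered bytes onto the socket so that an EPIPE from a
     dead peer fails here, where it can be caught and handled. *)
  Lwt.catch
    (fun () -> Websocket_lwt_unix.flush conn)
    (function
      | Unix.Unix_error (Unix.EPIPE, _, _) ->
        Lwt_io.eprintl "peer is gone; tearing the connection down"
      | exn -> Lwt.fail exn)
```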

@vbmithr
Owner

vbmithr commented Dec 30, 2022

I looked at the Lwt code and it looks like you're right: Lwt_io.write does not always flush its output, and yet returns a promise. Very misleading! Since WebSocket is a datagram-oriented protocol rather than a stream-oriented one, I guess flushing should be done by default.

I now use another library (https://github.com/deepmarker/ocaml-fastws) for my websocket needs (Async only; I never made a release, but it has been working satisfactorily for years in prod), hence I'm not familiar with ocaml-websocket anymore.

But I think flushing should be the default. Any opinions, @copy @paurkedal?
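
To illustrate what I mean by flushing by default, a minimal sketch; the bare Lwt_io.output_channel argument and the write_frame parameter are placeholders for illustration, not the library's actual internals:

```ocaml
(* Sketch of a flush-by-default write.  [write_frame] (how a frame gets
   encoded into the buffer) and the bare channel are placeholders. *)
let write_with_flush
    ~(write_frame : Lwt_io.output_channel -> Websocket.Frame.t -> unit Lwt.t)
    (oc : Lwt_io.output_channel) (frame : Websocket.Frame.t) : unit Lwt.t =
  let open Lwt.Syntax in
  let* () = write_frame oc frame in
  (* Flushing before resolving means the promise only succeeds once the
     bytes reached the kernel, so an EPIPE from a dead peer fails this
     promise instead of some background flush. *)
  Lwt_io.flush oc
```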

@BraulioVM
Author

> Since WebSocket is a datagram-oriented protocol rather than a stream-oriented one, I guess flushing should be done by default.

I'm personally happy with that; I was going to do that in my application anyway (flush on every frame). I don't know whether there are existing use cases where people currently benefit from coalescing several frames into the same write call, though.

> I looked at the Lwt code and it looks like you're right: Lwt_io.write does not always flush its output, and yet returns a promise. Very misleading!

In a sense, I was surprised by the behaviour as well, and it took me a while to figure out what was wrong with my code. In a different sense, what would be the point of Lwt_io.flush if Lwt_io.write always waited for the write to complete before resolving? I know Rust's tokio also resolves writes before they have actually finished, and you won't get an error report until the next write or flush (tokio-rs/tokio#4296 (comment)). I don't know Async, but maybe its buffered writers behave similarly?
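
For reference, a standalone sketch (not from my application) that reproduces the behaviour with plain Lwt_io over a Unix socketpair:

```ocaml
(* Repro sketch: Lwt_io.write resolves even though the peer is gone, and
   the EPIPE only surfaces on the explicit flush. *)
let () =
  (* With SIGPIPE ignored, writing to a closed socket raises EPIPE
     instead of killing the process. *)
  Sys.set_signal Sys.sigpipe Sys.Signal_ignore;
  Lwt_main.run
    begin
      let open Lwt.Syntax in
      let a, b = Lwt_unix.socketpair Unix.PF_UNIX Unix.SOCK_STREAM 0 in
      let oc = Lwt_io.of_fd ~mode:Lwt_io.output a in
      (* Simulate the server crashing: close the other end. *)
      let* () = Lwt_unix.close b in
      let* () = Lwt_io.write oc "frame bytes" in
      (* The write above usually just fills Lwt_io's buffer and resolves
         happily; only the explicit flush below touches the socket. *)
      Lwt.catch
        (fun () ->
           let* () = Lwt_io.flush oc in
           Lwt_io.printl "flush succeeded (unexpected here)")
        (function
          | Unix.Unix_error (Unix.EPIPE, _, _) ->
            Lwt_io.printl "write resolved, but flush failed with EPIPE"
          | exn -> Lwt.fail exn)
    end
```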

@vbmithr
Owner

vbmithr commented Jan 1, 2023

It's fine not to flush after a write, but returning a promise is misleading. Like in Async, it would return an immediate value instead. But IMO a dataframe-oriented API must flush the output, because that's what people have in mind… I guess

@paurkedal
Contributor

I agree it is better to auto-flush there, since the content is already buffered.

(Apropos the question of returning a promise: the reason it may be needed even when the write function does not flush would be to enforce a fixed upper bound on buffering. It's not so relevant when the protocol uses fixed frames, though.)

@vbmithr vbmithr closed this as completed in 3edc21a Jan 2, 2023
@BraulioVM
Author

Thanks @vbmithr and @paurkedal!
