Safer alternative to Lwt_io.establish_server #258

Merged
merged 3 commits into master from establish_server_async on Jun 24, 2016

Conversation

@aantron (Collaborator) commented on Jun 23, 2016

Lwt_io.establish_server is designed to be analogous to Lwt_io.open_connection: it hands you channels, but doesn't try to close them. You have to close them yourself.

People have expressed (1, 2) a desire for a version of establish_server that is more like Lwt_io.with_connection: it runs your callback f to spawn handler threads, and automatically schedules closing the associated channels once each thread completes. This PR adds such a function:

val establish_server_async :
  ?fd : Lwt_unix.file_descr ->
  ?buffer_size : int ->
  ?backlog : int ->                                     (* !!!!! *)
  Unix.sockaddr -> (input_channel * output_channel -> unit Lwt.t) -> server
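
For concreteness, a minimal usage sketch assuming the signature above (the port and handler body are illustrative, not part of the PR): an echo handler that never closes its channels, relying on the implicit close once its thread completes.

(* Illustrative sketch only; the name follows the signature proposed above. *)
let () =
  let addr = Unix.ADDR_INET (Unix.inet_addr_loopback, 9000) in
  let _server =
    Lwt_io.establish_server_async addr (fun (ic, oc) ->
      (* Echo one line back; no explicit close here — the library schedules
         closing ic and oc when this handler thread completes. *)
      Lwt.bind (Lwt_io.read_line ic) (fun line -> Lwt_io.write_line oc line))
  in
  (* Keep the process alive so the server keeps accepting connections. *)
  Lwt_main.run (fst (Lwt.wait ()))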

I'm calling it establish_server_async as suggested by @c-cube for now, but other suggestions are welcome.

This seems like a difficult problem at first: if the implicit close calls raise exceptions, user code has no opportunity to catch them and do logging, among other potential challenges. However, if users need control over that, they can explicitly close the channels in f and surround that with whatever handling code they want. The additional implicit closes will then have no effect.
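
To illustrate the escape hatch just described (an illustrative handler, not code from the PR): closing the channels explicitly inside f, wrapped in whatever error handling is wanted, makes the later implicit closes no-ops.

(* Illustrative handler: close the channels explicitly, logging any close
   errors instead of letting them escape; the implicit close that follows
   then has no effect. *)
let handler (ic, oc) =
  Lwt.finalize
    (fun () ->
      Lwt.bind (Lwt_io.read_line ic) (fun line -> Lwt_io.write_line oc line))
    (fun () ->
      Lwt.catch
        (fun () -> Lwt.bind (Lwt_io.close ic) (fun () -> Lwt_io.close oc))
        (fun exn ->
          Lwt_io.eprintlf "close failed: %s" (Printexc.to_string exn)))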

So this should give a safer API for simple usage, one that nonetheless remains usable in more complex scenarios.

In fact, I've thought about this at length, and I can find only one reason, besides API breakage, not to replace establish_server: one may want to disable implicit channel closing, because the channels are, for example, being stored by f in some data structure. However, we could control that with an optional argument.
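
If we did replace establish_server, such a switch might look something like this (the ?no_close name is hypothetical, not part of this PR):

val establish_server :
  ?fd : Lwt_unix.file_descr ->
  ?buffer_size : int ->
  ?backlog : int ->
  ?no_close : bool ->     (* hypothetical: default false, i.e. close implicitly *)
  Unix.sockaddr -> (input_channel * output_channel -> unit Lwt.t) -> server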

Other things to consider:

  • graceful shutdown,
  • concurrent connection limiting,
  • aforementioned exception handling,

I don't think establish_server_async has any impact on these, relative to establish_server, but...

  • there are probably other things I didn't think of.

I will probably rewrite the documentation after getting a fresh look at it in a few days.

cc @c-cube @fxfactorial @artemkin

Resolves #208.
Resolves #225.

@c-cube (Collaborator) commented on Jun 23, 2016

Didn't look at the code, but this seems very useful indeed. Safe usage of resources is crucial (I hate debugging FD leaks).

About the "concurrent connection limiting": this could be done with an optional lwt_pool. (In fact, I often find myself using a dumb version of lwt_pool with units, basically acting as a semaphore. Would lwt_semaphore be a useful shortcut for this common use case of limiting concurrency? It could also be nice for, say, Lwt_list.map_p and the like.)
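
A sketch of the pool-of-units trick described above (illustrative code with an arbitrary limit of 10, not taken from the thread): the pool acts as a counting semaphore, capping how many handlers run their body at once.

(* A pool of units used purely as a counting semaphore. *)
let limit : unit Lwt_pool.t = Lwt_pool.create 10 (fun () -> Lwt.return_unit)

let limited_handler (ic, oc) =
  Lwt_pool.use limit (fun () ->
    (* The body runs only while one of the 10 "slots" is held. *)
    Lwt.bind (Lwt_io.read_line ic) (fun line -> Lwt_io.write_line oc line))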

@aantron (Collaborator, Author) commented on Jun 23, 2016

About the "concurrent connection limiting": this could be done with an optional lwt_pool

An Lwt_pool could be used "orthogonally" to limit how many connection threads can run the "body" of their handlers, but it won't stop establish_server_async from accepting an arbitrary number of connections (most of which would then block on the pool). I'm wondering whether this is a problem, and whether we need to add an optional argument to establish_server[_async] to control it. That makes me think it's better to wait until someone requests it; then they can explain why they need it :)

would lwt_semaphore be a useful shortcut for this common use-case of limiting concurrency?

Seems like it might be a nice thing to have. Want to be careful, however, because ideally we could reduce the number of features in Lwt, and/or reinterpret more things in terms of others.

@c-cube (Collaborator) commented on Jun 23, 2016

Seems like it might be a nice thing to have. Want to be careful, however, because ideally we could reduce the number of features in Lwt, and/or reinterpret more things in terms of others.

It can be a simple addition to Lwt_pool, actually, with a simple function:

type semaphore = private int t
val semaphore : int -> semaphore
val with_semaphore : semaphore -> (unit -> 'a Lwt.t) -> 'a Lwt.t
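
One possible reading of that sketch, written outside Lwt_pool for illustration (it represents the semaphore as a pool of units, a slight departure from the int t above):

(* Illustrative only: a semaphore as a pool of units, with with_semaphore
   being Lwt_pool.use specialized to it. *)
module Pool_semaphore : sig
  type semaphore
  val semaphore : int -> semaphore
  val with_semaphore : semaphore -> (unit -> 'a Lwt.t) -> 'a Lwt.t
end = struct
  type semaphore = unit Lwt_pool.t
  let semaphore n = Lwt_pool.create n (fun () -> Lwt.return_unit)
  let with_semaphore s f = Lwt_pool.use s f
end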

@aantron (Collaborator, Author) commented on Jun 23, 2016

It can be a simple addition to Lwt_pool, actually, with a simple function:

Yes. I guess a mutex could then also be a semaphore with limit 1, but an Lwt_pool doesn't currently have the same release semantics as Lwt_mutex [edit: not so sure, but either way, it requires investigation], and I don't know, at this point, whether there is an unacceptable performance hit to making an Lwt_mutex be an Lwt_pool "under the hood."

Also, I wonder if/how Lwt_preemptive is related.

One of the commits in this PR also fixes a related bug; its commit message:

When establish_server closes a socket, it first calls shutdown(2), then close(2) on it. When one calls shutdown, if the peer has already closed the connection, the result is ENOTCONN. This is translated into an exception in OCaml.

Before this commit, the server did not handle this exception, so it never called close on the socket in this case, instead allowing a spurious exception to escape.
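
A hedged sketch of the behavior that commit message describes (not the PR's actual code): ignore ENOTCONN from shutdown so that close is always reached.

let close_socket fd =
  (* shutdown(2) fails with ENOTCONN if the peer already closed the
     connection; swallow that so the descriptor still gets closed. *)
  (try Lwt_unix.shutdown fd Unix.SHUTDOWN_ALL
   with Unix.Unix_error (Unix.ENOTCONN, _, _) -> ());
  Lwt_unix.close fd
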
aantron merged commit c346d81 into master on Jun 24, 2016
aantron deleted the establish_server_async branch on Jun 25, 2016
aantron added three commits referencing this pull request on Dec 20, 2016, and three more on Apr 8, Apr 9, and Apr 10, 2017.