Add eio implementation #21

Conversation
d4c75c6 to 38d8fd8
We'll iterate on it for our internal project and merge when ready.
These changes are now required to run the greeter example code.
The last two commits on https://github.com/tmcgilchrist/ocaml-grpc/tree/grpc-eio provide the fixes and port the route guide example to EIO.
702d4b5 to c56d70e
This fixes a race condition where the trailer was ignored because the read body was not fully processed, so the status promise was not resolved by the trailer handler in time.
We don't need a separate request queue if we just fork.
We might wait forever, causing a deadlock. Maybe this race condition won't occur in the real world, because we have already written the status promise, meaning a well-behaved client should terminate the request, but we can't rely on that. It also occurs with mocked request streams in testing, and overall I see no reason why we should exhaust the request stream here.
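Roughly what I mean, as a sketch only (the read_body and status_promise names are illustrative, not taken from this PR): fork a fiber that drains the request body so nothing waits on the client, and await the status promise that the trailers handler resolves.

(* Sketch: drain the request body in a separate fiber instead of queuing,
   so the trailers handler can resolve the status promise even if the client
   keeps the stream open. Names are illustrative. *)
let await_status ~sw ~read_body ~status_promise =
  Eio.Fiber.fork ~sw (fun () ->
      (* Consume whatever the client still sends; we no longer depend on it. *)
      Seq.iter ignore (read_body ()));
  Eio.Promise.await status_promise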
Co-authored-by: Tim McGilchrist <timmcgil@gmail.com>
Thanks @tmcgilchrist. I think the rebasing went well this time. CI is now all green except for the experimental lint workflows. I converted the greeter EIO examples to
Seq.iter
  (fun encoder ->
    let payload = Grpc.Message.make encoder in
    H2.Body.Writer.write_string body payload)
Should this be calling flush after each iteration?
I'm undecided about whether it should. The way it is now will batch up the writes and send them in a smaller number of network packets. Flushing after each write_string would send them immediately to the server, which might be preferable.
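For concreteness, a sketch of flushing per message; this assumes H2.Body.Writer.flush takes a simple callback, which may differ between h2 versions, and encoders stands in for whatever sequence is being iterated:

(* Hedged sketch: flush after every message so each payload is handed to the
   transport immediately instead of being batched. The flush callback
   signature is assumed and may differ across h2 versions. *)
Seq.iter
  (fun encoder ->
    let payload = Grpc.Message.make encoder in
    H2.Body.Writer.write_string body payload;
    H2.Body.Writer.flush body (fun () -> ()))
  encoders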
That's a good question. It could be preferable when aiming for low latency. I'm thinking maybe it should just be opt-in.
On this note, we ran some load tests but they were inconclusive because we stumbled upon a bug in Eio: ocaml-multicore/eio#572
It has been fixed, so I hope we'll have better numbers soon.
We're also experimenting with a different interface for the reader. Lazy sequences might be a bit too opinionated, and they introduce a bit of overhead, so the alternative interface is similar to H2, with on_msg/on_eof callbacks and a conversion to Seq.t as a convenience helper: https://github.com/dialohq/ocaml-grpc/tree/grpc-eio-noseq/lib/grpc-eio
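A rough sketch of the kind of conversion helper meant here, assuming a labelled subscribe function; none of these names are taken from the linked branch:

(* Sketch only: adapt a push-style reader (on_msg/on_eof callbacks) into a
   lazy Seq.t by buffering messages in an Eio.Stream. Names are illustrative
   and may not match grpc-eio-noseq. *)
let to_seq
    (subscribe : on_msg:(string -> unit) -> on_eof:(unit -> unit) -> unit) :
    string Seq.t =
  let stream = Eio.Stream.create max_int in
  subscribe
    ~on_msg:(fun msg -> Eio.Stream.add stream (Some msg))
    ~on_eof:(fun () -> Eio.Stream.add stream None);
  let rec next () =
    match Eio.Stream.take stream with
    | Some msg -> Seq.Cons (msg, next)
    | None -> Seq.Nil
  in
  next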
Interesting experiment; it will be good to see some performance numbers from it. Keeping the Seq.t for a simple interface is a good idea for introducing the library and for less performance-sensitive applications.
Opt-in sounds good; I'm happy to leave that until later on. There are a few other config pieces, like auth and compression, that will need a way to configure them.
else respond_with `Not_acceptable)
| Some _ ->
    (* TODO: not sure if there is a specific way to handle this in grpc *)
    respond_with `Bad_request
This documents the responses required for unsupported encodings: https://github.com/grpc/grpc/blob/master/doc/compression.md#test-cases
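If I read that doc right, an unsupported request encoding should be rejected with gRPC status UNIMPLEMENTED plus a grpc-accept-encoding header listing what the server does support. A hedged sketch; respond_with_trailers and the advertised encodings are made up for illustration:

(* Sketch based on the linked compression doc: reject an unsupported request
   encoding with gRPC status UNIMPLEMENTED (12) and advertise the supported
   encodings. [respond_with_trailers] is an illustrative helper, not an
   existing function in this repo. *)
let reject_unsupported_encoding ~respond_with_trailers =
  let trailers =
    H2.Headers.of_list
      [ ("grpc-status", "12") (* UNIMPLEMENTED *);
        ("grpc-accept-encoding", "identity,gzip") ]
  in
  respond_with_trailers trailers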
Thanks! I'll take a look. By the way, with the Eio implementation and your Async server implementation, this code is now duplicated and could be factored out.
I'll do another pass over the tutorial code after this is merged.
Lower bounds for `eio_main` and `h2-eio`
Co-authored-by: Tim McGilchrist <timmcgil@gmail.com>
print_endline "Try running:";
print_endline "";
print_endline
  {| dune exec -- examples/greeter-client-lwt/greeter_client_lwt.exe <your_name> |};
Suggested change:
-  {| dune exec -- examples/greeter-client-lwt/greeter_client_lwt.exe <your_name> |};
+  {| dune exec -- examples/greeter-client-eio/greeter_client_eio.exe <your_name> |};
Merging this to unblock other PRs.
Both unary and streaming RPCs are implemented, for both client and server. The API mostly follows the existing one, but we can take this opportunity to overhaul it.