add Reqd.schedule_trailers (fixes #97) #141
Conversation
Thanks for sending this. I'll try to get to it this weekend. Before I start reviewing, a potentially related question: would gRPC also need to read trailer headers, in addition to sending them (what this PR does)? There's some machinery in place for that but it's not fully wired up.
Hi Antonio, thanks! Clients need the ability to read the response trailers since they contain the actual status code (the HTTP code should always be 200, regardless of errors) and an optional message. Clients do not need the ability to send request trailers (nor do servers need to read request trailers) as far as I know.
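To make the point above concrete, here is a minimal, self-contained sketch of what a gRPC client does with response trailers: per the gRPC-over-HTTP/2 protocol, the real status travels in the `grpc-status` trailer (with an optional `grpc-message`), while `:status` stays 200. The assoc-list representation of trailers is an illustrative assumption, not the real h2 `Headers.t` type.

```ocaml
(* Extract the gRPC outcome from response trailers. A missing
   grpc-status is mapped to UNKNOWN (code 2), as clients do. *)
let grpc_status_of_trailers trailers =
  match List.assoc_opt "grpc-status" trailers with
  | Some "0" -> Ok ()
  | Some code ->
    let msg =
      Option.value ~default:"" (List.assoc_opt "grpc-message" trailers)
    in
    Error (code, msg)
  | None -> Error ("2", "missing grpc-status trailer")

let () =
  assert (grpc_status_of_trailers [ "grpc-status", "0" ] = Ok ());
  assert (
    grpc_status_of_trailers [ "grpc-status", "5"; "grpc-message", "not found" ]
    = Error ("5", "not found"))
```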
@blandinw I think you're right about this. If you don't mind, I'm going to push a test + a fix for that before discussing this PR further.
@blandinw I like the direction of this PR and the approach you took. Thanks again for sending it over!
I only left minor stylistic / documentation comments. Additionally, would you mind adding an entry to the CHANGES.md file?
Note: I force-pushed to this branch after merging #142, so please make sure to reset locally.
lib/reqd.ml
Outdated
```ocaml
let send_trailers_on_close t new_trailers =
  let go s =
    match s.response_state with
    | Streaming (rsp, rsp_body, _old_trailers) ->
```
Thoughts on failing if you try to schedule trailers multiple times? I think this behavior should be somewhat specified, and additionally documented in the .mli.
I can't think of a use case where we'd need to change the trailers after setting them, and we should notify the user somehow that something is probably wrong.
I was hoping we could somehow avoid runtime errors, but I'm not sure how to do it with the current API. Maybe something to think about if we decide to change the API one day (we could then probably enforce it at the type level).
I'll error out for now.
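A self-contained sketch of the behavior settled on above: scheduling trailers a second time is treated as a programming error and raises. The record below is a simplified stand-in for Reqd's streaming response state, not the real type.

```ocaml
(* Toy stand-in for the part of the response state that holds trailers. *)
type streaming_state = { mutable trailers : (string * string) list option }

(* First call records the trailers; any subsequent call fails loudly,
   so the user is notified that something is probably wrong. *)
let schedule_trailers state new_trailers =
  match state.trailers with
  | Some _ -> failwith "h2: trailers already scheduled"
  | None -> state.trailers <- Some new_trailers

let () =
  let s = { trailers = None } in
  schedule_trailers s [ "grpc-status", "0" ];
  match schedule_trailers s [ "grpc-status", "1" ] with
  | () -> assert false
  | exception Failure _ -> ()
```

Enforcing this at the type level (as floated above) would indeed need an API change, e.g. a response-building step that consumes the trailers exactly once.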
```ocaml
  in
  match t.state with
  | Idle | Active (Open (WaitingForPeer | PartialHeaders _), _) | Closed _ ->
    assert false
```
I think it's possible, in certain cases, to retain a reference to the request descriptor when the stream has been closed (e.g. because of an error). The other cases seem fine to me, but the `Closed` case should probably be handled separately.
I fail with a useful error message in the `Closed` and `Reserved` cases. Let me know if this is what you had in mind.
lib_test/test_h2_server.ml
Outdated
```ocaml
      Alcotest.fail
        "Expected state machine to issue a write operation after seeing \
         headers."

(* TODO: test for trailer headers. *)
```
remove TODO please :)
Addressed your comments in a new commit for easier review; please squash before merging.
lib/reqd.ml
Outdated
```ocaml
    ~max_frame_size:t.max_frame_size
    ~flags:Flags.(set_end_stream default_flags)
    t.id

let frame_info flags =
```
Correct, I used to send an extra DATA frame, but changed my mind.
```ocaml
    stream.encoder
    frame_info
    trailers;
  close_stream t;
```
the previous behavior flushed the writer before marking the stream as closed. Was this change intentional?
Here's the recent history for this line of code:
- I added the `flush` call with a callback to `Stream.finish_stream` in d7e5795#diff-71bafa60481b1abadc94a8c1599b45c6R492 (tests would fail otherwise)
- you made the `Stream.finish_stream` a `close_stream` call in b126bb2#diff-71bafa60481b1abadc94a8c1599b45c6R488
- since `close_stream` already flushes (either directly, or via `reset_stream`), I removed the explicit flush call
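The argument can be sketched with a toy model: if `close_stream` itself flushes (directly or via `reset_stream`), an explicit flush immediately before it is redundant. The `writer` record below is an illustrative stand-in, not the real `Writer` module.

```ocaml
(* Toy writer: output accumulates in [pending] until flushed. *)
type writer = { mutable pending : string list; mutable flushed : string list }

let flush w =
  w.flushed <- w.flushed @ List.rev w.pending;
  w.pending <- []

(* Mirrors the claim above: closing the stream already flushes. *)
let close_stream w = flush w

let () =
  let w = { pending = [ "trailers HEADERS frame" ]; flushed = [] } in
  (* No explicit flush here: close_stream takes care of it. *)
  close_stream w;
  assert (w.pending = []);
  assert (w.flushed = [ "trailers HEADERS frame" ])
```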
`dune runtest` and my toy gRPC project are running fine, and a quick look at the code makes me think it should work, but I'll defer to you since you're more experienced with the codebase.
Good point -- I forgot `close_stream` already flushes.
@blandinw thank you for addressing my comments. This looks almost ready now! I just left one additional question. Thanks again!
Happy to contribute, it was fun 😄
This PR makes sending trailing headers (a.k.a. trailers) possible, and thus allows H2 to be used to implement gRPC.
I've been going back and forth on the best patch to write, and I believe this one strikes a good balance, but I'm curious to hear your thoughts.
I had a look at the changes proposed in #72 and I believe it suffers from the sequencing issue described below (commit #1). However, I did steal the test case from there.
Commit 1 (move Streaming flush logic out of Body): `has_pending_output` forwards to Faraday; no more `t.write_final_data_frame` and no more `response_body_requires_output`.

I also noticed a potential bug where I believe the zero-length DATA frame is never sent if we're flow-controlled (maxed out sent bytes) in `Scheduler.write`.

Commit 2 (allow trailing headers, fixes #97).

Quick note: I was confused at the beginning as to how the state machine worked, until I realized it relies on Gluten. It's not obvious at first if you're grepping around (I could not find `next_write_operation` call sites). You may want to add a quick blurb or schema showing the `Gluten -> Server_connection -> Scheduler -> Reqd -> Writer -> Faraday` chain.