
[WIP] fix(benches): re-enable the recently disabled benches #2934

Merged · 1 commit · Aug 2, 2022

Conversation

@oddgrd (Contributor) commented on Jul 30, 2022

Re-enable two of the benches disabled in #2929, using `hyper::server::conn` rather than the higher-level `Server` struct, which is being removed. A sketch of the approach follows the list below.

  • benches/pipeline.rs
  • benches/server.rs
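For readers following along, here is a minimal sketch of serving connections with `hyper::server::conn` instead of `Server`, assuming the hyper 0.14-style `server::conn::Http` API (the module layout on master was in flux at the time). The address, service body, and error handling are placeholders, not the benchmark's actual code:

```rust
use std::convert::Infallible;

use hyper::service::service_fn;
use hyper::{Body, Request, Response};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("127.0.0.1:0").await?;

    loop {
        let (stream, _) = listener.accept().await?;
        tokio::spawn(async move {
            // Drive one HTTP/1 connection to completion; this replaces the
            // accept-and-spawn loop that `Server` used to run internally.
            let served = hyper::server::conn::Http::new()
                .serve_connection(
                    stream,
                    service_fn(|_req: Request<Body>| async {
                        Ok::<_, Infallible>(Response::new(Body::from("Hello")))
                    }),
                )
                .await;
            if let Err(e) = served {
                eprintln!("connection error: {e}");
            }
        });
    }
}
```

The key difference from `Server` is that the accept loop and per-connection tasks now live in user code, which is also why graceful shutdown no longer comes for free.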

I tested the updated benches by running them on this branch and on the commit before #2929, and they have the same performance. I tried to implement graceful shutdown, but I'm not sure how to do it without `Server`. I see there is an issue (#2862) to add a helper for this to hyper-util, but if it can be implemented without it, I am happy to keep trying.
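For illustration, a sketch of connection-level graceful shutdown, assuming hyper 0.14's `Connection::graceful_shutdown`; the `serve_until_shutdown` helper and the `shutdown_rx` channel are hypothetical, not part of this PR:

```rust
use std::convert::Infallible;

use hyper::service::service_fn;
use hyper::{Body, Request, Response};
use tokio::net::TcpStream;
use tokio::sync::oneshot;

// A caller would fire `shutdown_rx` when the benchmark iteration is done.
async fn serve_until_shutdown(stream: TcpStream, shutdown_rx: oneshot::Receiver<()>) {
    let conn = hyper::server::conn::Http::new().serve_connection(
        stream,
        service_fn(|_req: Request<Body>| async {
            Ok::<_, Infallible>(Response::new(Body::from("Hello")))
        }),
    );
    tokio::pin!(conn);

    tokio::select! {
        // Either the connection finishes on its own...
        res = conn.as_mut() => {
            if let Err(e) = res {
                eprintln!("connection error: {e}");
            }
        }
        // ...or we are told to stop: finish in-flight requests, then close,
        // instead of dropping the connection mid-request.
        _ = shutdown_rx => {
            conn.as_mut().graceful_shutdown();
            if let Err(e) = conn.await {
                eprintln!("error during graceful shutdown: {e}");
            }
        }
    }
}
```

This is presumably the kind of per-connection boilerplate the hyper-util helper proposed in #2862 would wrap.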

PS. I don't have a firm grasp on async yet, so please let me know if I did something weird 😄

Partially closes #2930

@oddgrd (Contributor, Author) commented on Jul 31, 2022

I've managed to get all but two of the end_to_end benches to work, but a few of them have quite degraded performance compared to the previous implementation. I assume this is partly because the previously used `Client` has a connection pool built in? Of course there is also something wrong with my implementation, as two of the benches just hang and do nothing. I've pushed a commit with my solution so far, with TODOs on the benches that have reduced performance and the two broken ones commented out.

I've also attached the logs from one of the broken benches (they both get stuck like this).

Logs from failing benches
TRACE mio::poll > registering event source with poller: token=Token(0), interests=READABLE | WRITABLE
 TRACE mio::poll > registering event source with poller: token=Token(1), interests=READABLE | WRITABLE
 TRACE hyper::client::conn > client handshake Http1
 TRACE mio::poll           > registering event source with poller: token=Token(2), interests=READABLE | WRITABLE
 TRACE hyper::proto::h1::conn > Conn::read_head
 TRACE hyper::proto::h1::conn > flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Busy }
 TRACE want                   > signal: Want
 TRACE want                   > signal found waiting giver, notifying
 TRACE hyper::proto::h1::conn > flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Busy }
 TRACE want                   > poll_want: taker wants!
 TRACE tracing::span          > encode_headers;
 TRACE tracing::span::active  > -> encode_headers;
 TRACE hyper::proto::h1::role > Client::encode method=GET, body=None
 TRACE tracing::span::active  > <- encode_headers;
 TRACE tracing::span          > -- encode_headers;
 DEBUG hyper::proto::h1::io   > flushed 45 bytes
 TRACE hyper::proto::h1::conn > flushed({role=client}): State { reading: Init, writing: KeepAlive, keep_alive: Busy }
 TRACE hyper::proto::h1::conn > Conn::read_head
 TRACE hyper::proto::h1::io   > received 45 bytes
 TRACE tracing::span          > parse_headers;
 TRACE tracing::span::active  > -> parse_headers;
 TRACE hyper::proto::h1::role > Request.parse bytes=45
 TRACE hyper::proto::h1::role > Request.parse Complete(45)
 TRACE tracing::span::active  > <- parse_headers;
 TRACE tracing::span          > -- parse_headers;
 DEBUG hyper::proto::h1::io   > parsed 0 headers
 DEBUG hyper::proto::h1::conn > incoming body is empty
 TRACE tracing::span          > encode_headers;
 TRACE tracing::span::active  > -> encode_headers;
 TRACE hyper::proto::h1::role > Server::encode status=200, body=Some(Known(10485760)), req_method=Some(GET)
 TRACE tracing::span::active  > <- encode_headers;
 TRACE tracing::span          > -- encode_headers;
 TRACE hyper::proto::h1::io   > buffer.queue self.len=82 buf.len=10485760
 DEBUG hyper::proto::h1::io   > flushed 2588672 bytes
 TRACE hyper::proto::h1::conn > Conn::read_head
 TRACE hyper::proto::h1::io   > received 8192 bytes
 TRACE tracing::span          > parse_headers;
 TRACE tracing::span::active  > -> parse_headers;
 TRACE hyper::proto::h1::role > Response.parse bytes=8192
 TRACE hyper::proto::h1::role > Response.parse Complete(82)
 TRACE tracing::span::active  > <- parse_headers;
 TRACE tracing::span          > -- parse_headers;
 DEBUG hyper::proto::h1::io   > parsed 2 headers
 DEBUG hyper::proto::h1::conn > incoming body is content-length (10485760 bytes)
 TRACE hyper::proto::h1::decode > decode; state=Length(10485760)
 TRACE hyper::proto::h1::conn   > flushed({role=client}): State { reading: Body(Length(10477650)), writing: KeepAlive, keep_alive: Busy }

@seanmonstar (Member) commented

Yes, the end_to_end benches were using a connection pool. It'd be important to adjust the benchmarks to reuse the connection. I assume we'd need to change that file to spawn N connections, save the `SendRequest` handles, and keep sending on them. But we can merge the first two first.
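A sketch of that suggestion, assuming hyper 0.14's `client::conn::handshake`; `run_pooled` and its parameters are hypothetical names, not the benchmark's real shape:

```rust
use hyper::{header, Body, Request};
use tokio::net::TcpStream;

async fn run_pooled(addr: &str, n_conns: usize, reqs_per_conn: usize) -> hyper::Result<()> {
    let mut handles = Vec::with_capacity(n_conns);

    // Open N connections up front and keep their `SendRequest` handles.
    for _ in 0..n_conns {
        let stream = TcpStream::connect(addr).await.expect("connect");
        let (send_request, conn) = hyper::client::conn::handshake(stream).await?;
        // The `Connection` future drives the actual I/O; it must be
        // polled on its own task for requests to make progress.
        tokio::spawn(async move {
            if let Err(e) = conn.await {
                eprintln!("connection error: {e}");
            }
        });
        handles.push(send_request);
    }

    // Keep sending on the saved handles instead of reconnecting per request,
    // which is what the old pooled `Client` did implicitly.
    for send_request in &mut handles {
        for _ in 0..reqs_per_conn {
            send_request.ready().await?;
            let req = Request::builder()
                .uri("/")
                .header(header::HOST, addr)
                .body(Body::empty())
                .unwrap();
            let res = send_request.send_request(req).await?;
            // Drain the body so the HTTP/1 connection becomes ready again.
            hyper::body::to_bytes(res.into_body()).await?;
        }
    }
    Ok(())
}
```

Draining each response body before the next send is what lets a handle become ready again on an HTTP/1 connection; skipping that step is one way a bench like this can hang.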

Commit: re-enable the recently disabled pipeline and e2e bench using
`hyper::server::conn` instead of the removed higher-level `Server` api
@oddgrd (Contributor, Author) commented on Aug 2, 2022

Hi, thanks for the tip on the e2e bench! I stashed the e2e commit for now and fixed up the others so they're ready to merge first, as you suggested. They perform as before. I'll have another look at the e2e benches, but if anyone else wants to take a shot, that's fine by me, of course.

@oddgrd marked this pull request as ready for review on August 2, 2022 at 11:26
@seanmonstar merged commit c558647 into hyperium:master on Aug 2, 2022
@oddgrd deleted the feat/re-enable-benches branch on August 2, 2022 at 14:09
Successfully merging this pull request may close: Re-enable the end-to-end, pipeline, and server benchmarks (#2930)