UnexpectedEof Error on HTTP/2 Requests with Rustls #3427

Closed
cablehead opened this issue Nov 19, 2023 · 16 comments · Fixed by hyperium/h2#743
Labels
C-bug Category: bug. Something is wrong. This is bad!

Comments

@cablehead

cablehead commented Nov 19, 2023

Version

hyper = { version = "1.0.1", features = ["full"] }
tokio = { version = "1", features = ["full"] }
http-body-util = "0.1"
hyper-util = { version = "0.1.1", features = ["full"] }
tokio-rustls = "0.24.1"
rustls-pemfile = "1.0.4"
rustls = "0.21.9"

Platform

Darwin m1 23.0.0 Darwin Kernel Version 23.0.0: Fri Sep 15 14:41:34 PDT 2023; root:xnu-10002.1.13~1/RELEASE_ARM64_T8103 arm64

Description

When making successful HTTP/2 requests over TLS, the following server code logs an UnexpectedEof error for each request:

        tokio::task::spawn(async move {
            // Serve the TLS-wrapped connection using hyper's HTTP/2 builder.
            if let Err(err) = hyper::server::conn::http2::Builder::new(
                hyper_util::rt::tokio::TokioExecutor::new(),
            )
            .serve_connection(io, service_fn(hello))
            .await
            {
                println!("Error serving connection: {:?}", err);
            }
        });

I tried this code:

https://gist.github.com/cablehead/e24990e364ecb3e6028d5978acd0c1c9
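For reference, the overall shape of that code is roughly this (a condensed sketch, not the exact gist; it assumes tokio-rustls 0.24 with ALPN already set to h2, and elides building the rustls ServerConfig with the self-signed certificate):

```rust
use std::sync::Arc;

use http_body_util::Full;
use hyper::{body::{Bytes, Incoming}, service::service_fn, Request, Response};
use hyper_util::rt::{TokioExecutor, TokioIo};
use tokio::net::TcpListener;
use tokio_rustls::TlsAcceptor;

async fn hello(_req: Request<Incoming>) -> Result<Response<Full<Bytes>>, hyper::Error> {
    Ok(Response::new(Full::new(Bytes::from("Hello, World!"))))
}

// `config` is a rustls::ServerConfig already carrying the self-signed cert
// and ALPN "h2" (building it is elided here).
async fn serve(config: Arc<tokio_rustls::rustls::ServerConfig>) -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:3000").await?;
    let acceptor = TlsAcceptor::from(config);

    loop {
        let (stream, _addr) = listener.accept().await?;
        let acceptor = acceptor.clone();

        tokio::task::spawn(async move {
            // Complete the TLS handshake first.
            let tls = match acceptor.accept(stream).await {
                Ok(tls) => tls,
                Err(err) => {
                    println!("Error accepting TLS connection: {:?}", err);
                    return;
                }
            };

            // Adapt the TLS stream to hyper's IO traits and serve HTTP/2 on it.
            let io = TokioIo::new(tls);
            if let Err(err) = hyper::server::conn::http2::Builder::new(TokioExecutor::new())
                .serve_connection(io, service_fn(hello))
                .await
            {
                println!("Error serving connection: {:?}", err);
            }
        });
    }
}
```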

I performed a request using:

% curl -v -k --http2 https://localhost:3000
*   Trying 127.0.0.1:3000...
* Connected to localhost (127.0.0.1) port 3000 (#0)
* ALPN: offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=localhost
*  start date: Nov 17 19:09:27 2023 GMT
*  expire date: Nov 16 19:09:27 2024 GMT
*  issuer: CN=localhost
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
* using HTTP/2
* h2 [:method: GET]
* h2 [:scheme: https]
* h2 [:authority: localhost:3000]
* h2 [:path: /]
* h2 [user-agent: curl/8.1.2]
* h2 [accept: */*]
* Using Stream ID: 1 (easy handle 0x132814200)
> GET / HTTP/2
> Host: localhost:3000
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/2 200
< date: Sun, 19 Nov 2023 17:36:02 GMT
< content-length: 13
<
* Connection #0 to host localhost left intact
Hello, World!%

This successfully returns "Hello, World!" from the server. However, the server prints out the following error on every successful request:

Error serving connection: hyper::Error(Io, Kind(UnexpectedEof))

Using the http1 builder with the same TLS wrapper does not exhibit this behavior, and neither does HTTP/2 without the TLS wrapper.

The expected behavior is that no error should be logged when a request completes successfully.

@cablehead cablehead added the C-bug Category: bug. Something is wrong. This is bad! label Nov 19, 2023
@seanmonstar
Member

Hm, is this a regression? Did it happen in 0.14?

@cablehead
Author

I'm pretty sure I've seen it with 0.14 too: it's easier to tease out with v1, as concerns are better separated now(!). Lemme try to reproduce...

@cablehead
Author

cablehead commented Nov 20, 2023

Yeah, it's not a regression. This is a reproduction with 0.14:

https://github.com/cablehead/hyper-with-tls/tree/main/hyper-0.14-rustls

@seanmonstar
Member

Well that's good at least. Is it only with curl? Or do other clients also see this? Are you able to enable h2's logs to see what it was expecting?
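(For example, something like this at the top of main should do it, assuming tracing-subscriber with its env-filter feature; then run with RUST_LOG=h2=trace:)

```rust
fn main() {
    // Emit h2's frame-level logs; the filter comes from RUST_LOG, e.g. RUST_LOG=h2=trace.
    tracing_subscriber::fmt()
        .with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
        .init();

    // ... start the tokio runtime and the server as before ...
}
```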

@cablehead
Author

cablehead commented Nov 21, 2023

I've put up the repro for v1, and turned on tracing here: https://github.com/cablehead/hyper-with-tls/tree/main/hyper-1.0.1-rustls

These are the logs from curl -v -k --http2 https://localhost:3000

2023-11-21T00:58:03.311082Z DEBUG server_handshake:FramedWrite::buffer{frame=Settings { flags: (0x0), max_concurrent_streams: 200, initial_window_size: 1048576, max_frame_size: 16384, max_header_list_size: 16777216 }}: h2::codec::framed_write: send frame=Settings { flags: (0x0), max_concurrent_streams: 200, initial_window_size: 1048576, max_frame_size: 16384, max_header_list_size: 16777216 }
2023-11-21T00:58:03.311223Z TRACE server_handshake:FramedWrite::buffer{frame=Settings { flags: (0x0), max_concurrent_streams: 200, initial_window_size: 1048576, max_frame_size: 16384, max_header_list_size: 16777216 }}: h2::frame::settings: encoding SETTINGS; len=24
2023-11-21T00:58:03.311249Z TRACE server_handshake:FramedWrite::buffer{frame=Settings { flags: (0x0), max_concurrent_streams: 200, initial_window_size: 1048576, max_frame_size: 16384, max_header_list_size: 16777216 }}: h2::frame::settings: encoding setting; val=MaxConcurrentStreams(200)
2023-11-21T00:58:03.311260Z TRACE server_handshake:FramedWrite::buffer{frame=Settings { flags: (0x0), max_concurrent_streams: 200, initial_window_size: 1048576, max_frame_size: 16384, max_header_list_size: 16777216 }}: h2::frame::settings: encoding setting; val=InitialWindowSize(1048576)
2023-11-21T00:58:03.311269Z TRACE server_handshake:FramedWrite::buffer{frame=Settings { flags: (0x0), max_concurrent_streams: 200, initial_window_size: 1048576, max_frame_size: 16384, max_header_list_size: 16777216 }}: h2::frame::settings: encoding setting; val=MaxFrameSize(16384)
2023-11-21T00:58:03.311278Z TRACE server_handshake:FramedWrite::buffer{frame=Settings { flags: (0x0), max_concurrent_streams: 200, initial_window_size: 1048576, max_frame_size: 16384, max_header_list_size: 16777216 }}: h2::frame::settings: encoding setting; val=MaxHeaderListSize(16777216)
2023-11-21T00:58:03.311288Z TRACE server_handshake:FramedWrite::buffer{frame=Settings { flags: (0x0), max_concurrent_streams: 200, initial_window_size: 1048576, max_frame_size: 16384, max_header_list_size: 16777216 }}: h2::codec::framed_write: encoded settings rem=33
2023-11-21T00:58:03.311333Z TRACE server_handshake: h2::server: state=Flushing(_)
2023-11-21T00:58:03.311349Z TRACE server_handshake:flush:FramedWrite::flush: h2::codec::framed_write: queued_data_frame=false
2023-11-21T00:58:03.311378Z TRACE server_handshake:flush:FramedWrite::flush: h2::codec::framed_write: flushing buffer
2023-11-21T00:58:03.311396Z TRACE server_handshake: h2::server: flush.poll=Ready
2023-11-21T00:58:03.311484Z TRACE server_handshake: h2::proto::streams::flow_control: inc_window; sz=65535; old=0; new=65535
2023-11-21T00:58:03.311533Z TRACE server_handshake: h2::proto::streams::flow_control: inc_window; sz=65535; old=0; new=65535
2023-11-21T00:58:03.311541Z TRACE server_handshake: h2::proto::streams::prioritize: Prioritize::new; flow=FlowControl { window_size: Window(65535), available: Window(65535) }
2023-11-21T00:58:03.311587Z TRACE server_handshake: h2::server: connection established!
2023-11-21T00:58:03.311641Z TRACE server_handshake: h2::proto::streams::recv: set_target_connection_window; target=1048576; available=65535, reserved=0
2023-11-21T00:58:03.311678Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::connection: connection.state=Open
2023-11-21T00:58:03.311720Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2023-11-21T00:58:03.311755Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: tokio_util::codec::framed_impl: attempting to decode a frame
2023-11-21T00:58:03.311769Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: tokio_util::codec::framed_impl: frame decoded from buffer
2023-11-21T00:58:03.311780Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: read.bytes=27
2023-11-21T00:58:03.311816Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=27}: h2::codec::framed_read: decoding frame from 27B
2023-11-21T00:58:03.311839Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=27}: h2::codec::framed_read: frame.kind=Settings
2023-11-21T00:58:03.311884Z DEBUG server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: received frame=Settings { flags: (0x0), enable_push: 0, max_concurrent_streams: 100, initial_window_size: 10485760 }
2023-11-21T00:58:03.311921Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::connection: recv SETTINGS frame=Settings { flags: (0x0), enable_push: 0, max_concurrent_streams: 100, initial_window_size: 10485760 }
2023-11-21T00:58:03.311941Z DEBUG server_handshake:Connection{peer=Server}:poll:poll_ready:FramedWrite::buffer{frame=Settings { flags: (0x1: ACK) }}: h2::codec::framed_write: send frame=Settings { flags: (0x1: ACK) }
2023-11-21T00:58:03.311953Z TRACE server_handshake:Connection{peer=Server}:poll:poll_ready:FramedWrite::buffer{frame=Settings { flags: (0x1: ACK) }}: h2::frame::settings: encoding SETTINGS; len=0
2023-11-21T00:58:03.311964Z TRACE server_handshake:Connection{peer=Server}:poll:poll_ready:FramedWrite::buffer{frame=Settings { flags: (0x1: ACK) }}: h2::codec::framed_write: encoded settings rem=9
2023-11-21T00:58:03.311978Z TRACE server_handshake:Connection{peer=Server}:poll:poll_ready: h2::proto::settings: ACK sent; applying settings
2023-11-21T00:58:03.312005Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2023-11-21T00:58:03.312014Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: tokio_util::codec::framed_impl: attempting to decode a frame
2023-11-21T00:58:03.312024Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: tokio_util::codec::framed_impl: frame decoded from buffer
2023-11-21T00:58:03.312033Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: read.bytes=13
2023-11-21T00:58:03.312046Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=13}: h2::codec::framed_read: decoding frame from 13B
2023-11-21T00:58:03.312057Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=13}: h2::codec::framed_read: frame.kind=WindowUpdate
2023-11-21T00:58:03.312081Z DEBUG server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: received frame=WindowUpdate { stream_id: StreamId(0), size_increment: 1048510465 }
2023-11-21T00:58:03.312095Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::connection: recv WINDOW_UPDATE frame=WindowUpdate { stream_id: StreamId(0), size_increment: 1048510465 }
2023-11-21T00:58:03.312124Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::flow_control: inc_window; sz=1048510465; old=65535; new=1048576000
2023-11-21T00:58:03.312158Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2023-11-21T00:58:03.312167Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: tokio_util::codec::framed_impl: attempting to decode a frame
2023-11-21T00:58:03.312176Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: tokio_util::codec::framed_impl: frame decoded from buffer
2023-11-21T00:58:03.312185Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: read.bytes=39
2023-11-21T00:58:03.312198Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=39}: h2::codec::framed_read: decoding frame from 39B
2023-11-21T00:58:03.312209Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=39}: h2::codec::framed_read: frame.kind=Headers
2023-11-21T00:58:03.312240Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=39}: h2::frame::headers: loading headers; flags=(0x5: END_HEADERS | END_STREAM)
2023-11-21T00:58:03.312281Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=39}:hpack::decode: h2::hpack::decoder: decode
2023-11-21T00:58:03.312293Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=39}:hpack::decode: h2::hpack::decoder: rem=30 kind=Indexed
2023-11-21T00:58:03.312329Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=39}:hpack::decode: h2::hpack::decoder: rem=29 kind=Indexed
2023-11-21T00:58:03.312353Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=39}:hpack::decode: h2::hpack::decoder: rem=28 kind=LiteralWithIndexing
2023-11-21T00:58:03.312433Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=39}:hpack::decode: h2::hpack::decoder: rem=16 kind=Indexed
2023-11-21T00:58:03.312447Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=39}:hpack::decode: h2::hpack::decoder: rem=15 kind=LiteralWithIndexing
2023-11-21T00:58:03.312504Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=39}:hpack::decode: h2::hpack::decoder: rem=5 kind=LiteralWithIndexing
2023-11-21T00:58:03.312525Z DEBUG server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: received frame=Headers { stream_id: StreamId(1), flags: (0x5: END_HEADERS | END_STREAM) }
2023-11-21T00:58:03.312540Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::connection: recv HEADERS frame=Headers { stream_id: StreamId(1), flags: (0x5: END_HEADERS | END_STREAM) }
2023-11-21T00:58:03.312553Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::flow_control: inc_window; sz=1048576; old=0; new=1048576
2023-11-21T00:58:03.312561Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::flow_control: inc_window; sz=10485760; old=0; new=10485760
2023-11-21T00:58:03.312584Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::streams: recv_headers; stream=StreamId(1); state=State { inner: Idle }
2023-11-21T00:58:03.312614Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::recv: opening stream; init_window=1048576
2023-11-21T00:58:03.312649Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::store: Queue::push_back
2023-11-21T00:58:03.312657Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::store:  -> first entry
2023-11-21T00:58:03.312666Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: HalfClosedRemote(AwaitingHeaders) }; is_closed=false; pending_send_empty=true; buffered_send_data=0; num_recv=1; num_send=0
2023-11-21T00:58:03.312684Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2023-11-21T00:58:03.312694Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: tokio_util::codec::framed_impl: attempting to decode a frame
2023-11-21T00:58:03.312717Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: tokio_util::codec::framed_impl: attempting to decode a frame
2023-11-21T00:58:03.312727Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: tokio_util::codec::framed_impl: frame decoded from buffer
2023-11-21T00:58:03.312735Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: read.bytes=9
2023-11-21T00:58:03.312748Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=9}: h2::codec::framed_read: decoding frame from 9B
2023-11-21T00:58:03.312758Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next:FramedRead::decode_frame{offset=9}: h2::codec::framed_read: frame.kind=Settings
2023-11-21T00:58:03.312771Z DEBUG server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: received frame=Settings { flags: (0x1: ACK) }
2023-11-21T00:58:03.312788Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::connection: recv SETTINGS frame=Settings { flags: (0x1: ACK) }
2023-11-21T00:58:03.312798Z DEBUG server_handshake:Connection{peer=Server}:poll: h2::proto::settings: received settings ACK; applying Settings { flags: (0x0), max_concurrent_streams: 200, initial_window_size: 1048576, max_frame_size: 16384, max_header_list_size: 16777216 }
2023-11-21T00:58:03.312808Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::recv: update_initial_window_size; new=1048576; old=1048576
2023-11-21T00:58:03.312835Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2023-11-21T00:58:03.312845Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: tokio_util::codec::framed_impl: attempting to decode a frame
2023-11-21T00:58:03.312867Z DEBUG server_handshake:Connection{peer=Server}:poll:FramedWrite::buffer{frame=WindowUpdate { stream_id: StreamId(0), size_increment: 983041 }}: h2::codec::framed_write: send frame=WindowUpdate { stream_id: StreamId(0), size_increment: 983041 }
2023-11-21T00:58:03.312879Z TRACE server_handshake:Connection{peer=Server}:poll:FramedWrite::buffer{frame=WindowUpdate { stream_id: StreamId(0), size_increment: 983041 }}: h2::frame::window_update: encoding WINDOW_UPDATE; id=StreamId(0)
2023-11-21T00:58:03.312890Z TRACE server_handshake:Connection{peer=Server}:poll:FramedWrite::buffer{frame=WindowUpdate { stream_id: StreamId(0), size_increment: 983041 }}: h2::codec::framed_write: encoded window_update rem=22
2023-11-21T00:58:03.312903Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::flow_control: inc_window; sz=983041; old=65535; new=1048576
2023-11-21T00:58:03.312930Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::prioritize: poll_complete
2023-11-21T00:58:03.312941Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::prioritize: schedule_pending_open
2023-11-21T00:58:03.312981Z TRACE server_handshake:Connection{peer=Server}:poll:FramedWrite::flush: h2::codec::framed_write: queued_data_frame=false
2023-11-21T00:58:03.313008Z TRACE server_handshake:Connection{peer=Server}:poll:FramedWrite::flush: h2::codec::framed_write: flushing buffer
2023-11-21T00:58:03.313031Z TRACE h2::proto::streams::streams: next_incoming; id=StreamId(1), state=State { inner: HalfClosedRemote(AwaitingHeaders) }
2023-11-21T00:58:03.313038Z TRACE h2::server: received incoming
2023-11-21T00:58:03.313068Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::connection: connection.state=Open
2023-11-21T00:58:03.313087Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2023-11-21T00:58:03.313106Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::prioritize: poll_complete
2023-11-21T00:58:03.313115Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::prioritize: schedule_pending_open
2023-11-21T00:58:03.313130Z TRACE server_handshake:Connection{peer=Server}:poll:FramedWrite::flush: h2::codec::framed_write: flushing buffer
2023-11-21T00:58:03.313161Z TRACE h2::proto::streams::streams: drop_stream_ref; stream=Stream { id: StreamId(1), state: State { inner: HalfClosedRemote(AwaitingHeaders) }, is_counted: true, ref_count: 2, next_pending_send: None, is_pending_send: false, send_flow: FlowControl { window_size: Window(10485760), available: Window(0) }, requested_send_capacity: 0, buffered_send_data: 0, send_task: None, pending_send: Deque { indices: None }, next_pending_send_capacity: None, is_pending_send_capacity: false, send_capacity_inc: false, next_open: None, is_pending_open: false, is_pending_push: false, next_pending_accept: None, is_pending_accept: false, recv_flow: FlowControl { window_size: Window(1048576), available: Window(1048576) }, in_flight_recv_data: 0, next_window_update: None, is_pending_window_update: false, reset_at: None, next_reset_expire: None, pending_recv: Deque { indices: None }, is_recv: false, recv_task: None, pending_push_promises: Queue { indices: None, _p: PhantomData<h2::proto::streams::stream::NextAccept> }, content_length: Omitted }
2023-11-21T00:58:03.313182Z TRACE h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: HalfClosedRemote(AwaitingHeaders) }; is_closed=false; pending_send_empty=true; buffered_send_data=0; num_recv=1; num_send=0
2023-11-21T00:58:03.313204Z TRACE h2::proto::streams::send: send_headers; frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }; init_window=10485760
2023-11-21T00:58:03.313222Z TRACE Prioritize::queue_frame{stream.id=StreamId(1)}: h2::proto::streams::prioritize: schedule_send stream.id=StreamId(1)
2023-11-21T00:58:03.313230Z TRACE Prioritize::queue_frame{stream.id=StreamId(1)}: h2::proto::streams::store: Queue::push_back
2023-11-21T00:58:03.313236Z TRACE Prioritize::queue_frame{stream.id=StreamId(1)}: h2::proto::streams::store:  -> first entry
2023-11-21T00:58:03.313245Z TRACE h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: HalfClosedRemote(Streaming) }; is_closed=false; pending_send_empty=false; buffered_send_data=0; num_recv=1; num_send=0
2023-11-21T00:58:03.313263Z TRACE reserve_capacity{stream.id=StreamId(1) requested=1 effective=1 curr=0}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::prioritize: requested=1 additional=1 buffered=0 window=10485760 conn=1048576000
2023-11-21T00:58:03.313275Z TRACE reserve_capacity{stream.id=StreamId(1) requested=1 effective=1 curr=0}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::prioritize: assigning capacity=1
2023-11-21T00:58:03.313283Z TRACE reserve_capacity{stream.id=StreamId(1) requested=1 effective=1 curr=0}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::stream:   assigned capacity to stream; available=1; buffered=0; id=StreamId(1); max_buffer_size=409600 prev=0
2023-11-21T00:58:03.313292Z TRACE reserve_capacity{stream.id=StreamId(1) requested=1 effective=1 curr=0}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::stream:   notifying task
2023-11-21T00:58:03.313299Z TRACE reserve_capacity{stream.id=StreamId(1) requested=1 effective=1 curr=0}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::prioritize: available=1 requested=1 buffered=0 has_unavailable=true
2023-11-21T00:58:03.313322Z TRACE send_data{sz=13 requested=1}: h2::proto::streams::prioritize: buffered=13
2023-11-21T00:58:03.313333Z TRACE send_data{sz=13 requested=1}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::prioritize: requested=13 additional=12 buffered=13 window=10485760 conn=1048575999
2023-11-21T00:58:03.313343Z TRACE send_data{sz=13 requested=1}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::prioritize: assigning capacity=12
2023-11-21T00:58:03.313350Z TRACE send_data{sz=13 requested=1}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::stream:   assigned capacity to stream; available=13; buffered=13; id=StreamId(1); max_buffer_size=409600 prev=0
2023-11-21T00:58:03.313358Z TRACE send_data{sz=13 requested=1}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::prioritize: available=13 requested=13 buffered=13 has_unavailable=true
2023-11-21T00:58:03.313367Z TRACE send_data{sz=13 requested=1}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::store: Queue::push_back
2023-11-21T00:58:03.313374Z TRACE send_data{sz=13 requested=1}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::store:  -> already queued
2023-11-21T00:58:03.313396Z TRACE send_data{sz=13 requested=1}: h2::proto::streams::state: send_close: HalfClosedRemote => Closed
2023-11-21T00:58:03.313411Z TRACE send_data{sz=13 requested=1}: h2::proto::streams::prioritize: available=13 buffered=13
2023-11-21T00:58:03.313422Z TRACE send_data{sz=13 requested=1}:Prioritize::queue_frame{stream.id=StreamId(1)}: h2::proto::streams::prioritize: schedule_send stream.id=StreamId(1)
2023-11-21T00:58:03.313433Z TRACE send_data{sz=13 requested=1}:Prioritize::queue_frame{stream.id=StreamId(1)}: h2::proto::streams::store: Queue::push_back
2023-11-21T00:58:03.313440Z TRACE send_data{sz=13 requested=1}:Prioritize::queue_frame{stream.id=StreamId(1)}: h2::proto::streams::store:  -> already queued
2023-11-21T00:58:03.313452Z TRACE h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: Closed(EndStream) }; is_closed=false; pending_send_empty=false; buffered_send_data=13; num_recv=1; num_send=0
2023-11-21T00:58:03.313458Z TRACE h2::proto::streams::streams: drop_stream_ref; stream=Stream { id: StreamId(1), state: State { inner: Closed(EndStream) }, is_counted: true, ref_count: 2, next_pending_send: None, is_pending_send: true, send_flow: FlowControl { window_size: Window(10485760), available: Window(13) }, requested_send_capacity: 13, buffered_send_data: 13, send_task: Some(Waker { data: 0x146104580, vtable: 0x1006827d8 }), pending_send: Deque { indices: Some(Indices { head: 0, tail: 1 }) }, next_pending_send_capacity: None, is_pending_send_capacity: false, send_capacity_inc: true, next_open: None, is_pending_open: false, is_pending_push: false, next_pending_accept: None, is_pending_accept: false, recv_flow: FlowControl { window_size: Window(1048576), available: Window(1048576) }, in_flight_recv_data: 0, next_window_update: None, is_pending_window_update: false, reset_at: None, next_reset_expire: None, pending_recv: Deque { indices: None }, is_recv: false, recv_task: None, pending_push_promises: Queue { indices: None, _p: PhantomData<h2::proto::streams::stream::NextAccept> }, content_length: Omitted }
2023-11-21T00:58:03.313468Z TRACE h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: Closed(EndStream) }; is_closed=false; pending_send_empty=false; buffered_send_data=13; num_recv=1; num_send=0
2023-11-21T00:58:03.313473Z TRACE h2::proto::streams::streams: drop_stream_ref; stream=Stream { id: StreamId(1), state: State { inner: Closed(EndStream) }, is_counted: true, ref_count: 1, next_pending_send: None, is_pending_send: true, send_flow: FlowControl { window_size: Window(10485760), available: Window(13) }, requested_send_capacity: 13, buffered_send_data: 13, send_task: Some(Waker { data: 0x146104580, vtable: 0x1006827d8 }), pending_send: Deque { indices: Some(Indices { head: 0, tail: 1 }) }, next_pending_send_capacity: None, is_pending_send_capacity: false, send_capacity_inc: true, next_open: None, is_pending_open: false, is_pending_push: false, next_pending_accept: None, is_pending_accept: false, recv_flow: FlowControl { window_size: Window(1048576), available: Window(1048576) }, in_flight_recv_data: 0, next_window_update: None, is_pending_window_update: false, reset_at: None, next_reset_expire: None, pending_recv: Deque { indices: None }, is_recv: false, recv_task: None, pending_push_promises: Queue { indices: None, _p: PhantomData<h2::proto::streams::stream::NextAccept> }, content_length: Omitted }
2023-11-21T00:58:03.313482Z TRACE h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: Closed(EndStream) }; is_closed=false; pending_send_empty=false; buffered_send_data=13; num_recv=1; num_send=0
2023-11-21T00:58:03.313495Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::connection: connection.state=Open
2023-11-21T00:58:03.313512Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2023-11-21T00:58:03.313530Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::prioritize: poll_complete
2023-11-21T00:58:03.313538Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::prioritize: schedule_pending_open
2023-11-21T00:58:03.313554Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}: h2::proto::streams::prioritize: is_pending_reset=false
2023-11-21T00:58:03.313569Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}: h2::proto::streams::prioritize: pop_frame; frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
2023-11-21T00:58:03.313579Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}: h2::proto::streams::store: Queue::push_back
2023-11-21T00:58:03.313589Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}: h2::proto::streams::store:  -> first entry
2023-11-21T00:58:03.313598Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}: h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: Closed(EndStream) }; is_closed=false; pending_send_empty=false; buffered_send_data=13; num_recv=1; num_send=0
2023-11-21T00:58:03.313614Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::prioritize: writing frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
2023-11-21T00:58:03.313628Z DEBUG server_handshake:Connection{peer=Server}:poll:FramedWrite::buffer{frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }}: h2::codec::framed_write: send frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
2023-11-21T00:58:03.313666Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::prioritize: schedule_pending_open
2023-11-21T00:58:03.313681Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}: h2::proto::streams::prioritize: is_pending_reset=false
2023-11-21T00:58:03.313692Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}: h2::proto::streams::prioritize: data frame sz=13 eos=true window=13 available=13 requested=13 buffered=13
2023-11-21T00:58:03.313706Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}: h2::proto::streams::prioritize: sending data frame len=13
2023-11-21T00:58:03.313720Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}:updating stream flow: h2::proto::streams::flow_control: send_data; sz=13; window=10485760; available=13
2023-11-21T00:58:03.313731Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}:updating stream flow: h2::proto::streams::stream:   sent stream data; available=0; buffered=0; id=StreamId(1); max_buffer_size=409600 prev=0
2023-11-21T00:58:03.313747Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}:updating connection flow: h2::proto::streams::flow_control: send_data; sz=13; window=1048576000; available=1048576000
2023-11-21T00:58:03.313760Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}: h2::proto::streams::prioritize: pop_frame; frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
2023-11-21T00:58:03.313772Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}: h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: Closed(EndStream) }; is_closed=true; pending_send_empty=true; buffered_send_data=0; num_recv=1; num_send=0
2023-11-21T00:58:03.313784Z TRACE server_handshake:Connection{peer=Server}:poll:pop_frame:popped{stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }}: h2::proto::streams::counts: dec_num_streams; stream=StreamId(1)
2023-11-21T00:58:03.313805Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::prioritize: writing frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
2023-11-21T00:58:03.313818Z DEBUG server_handshake:Connection{peer=Server}:poll:FramedWrite::buffer{frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }}: h2::codec::framed_write: send frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
2023-11-21T00:58:03.313836Z TRACE server_handshake:Connection{peer=Server}:poll:try_reclaim_frame: h2::proto::streams::prioritize: reclaimed frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) } sz=0
2023-11-21T00:58:03.313851Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::streams::prioritize: schedule_pending_open
2023-11-21T00:58:03.313865Z TRACE server_handshake:Connection{peer=Server}:poll:FramedWrite::flush: h2::codec::framed_write: queued_data_frame=false
2023-11-21T00:58:03.313885Z TRACE server_handshake:Connection{peer=Server}:poll:FramedWrite::flush: h2::codec::framed_write: flushing buffer
2023-11-21T00:58:03.314258Z TRACE server_handshake:Connection{peer=Server}:poll: h2::proto::connection: connection.state=Open
2023-11-21T00:58:03.314293Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: h2::codec::framed_read: poll
2023-11-21T00:58:03.314311Z TRACE server_handshake:Connection{peer=Server}:poll:FramedRead::poll_next: tokio_util::codec::framed_impl: Got an error, going to errored state
2023-11-21T00:58:03.314328Z DEBUG server_handshake:Connection{peer=Server}:poll: h2::proto::connection: Connection::poll; IO error error=UnexpectedEof
2023-11-21T00:58:03.314345Z TRACE h2::proto::streams::streams: Streams::recv_eof
Error serving connection: hyper::Error(Io, Kind(UnexpectedEof))

@cablehead
Author

On my first attempt with Safari (macOS), I got Error accepting TLS connection: Custom { kind: UnexpectedEof, error: "tls handshake eof" } (so a slightly different error) three times on the first request (Safari would be requesting additional assets), but subsequent refreshes don't log that error. I tried clearing the cache for localhost and still can't get it to log again. Switching to private mode, I was able to get it to log again on the first request, but now even re-entering private mode, refreshes are served cleanly.

@seanmonstar
Member

I tried searching h2 for UnexpectedEof, and the only place it exists is unrelated. tokio-util also doesn't return it. It could be coming from rustls (here, when h2 checks whether there are any more frames, it will eventually call poll_read on the TLS stream).

Does this happen with native-tls?

It could also be that curl in this case is terminating the connection differently than Safari.

@Gelbpunkt

I'm seeing pretty much the same thing with rustls + hyper v1, but I'm getting hyper::Error(Shutdown, Os { code: 107, kind: NotConnected, message: "Transport endpoint is not connected" }) instead of UnexpectedEof. curling the server always produces this, while using Firefox to access it works fine and does not produce any errors.

@Gelbpunkt

Gelbpunkt commented Nov 21, 2023

Here are the tracing logs for my issue: https://gist.github.com/Gelbpunkt/b0fb2063d198cbae23b93b678065f3c1
I can reproduce UnexpectedEof on HTTP/2; "Transport endpoint is not connected" happens on HTTP/1.1.

@seanmonstar seanmonstar changed the title UnexpectedEof Error on HTTP/2 Requests with Rustls and Hyper v1 UnexpectedEof Error on HTTP/2 Requests with Rustls Nov 21, 2023
@seanmonstar
Member

Just to reiterate, it was noticed earlier in the thread that this seems to occur with hyper v0.14 as well (so I've adjusted the title).

Is this something new with rustls? Or with curl?

@kayabaNerve

Apologies if this is just noise, but when using hyper 0.14 + hyper-rustls on a per-connection basis (so not with the connection-pooling Client API), I would have sockets that worked a few ms prior suddenly error when making a new request over HTTP/1.

Conn::read_head
received 0 bytes
parse eof
State::close_read()
parse error (connection closed before message completed) with 0 bytes

was a log snippet I saved.

I was unsure whether the fault was mine (it worked fine with the Client API, though I did check my connection-driving code re: async and Drop), hyper's, or the server I was connecting to. Now, seeing this issue, I feel this anecdote may be relevant.

@cablehead
Author

> I tried searching h2 for UnexpectedEof, and the only place it exists is unrelated. tokio-util also doesn't return it. It could be coming from rustls (here, when h2 checks whether there are any more frames, it will eventually call poll_read on the TLS stream).

Interesting.

> Does this happen with native-tls?

I'll knock up a sample that uses native-tls to see. It might just take a couple of days.

@cablehead
Author

"Transport endopint is not connected" happens on HTTP/1.1

@Gelbpunkt I haven't come across any issues with HTTP1 + rustls yet. Do you have a small repro?

@cablehead
Author

cablehead commented Nov 24, 2023

This is a quick sample with hyper v1 + native-tls:

https://github.com/cablehead/hyper-with-tls/tree/main/hyper-1.0.1-native-tls

(I've merged the above v1 + rustls and v0.14 + rustls samples under this same repo and updated the links to them above.)

With the native-tls sample, the connection closes cleanly after:

curl -v -k --http2-prior-knowledge https://localhost:3000

@seanmonstar I'm thinking my next best step is to close this issue and create a new one against rustls?

@seanmonstar
Member

It'd be reasonable to assume the evidence points there. If, after reporting there, new evidence says it's in hyper, we can dig further over here!

@cablehead cablehead closed this as not planned Won't fix, can't repro, duplicate, stale Nov 24, 2023
kayabaNerve added a commit to serai-dex/serai that referenced this issue Nov 26, 2023
This removes all re-attempts present in monero-serai's RPC and is an attempt to
narrow down the sporadic failures.

Inspired by hyperium/hyper#3427
kayabaNerve added a commit to serai-dex/serai that referenced this issue Nov 27, 2023
This removes all re-attempts present in monero-serai's RPC and is an attempt to
narrow down the sporadic failures.

Inspired by hyperium/hyper#3427
@cablehead
Author

Returning with new information :)

I raised the issue with rustls here: rustls/rustls#1635

@jsha added some good context:

> Some quick context: rustls returns UnexpectedEof from read if a connection is closed without the peer sending the close_notify TLS message. This is to prevent attackers from truncating connections in the middle of a message and changing their semantics.
>
> Higher level protocols like HTTP/1 and HTTP/2 have their own framing that knows the length of a message (Content-Length, or HTTP/2 frames). So both servers and clients often don't worry so much about sending TLS close_notify before shutting down a TCP connection, which is fine. So long as the protocol is between messages there's no risk of a truncation attack.

And he's since opened a PR to add this documentation: https://github.com/rustls/rustls/pull/1637/files#diff-015212fe8ccad176046e9d830f1db1e9ddde16d0e922f3817f72c07e587c9e17R36

If I'm understanding things correctly, it seems like the bug ultimately lies with curl. TLS clients are expected to send a close_notify. Practically though, it seems likely that, in addition to curl, many HTTP clients may just rely on the higher level HTTP/2 message framing.
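(For reference, a client built on tokio-rustls would normally get this by shutting the stream down before dropping it; a minimal sketch, assuming the tokio-rustls client types:)

```rust
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;
use tokio_rustls::client::TlsStream;

// Hypothetical helper: shutdown() makes tokio-rustls send the TLS close_notify
// alert before the TCP FIN, so the server's final read ends cleanly instead of
// with UnexpectedEof.
async fn close_politely(mut tls: TlsStream<TcpStream>) -> std::io::Result<()> {
    tls.shutdown().await
}
```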

@djc suggests:

> the best fix might be inside hyper, where we could ignore UnexpectedEof errors only for requests/responses that are deemed complete

Which may be the most pragmatic thing. What are your thoughts @seanmonstar? If you think it's reasonable, I could take a shot at a PR for it. And in that case, any easy pointers you could throw my way would be appreciated :)

Given we don't see it with HTTP/1.1, I wonder: could hyper already be doing this for HTTP/1.1 when all outstanding requests are complete? 🤔
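In the meantime, a user-side workaround I'm considering is to walk the error's source chain in the serve loop and only log errors that aren't a bare UnexpectedEof (an untested sketch; it assumes the underlying io::Error is reachable via source()):

```rust
use std::error::Error as _;

// True if the error chain bottoms out in io::ErrorKind::UnexpectedEof, which
// (per the rustls discussion above) is what a missing close_notify surfaces as.
fn is_unexpected_eof(err: &hyper::Error) -> bool {
    let mut source: Option<&(dyn std::error::Error + 'static)> = err.source();
    while let Some(e) = source {
        if let Some(io) = e.downcast_ref::<std::io::Error>() {
            return io.kind() == std::io::ErrorKind::UnexpectedEof;
        }
        source = e.source();
    }
    false
}
```

The spawn in the sample above would then only print the error when !is_unexpected_eof(&err).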

@cablehead cablehead reopened this Nov 29, 2023
kayabaNerve added a commit to serai-dex/serai that referenced this issue Nov 29, 2023
This removes all re-attempts present in monero-serai's RPC and is an attempt to
narrow down the sporadic failures.

Inspired by hyperium/hyper#3427