
Hangs in OSX reading certain sized responses unless specifying Connection::close #26

Closed
anowell opened this issue Nov 23, 2016 · 47 comments

Comments

@anowell
Contributor

anowell commented Nov 23, 2016

I'm making an authenticated GET request that succeeds, but any attempt to read_to_string or read_to_end hangs indefinitely. This only happens when all of the following conditions hold:

  • OSX (works fine on linux)
  • responses in the range of 3835-16120 bytes (~260 bytes short of the 4K/16K boundaries).
  • over SSL (works fine if I make the same request through a local http nginx proxy)

This is the debug output of an example Response object I see before it hangs during reading:

Response { inner: Response { status: Ok, headers: Headers { Date: Wed, 23 Nov 2016 03:06:35 GMT, X-Frame-Options: DENY, Strict-Transport-Security: max-age=0; includeSubDomains; preload, X-Data-Type: directory, Connection: keep-alive, Content-Length: 7452, Content-Type: application/json; charset=utf-8, }, version: Http11, url: "https://api.algorithmia.com/v1/connector/data/anowell/foo", status_raw: RawStatus(200, "OK"), message: Http11Message { is_proxied: false, method: None, stream: Wrapper { obj: Some(Reading(SizedReader(remaining=7452))) } } } }

And yet, I haven't managed to repro this with a simple test script to just reqwest::get some public URL in that size range, so I'm still missing some contributing factor. I've also tried an old build of my client which used hyper+openssl, and it had no issue.

Things on my mind to try still:

  • Read in smaller chunks to see if perhaps read_to_end is waiting on the wrong number of bytes
  • Put together the minimal repro using an API key for a shareable test account
  • Try a bit harder to find a repro outside of our API
@seanmonstar
Owner

Is this after making several requests from the same Client? I'd possibly suspect a connection pooling error, but you say the Response comes back with a status and headers, so the connection had to have been valid enough.

It's not impossible there is an error in rust-security-framework, but I'm hesitant to claim that since it could very well be my own buggy code.

@anowell
Contributor Author

anowell commented Nov 29, 2016

It was a single request, and yeah, Response always has the status and headers that I'd expect. I'm also still hesitant to claim a specific cause since I haven't even managed a repro outside our API or client, but I do lean toward thinking it involves some interaction with security framework given that it's OSX-specific and didn't happen on a previous hyper+openssl build.

I just need to borrow my wife's MacBook again and try to piece together a more minimal repro - got derailed by moving over the holiday - maybe I'll get to that tonight.

@anowell
Contributor Author

anowell commented Nov 30, 2016

I managed to create a fairly minimal repro with a public file:

    use std::io::Read;

    let mut res = reqwest::get("https://s3.amazonaws.com/algorithmia-assets/8125.txt").expect("request failed");
    println!("{:#?}", &res);
    let mut body = String::new();
    res.read_to_string(&mut body).expect("read_to_string failed");
    println!("Body len: {}", body.len());

Prints what appears to be a successful Response object, then hangs in read_to_string (macOS 10.12.1, using reqwest from master).

[updated URL since Dropbox public URLs don't work the same anymore]

@seanmonstar
Owner

Great work finding a reproducible case! I don't have a Mac at all, so I can't personally debug this that well. It'd be super useful if you could run either dtruss (for syscalls) or gdb (to see where it actually stops).

@anowell
Contributor Author

anowell commented Nov 30, 2016

haha. the joy of supporting platforms we don't actually have access to...

Turns out the repro behaves a bit differently. read_to_string suspiciously and consistently spends almost exactly 60 seconds (give or take a few milliseconds), but then actually returns the correct data, so I suspect there is some I/O timeout at play (which in my original case was possibly much longer than 60 seconds).

dtruss is dumping a lot of "invalid kernel access in action #12 at DIF offset 92" errors, but I have a bit of ramping up to do on macOS debugging before I can make sense of it. I'm still digging.

@anowell
Contributor Author

anowell commented Nov 30, 2016

The 4k-16k aspect of the issue might just be specific to the memory layout of the client I was working with, as I see slight variations with different repros. The repro I've spent the most time digging into consistently hangs in the call to read_to_string for 60 seconds before completing successfully. If I interrupt it with lldb while it's hanging, I consistently see this backtrace:

(lldb) bt
error: need to add support for DW_TAG_base_type '()' encoded with DW_ATE = 0x7, bit_size = 0
* thread #1: tid = 0xde00, 0x00007fffb6f8b2de libsystem_kernel.dylib`read + 10, stop reason = signal SIGSTOP
  * frame #0: 0x00007fffb6f8b2de libsystem_kernel.dylib`read + 10
    frame #1: 0x00000001000e7f46 test-reqwest`_$LT$std..net..tcp..TcpStream$u20$as$u20$std..io..Read$GT$::read::h63aac5a69abb22a3 + 22
    frame #2: 0x0000000100024f0c test-reqwest`hyper::net::{{impl}}::read(self=0x0000000101a17090, buf=(data_ptr = "\x17\x03\x03\"?_\x90GTy\x92?HTTP/1.1 200 OK\r\nServer: nginx\r\nDate: Wed, 30 Nov 2016 10:27:02 GMT\r\nContent-Type: text/plain; charset=utf-8\r\nContent-Length: 8125\r\nConnection: keep-alive\r\nreferrer-policy: no-referrer\r\nx-robots-tag: noindex, nofollow, noimageindex\r\ncontent-disposition: inline; filename=\"6000.txt\"; filename*=UTF-8''6000.txt\r\nset-cookie: uc_session=RBAHoQhIYGiasHZOKDHizfMqcEDODaocU0d70gj4IIL2pHkO5S8TwMf3meluM2Jd; Domain=dropboxusercontent.com; httponly; Path=/; secure\r\naccept-ranges: bytes\r\ncontent-security-policy: referrer no-referrer\r\netag: 311n\r\nx-dropbox-request-id: 3369f10d026ac54f643d98506e10052f\r\npragma: public\r\ncache-control: max-age=0\r\nx-content-security-policy: referrer no-referrer\r\nx-webkit-csp: referrer no-referrer\r\nX-Server-Response-Time: 1451\r\n\r\n2ycmyF2HKuDsKXjxW5uLtOuydBJGarw+8KMw6GS4EtQ7fIJmwvk28tb5yoBSFHdP\npjtuKLabHsFsaznkvPZ/7aOVmPHiLm6YW318JHgbplH6Psue2DgNpCACF5mmJw6K\nYKopNqbBneBIoFby76mFPqUptlIJqvUvKpx33i8Am8wVvnrPnHgRw6lh0A7vX7Tj\nwXTRq6BrkVWQgqWneqdhM+MWqjEt+ijZkmAS0Dj1Qgi9OxJ4aJGbEUSXqynrFZcK"..., length = 5)) + 76 at net.rs:295
    frame #3: 0x000000010003de28 test-reqwest`security_framework::secure_transport::read_func::{{closure}}<hyper::net::HttpStream> + 104 at secure_transport.rs:752
    frame #4: 0x000000010003f304 test-reqwest`core::ops::FnOnce::call_once::ha3b98f2e5c30eb3a + 20
    frame #5: 0x000000010003909a test-reqwest`std::panic::{{impl}}::call_once<core::result::Result<usize, std::io::error::Error>,closure>(self=AssertUnwindSafe<closure> @ 0x00007fff5fbfdb38, _args=<unavailable>) + 106 at panic.rs:255
    frame #6: 0x0000000100015efc test-reqwest`std::panicking::try::do_call<std::panic::AssertUnwindSafe<closure>,core::result::Result<usize, std::io::error::Error>>(data="H?_?) + 284 at panicking.rs:356
    frame #7: 0x00000001000ede2b test-reqwest`__rust_maybe_catch_panic + 27
    frame #8: 0x000000010001576d test-reqwest`std::panicking::try<core::result::Result<usize, std::io::error::Error>,std::panic::AssertUnwindSafe<closure>>(f=AssertUnwindSafe<closure> @ 0x00007fff5fbfde28) + 333 at panicking.rs:332
    frame #9: 0x0000000100015487 test-reqwest`std::panic::catch_unwind<std::panic::AssertUnwindSafe<closure>,core::result::Result<usize, std::io::error::Error>>(f=AssertUnwindSafe<closure> @ 0x00007fff5fbfdea8) + 103 at panic.rs:311
    frame #10: 0x000000010000c23a test-reqwest`security_framework::secure_transport::read_func<hyper::net::HttpStream>(connection=0x0000000101a17090, data=0x0000000102002400, data_length=0x00007fff5fbfe0b0) + 298 at secure_transport.rs:752
    frame #11: 0x00007fffa80ba86f Security`SSLRecordReadInternal + 69
    frame #12: 0x00007fffa804c357 Security`SSLReadRecord + 20
    frame #13: 0x00007fffa804fca6 Security`SSLRead + 380
    frame #14: 0x0000000100038957 test-reqwest`security_framework::secure_transport::{{impl}}::read<hyper::net::HttpStream>(self=0x0000000101a280b8, buf=(data_ptr = "oiqZnlrBq8hUOi4X18L73l603MN5t1oRIN\nu85ZWhi/QQMwbfaI1nNk1REODX1IDQOdBlUPphhmvmeOv1YdVZkGOnp1h4k5X6bl\nNW8ECkKmGj3f0I1r4Bc8m7oKD++QyAulgMaTJBz6r3kby/KuntnMGnY4/BxwGdLY\neIBG0OkHSAGHZYdTAdMLZbMCmIq3a+Bc5/Ri4t4YsKWJJBZ/oT/jXGZ7XYOQNtEu\neJ6jgopwstlCMK/vdA4GM5AlMQ/th7LAcvJZQZUFikPJ92/zwlD8lDMH/uyud4lX\n9skiqv8XAj1cvVchD5N8pDwOPrdzk8vvpq3iw0PP6bq1/8nU8mcnSeNO/2Sx9A7n\nEw3zUTKGwzBCHA50HHAwE9iTBamMtUzjef7wJhyoPUeumCFWqA6ninvEneEBQmCV\ntEXc1mOu5QHUOWhY/dPTYtpBpWUbS4mIMZr8IDfPkh2KeDnfVFxPVPZiHjcPjd5R\ngx2eGFXWJJBIi0vVRXpIh5uKgeyEde3+aBTYKvxjB00jjj7Qm/yKMxN7JMlVgCqP\nBTRs1gkiEsPbgIFDI4ZYY7tiHOKtYgkH1u4V1ojPpF00qzGfXv53Ot+UAIToFM8a\nygOAdPjfRhcSL045k1nLHcT3TMxSZi8q9t+7TLvGkJEFDwGTA9wb2gV3LK95NUmA\n\ndRG8BFGXoT6/Ab+OgqZ9eZP7Na0BoGMlQ/ngpm+p4y7ZAYQ7igG85WMqssEORA1Z\n5rbpUFvRA1tPNTdhWGwRnugd+dJH5GRvObva1f/iIBzKfIPl/3W2Zxwss1yAxsxB\nX96mTFrMBP0H61Zt4IJ2sYEKbBLWlIGhjBbnFAyfg/NAIlnRnIjxo4JejsRijvdx\nb5TRtasUfilAebQenwzeqCZ3fZqZw9cgyLEh5hoKggD2ROVxIsr4gKTXhYf3G6QV\n/4lUjCMb+QiHDYBAacKvxpCIGNYTYU2XFKn/eqpvI3qBiwz5tg5GW0a91S3Hqxr0\nRqv7AXqftZ0DT"..., length = 4096)) + 135 at secure_transport.rs:930
    frame #15: 0x000000010002c76c test-reqwest`native_tls::imp::{{impl}}::read<hyper::net::HttpStream>(self=0x0000000101a280b8, buf=(data_ptr = "oiqZnlrBq8hUOi4X18L73l603MN5t1oRIN\nu85ZWhi/QQMwbfaI1nNk1REODX1IDQOdBlUPphhmvmeOv1YdVZkGOnp1h4k5X6bl\nNW8ECkKmGj3f0I1r4Bc8m7oKD++QyAulgMaTJBz6r3kby/KuntnMGnY4/BxwGdLY\neIBG0OkHSAGHZYdTAdMLZbMCmIq3a+Bc5/Ri4t4YsKWJJBZ/oT/jXGZ7XYOQNtEu\neJ6jgopwstlCMK/vdA4GM5AlMQ/th7LAcvJZQZUFikPJ92/zwlD8lDMH/uyud4lX\n9skiqv8XAj1cvVchD5N8pDwOPrdzk8vvpq3iw0PP6bq1/8nU8mcnSeNO/2Sx9A7n\nEw3zUTKGwzBCHA50HHAwE9iTBamMtUzjef7wJhyoPUeumCFWqA6ninvEneEBQmCV\ntEXc1mOu5QHUOWhY/dPTYtpBpWUbS4mIMZr8IDfPkh2KeDnfVFxPVPZiHjcPjd5R\ngx2eGFXWJJBIi0vVRXpIh5uKgeyEde3+aBTYKvxjB00jjj7Qm/yKMxN7JMlVgCqP\nBTRs1gkiEsPbgIFDI4ZYY7tiHOKtYgkH1u4V1ojPpF00qzGfXv53Ot+UAIToFM8a\nygOAdPjfRhcSL045k1nLHcT3TMxSZi8q9t+7TLvGkJEFDwGTA9wb2gV3LK95NUmA\n\ndRG8BFGXoT6/Ab+OgqZ9eZP7Na0BoGMlQ/ngpm+p4y7ZAYQ7igG85WMqssEORA1Z\n5rbpUFvRA1tPNTdhWGwRnugd+dJH5GRvObva1f/iIBzKfIPl/3W2Zxwss1yAxsxB\nX96mTFrMBP0H61Zt4IJ2sYEKbBLWlIGhjBbnFAyfg/NAIlnRnIjxo4JejsRijvdx\nb5TRtasUfilAebQenwzeqCZ3fZqZw9cgyLEh5hoKggD2ROVxIsr4gKTXhYf3G6QV\n/4lUjCMb+QiHDYBAacKvxpCIGNYTYU2XFKn/eqpvI3qBiwz5tg5GW0a91S3Hqxr0\nRqv7AXqftZ0DT"..., length = 4096)) + 76 at security_framework.rs:238
    frame #16: 0x00000001000283ac test-reqwest`native_tls::{{impl}}::read<hyper::net::HttpStream>(self=0x0000000101a280b8, buf=(data_ptr = "oiqZnlrBq8hUOi4X18L73l603MN5t1oRIN\nu85ZWhi/QQMwbfaI1nNk1REODX1IDQOdBlUPphhmvmeOv1YdVZkGOnp1h4k5X6bl\nNW8ECkKmGj3f0I1r4Bc8m7oKD++QyAulgMaTJBz6r3kby/KuntnMGnY4/BxwGdLY\neIBG0OkHSAGHZYdTAdMLZbMCmIq3a+Bc5/Ri4t4YsKWJJBZ/oT/jXGZ7XYOQNtEu\neJ6jgopwstlCMK/vdA4GM5AlMQ/th7LAcvJZQZUFikPJ92/zwlD8lDMH/uyud4lX\n9skiqv8XAj1cvVchD5N8pDwOPrdzk8vvpq3iw0PP6bq1/8nU8mcnSeNO/2Sx9A7n\nEw3zUTKGwzBCHA50HHAwE9iTBamMtUzjef7wJhyoPUeumCFWqA6ninvEneEBQmCV\ntEXc1mOu5QHUOWhY/dPTYtpBpWUbS4mIMZr8IDfPkh2KeDnfVFxPVPZiHjcPjd5R\ngx2eGFXWJJBIi0vVRXpIh5uKgeyEde3+aBTYKvxjB00jjj7Qm/yKMxN7JMlVgCqP\nBTRs1gkiEsPbgIFDI4ZYY7tiHOKtYgkH1u4V1ojPpF00qzGfXv53Ot+UAIToFM8a\nygOAdPjfRhcSL045k1nLHcT3TMxSZi8q9t+7TLvGkJEFDwGTA9wb2gV3LK95NUmA\n\ndRG8BFGXoT6/Ab+OgqZ9eZP7Na0BoGMlQ/ngpm+p4y7ZAYQ7igG85WMqssEORA1Z\n5rbpUFvRA1tPNTdhWGwRnugd+dJH5GRvObva1f/iIBzKfIPl/3W2Zxwss1yAxsxB\nX96mTFrMBP0H61Zt4IJ2sYEKbBLWlIGhjBbnFAyfg/NAIlnRnIjxo4JejsRijvdx\nb5TRtasUfilAebQenwzeqCZ3fZqZw9cgyLEh5hoKggD2ROVxIsr4gKTXhYf3G6QV\n/4lUjCMb+QiHDYBAacKvxpCIGNYTYU2XFKn/eqpvI3qBiwz5tg5GW0a91S3Hqxr0\nRqv7AXqftZ0DT"..., length = 4096)) + 76 at lib.rs:456
    frame #17: 0x000000010003d87c test-reqwest`reqwest::tls::{{impl}}::read(self=0x0000000101a280b8, buf=(data_ptr = "oiqZnlrBq8hUOi4X18L73l603MN5t1oRIN\nu85ZWhi/QQMwbfaI1nNk1REODX1IDQOdBlUPphhmvmeOv1YdVZkGOnp1h4k5X6bl\nNW8ECkKmGj3f0I1r4Bc8m7oKD++QyAulgMaTJBz6r3kby/KuntnMGnY4/BxwGdLY\neIBG0OkHSAGHZYdTAdMLZbMCmIq3a+Bc5/Ri4t4YsKWJJBZ/oT/jXGZ7XYOQNtEu\neJ6jgopwstlCMK/vdA4GM5AlMQ/th7LAcvJZQZUFikPJ92/zwlD8lDMH/uyud4lX\n9skiqv8XAj1cvVchD5N8pDwOPrdzk8vvpq3iw0PP6bq1/8nU8mcnSeNO/2Sx9A7n\nEw3zUTKGwzBCHA50HHAwE9iTBamMtUzjef7wJhyoPUeumCFWqA6ninvEneEBQmCV\ntEXc1mOu5QHUOWhY/dPTYtpBpWUbS4mIMZr8IDfPkh2KeDnfVFxPVPZiHjcPjd5R\ngx2eGFXWJJBIi0vVRXpIh5uKgeyEde3+aBTYKvxjB00jjj7Qm/yKMxN7JMlVgCqP\nBTRs1gkiEsPbgIFDI4ZYY7tiHOKtYgkH1u4V1ojPpF00qzGfXv53Ot+UAIToFM8a\nygOAdPjfRhcSL045k1nLHcT3TMxSZi8q9t+7TLvGkJEFDwGTA9wb2gV3LK95NUmA\n\ndRG8BFGXoT6/Ab+OgqZ9eZP7Na0BoGMlQ/ngpm+p4y7ZAYQ7igG85WMqssEORA1Z\n5rbpUFvRA1tPNTdhWGwRnugd+dJH5GRvObva1f/iIBzKfIPl/3W2Zxwss1yAxsxB\nX96mTFrMBP0H61Zt4IJ2sYEKbBLWlIGhjBbnFAyfg/NAIlnRnIjxo4JejsRijvdx\nb5TRtasUfilAebQenwzeqCZ3fZqZw9cgyLEh5hoKggD2ROVxIsr4gKTXhYf3G6QV\n/4lUjCMb+QiHDYBAacKvxpCIGNYTYU2XFKn/eqpvI3qBiwz5tg5GW0a91S3Hqxr0\nRqv7AXqftZ0DT"..., length = 4096)) + 76 at tls.rs:49
    frame #18: 0x00000001000290c7 test-reqwest`hyper::net::{{impl}}::read<reqwest::tls::TlsStream>(self=0x0000000101a280b0, buf=(data_ptr = "oiqZnlrBq8hUOi4X18L73l603MN5t1oRIN\nu85ZWhi/QQMwbfaI1nNk1REODX1IDQOdBlUPphhmvmeOv1YdVZkGOnp1h4k5X6bl\nNW8ECkKmGj3f0I1r4Bc8m7oKD++QyAulgMaTJBz6r3kby/KuntnMGnY4/BxwGdLY\neIBG0OkHSAGHZYdTAdMLZbMCmIq3a+Bc5/Ri4t4YsKWJJBZ/oT/jXGZ7XYOQNtEu\neJ6jgopwstlCMK/vdA4GM5AlMQ/th7LAcvJZQZUFikPJ92/zwlD8lDMH/uyud4lX\n9skiqv8XAj1cvVchD5N8pDwOPrdzk8vvpq3iw0PP6bq1/8nU8mcnSeNO/2Sx9A7n\nEw3zUTKGwzBCHA50HHAwE9iTBamMtUzjef7wJhyoPUeumCFWqA6ninvEneEBQmCV\ntEXc1mOu5QHUOWhY/dPTYtpBpWUbS4mIMZr8IDfPkh2KeDnfVFxPVPZiHjcPjd5R\ngx2eGFXWJJBIi0vVRXpIh5uKgeyEde3+aBTYKvxjB00jjj7Qm/yKMxN7JMlVgCqP\nBTRs1gkiEsPbgIFDI4ZYY7tiHOKtYgkH1u4V1ojPpF00qzGfXv53Ot+UAIToFM8a\nygOAdPjfRhcSL045k1nLHcT3TMxSZi8q9t+7TLvGkJEFDwGTA9wb2gV3LK95NUmA\n\ndRG8BFGXoT6/Ab+OgqZ9eZP7Na0BoGMlQ/ngpm+p4y7ZAYQ7igG85WMqssEORA1Z\n5rbpUFvRA1tPNTdhWGwRnugd+dJH5GRvObva1f/iIBzKfIPl/3W2Zxwss1yAxsxB\nX96mTFrMBP0H61Zt4IJ2sYEKbBLWlIGhjBbnFAyfg/NAIlnRnIjxo4JejsRijvdx\nb5TRtasUfilAebQenwzeqCZ3fZqZw9cgyLEh5hoKggD2ROVxIsr4gKTXhYf3G6QV\n/4lUjCMb+QiHDYBAacKvxpCIGNYTYU2XFKn/eqpvI3qBiwz5tg5GW0a91S3Hqxr0\nRqv7AXqftZ0DT"..., length = 4096)) + 151 at net.rs:474
    frame #19: 0x0000000100031c0a test-reqwest`hyper::client::pool::{{impl}}::read<hyper::net::HttpsStream<reqwest::tls::TlsStream>>(self=0x0000000101a28070, buf=(data_ptr = "oiqZnlrBq8hUOi4X18L73l603MN5t1oRIN\nu85ZWhi/QQMwbfaI1nNk1REODX1IDQOdBlUPphhmvmeOv1YdVZkGOnp1h4k5X6bl\nNW8ECkKmGj3f0I1r4Bc8m7oKD++QyAulgMaTJBz6r3kby/KuntnMGnY4/BxwGdLY\neIBG0OkHSAGHZYdTAdMLZbMCmIq3a+Bc5/Ri4t4YsKWJJBZ/oT/jXGZ7XYOQNtEu\neJ6jgopwstlCMK/vdA4GM5AlMQ/th7LAcvJZQZUFikPJ92/zwlD8lDMH/uyud4lX\n9skiqv8XAj1cvVchD5N8pDwOPrdzk8vvpq3iw0PP6bq1/8nU8mcnSeNO/2Sx9A7n\nEw3zUTKGwzBCHA50HHAwE9iTBamMtUzjef7wJhyoPUeumCFWqA6ninvEneEBQmCV\ntEXc1mOu5QHUOWhY/dPTYtpBpWUbS4mIMZr8IDfPkh2KeDnfVFxPVPZiHjcPjd5R\ngx2eGFXWJJBIi0vVRXpIh5uKgeyEde3+aBTYKvxjB00jjj7Qm/yKMxN7JMlVgCqP\nBTRs1gkiEsPbgIFDI4ZYY7tiHOKtYgkH1u4V1ojPpF00qzGfXv53Ot+UAIToFM8a\nygOAdPjfRhcSL045k1nLHcT3TMxSZi8q9t+7TLvGkJEFDwGTA9wb2gV3LK95NUmA\n\ndRG8BFGXoT6/Ab+OgqZ9eZP7Na0BoGMlQ/ngpm+p4y7ZAYQ7igG85WMqssEORA1Z\n5rbpUFvRA1tPNTdhWGwRnugd+dJH5GRvObva1f/iIBzKfIPl/3W2Zxwss1yAxsxB\nX96mTFrMBP0H61Zt4IJ2sYEKbBLWlIGhjBbnFAyfg/NAIlnRnIjxo4JejsRijvdx\nb5TRtasUfilAebQenwzeqCZ3fZqZw9cgyLEh5hoKggD2ROVxIsr4gKTXhYf3G6QV\n/4lUjCMb+QiHDYBAacKvxpCIGNYTYU2XFKn/eqpvI3qBiwz5tg5GW0a91S3Hqxr0\nRqv7AXqftZ0DT"..., length = 4096)) + 138 at pool.rs:175
    frame #20: 0x000000010004da04 test-reqwest`std::io::impls::{{impl}}::read<NetworkStream>(self=0x0000000101a2f048, buf=(data_ptr = "oiqZnlrBq8hUOi4X18L73l603MN5t1oRIN\nu85ZWhi/QQMwbfaI1nNk1REODX1IDQOdBlUPphhmvmeOv1YdVZkGOnp1h4k5X6bl\nNW8ECkKmGj3f0I1r4Bc8m7oKD++QyAulgMaTJBz6r3kby/KuntnMGnY4/BxwGdLY\neIBG0OkHSAGHZYdTAdMLZbMCmIq3a+Bc5/Ri4t4YsKWJJBZ/oT/jXGZ7XYOQNtEu\neJ6jgopwstlCMK/vdA4GM5AlMQ/th7LAcvJZQZUFikPJ92/zwlD8lDMH/uyud4lX\n9skiqv8XAj1cvVchD5N8pDwOPrdzk8vvpq3iw0PP6bq1/8nU8mcnSeNO/2Sx9A7n\nEw3zUTKGwzBCHA50HHAwE9iTBamMtUzjef7wJhyoPUeumCFWqA6ninvEneEBQmCV\ntEXc1mOu5QHUOWhY/dPTYtpBpWUbS4mIMZr8IDfPkh2KeDnfVFxPVPZiHjcPjd5R\ngx2eGFXWJJBIi0vVRXpIh5uKgeyEde3+aBTYKvxjB00jjj7Qm/yKMxN7JMlVgCqP\nBTRs1gkiEsPbgIFDI4ZYY7tiHOKtYgkH1u4V1ojPpF00qzGfXv53Ot+UAIToFM8a\nygOAdPjfRhcSL045k1nLHcT3TMxSZi8q9t+7TLvGkJEFDwGTA9wb2gV3LK95NUmA\n\ndRG8BFGXoT6/Ab+OgqZ9eZP7Na0BoGMlQ/ngpm+p4y7ZAYQ7igG85WMqssEORA1Z\n5rbpUFvRA1tPNTdhWGwRnugd+dJH5GRvObva1f/iIBzKfIPl/3W2Zxwss1yAxsxB\nX96mTFrMBP0H61Zt4IJ2sYEKbBLWlIGhjBbnFAyfg/NAIlnRnIjxo4JejsRijvdx\nb5TRtasUfilAebQenwzeqCZ3fZqZw9cgyLEh5hoKggD2ROVxIsr4gKTXhYf3G6QV\n/4lUjCMb+QiHDYBAacKvxpCIGNYTYU2XFKn/eqpvI3qBiwz5tg5GW0a91S3Hqxr0\nRqv7AXqftZ0DT"..., length = 4096)) + 84 at impls.rs:87
    frame #21: 0x0000000100071a50 test-reqwest`hyper::buffer::{{impl}}::fill_buf<Box<NetworkStream>>(self=0x0000000101a2f048) + 144 at buffer.rs:102
    frame #22: 0x00000001000715f2 test-reqwest`hyper::buffer::{{impl}}::read<Box<NetworkStream>>(self=0x0000000101a2f048, buf=(data_ptr = "", length = 685)) + 242 at buffer.rs:91
    frame #23: 0x0000000100083573 test-reqwest`hyper::http::h1::{{impl}}::read<hyper::buffer::BufReader<Box<NetworkStream>>>(self=0x0000000101a2f040, buf=(data_ptr = "", length = 720)) + 1059 at h1.rs:570
    frame #24: 0x000000010004db5f test-reqwest`std::io::impls::{{impl}}::read<hyper::http::h1::HttpReader<hyper::buffer::BufReader<Box<NetworkStream>>>>(self=0x00007fff5fbff030, buf=(data_ptr = "", length = 720)) + 79 at impls.rs:23
    frame #25: 0x0000000100081185 test-reqwest`hyper::http::h1::{{impl}}::read(self=0x0000000101a2f000, buf=(data_ptr = "", length = 720)) + 181 at h1.rs:124
    frame #26: 0x00000001000151f4 test-reqwest`std::io::impls::{{impl}}::read<HttpMessage>(self=0x00007fff5fbff9c0, buf=(data_ptr = "", length = 720)) + 84 at impls.rs:87
    frame #27: 0x000000010002a511 test-reqwest`hyper::client::response::{{impl}}::read(self=0x00007fff5fbff918, buf=(data_ptr = "", length = 720)) + 129 at response.rs:71
    frame #28: 0x000000010003d57c test-reqwest`reqwest::client::{{impl}}::read(self=0x00007fff5fbff918, buf=(data_ptr = "", length = 720)) + 76 at client.rs:299
    frame #29: 0x0000000100001e6c test-reqwest`std::io::read_to_end<reqwest::client::Response>(r=0x00007fff5fbff918, buf=0x00007fff5fbff8f0) + 316 at mod.rs:351
    frame #30: 0x0000000100007607 test-reqwest`std::io::Read::read_to_string::{{closure}}<reqwest::client::Response>(b=0x00007fff5fbff8f0) + 71 at mod.rs:561
    frame #31: 0x00000001000022bf test-reqwest`std::io::append_to_string<closure>(buf=0x00007fff5fbff8f0, f=closure @ 0x00007fff5fbff580) + 271 at mod.rs:319
    frame #32: 0x00000001000024c7 test-reqwest`std::io::Read::read_to_string<reqwest::client::Response>(self=0x00007fff5fbff918, buf=0x00007fff5fbff8f0) + 71 at mod.rs:561
    frame #33: 0x00000001000072cb test-reqwest`test_reqwest::main + 411 at main.rs:13
    frame #34: 0x00000001000ede2b test-reqwest`__rust_maybe_catch_panic + 27
    frame #35: 0x00000001000ec357 test-reqwest`std::rt::lang_start::h538f8960e7644c80 + 391
    frame #36: 0x00000001000076aa test-reqwest`main + 42
    frame #37: 0x00007fffb6e5b255 libdyld.dylib`start + 1
    frame #38: 0x00007fffb6e5b255 libdyld.dylib`start + 1

Other fun fact: if I avoid read_to_{string,end} and just call read with a fixed-size array, I never see it hang. I often don't get the whole response back in the first read, but with the 8125-byte response, for example, I can get all 8125 bytes with two back-to-back read calls in a matter of milliseconds.
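
For reference, the manual chunked read looks roughly like this (a sketch of my loop; res is the Response from the earlier snippet, and the buffer size and the early break at the advertised Content-Length are my own choices):

    use std::io::Read;

    let mut body = Vec::new();
    let mut buf = [0u8; 4096];
    // Stop once we've read the advertised Content-Length (8125 bytes here),
    // rather than reading until Ok(0) as read_to_end would.
    while body.len() < 8125 {
        let n = res.read(&mut buf).expect("read failed");
        if n == 0 {
            break; // connection closed early
        }
        body.extend_from_slice(&buf[..n]);
    }
    println!("Body len: {}", body.len());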

Nothing has definitively clicked yet, but maybe I'll have more luck with fresh eyes after I get away from it for a while.

@seanmonstar
Owner

Would you be able to run that with a logger hooked up? Something like env_logger, with RUST_LOG=hyper=trace.
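
For anyone following along, a minimal sketch of the repro with that logging hooked up (the env_logger wiring is an assumption about the crates in use at the time; run the binary with RUST_LOG=hyper=trace):

    extern crate env_logger;
    extern crate reqwest;

    use std::io::Read;

    fn main() {
        // Initialize logging; older env_logger versions return a Result here.
        let _ = env_logger::init();

        let mut res = reqwest::get("https://s3.amazonaws.com/algorithmia-assets/8125.txt")
            .expect("request failed");
        let mut body = String::new();
        res.read_to_string(&mut body).expect("read_to_string failed");
        println!("Body len: {}", body.len());
    }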

@anowell
Contributor Author

anowell commented Dec 1, 2016

A quick env_logger dump:

TRACE:hyper::http::h1: Sized read, remaining=8125
TRACE:hyper::http::h1: Sized read: 32
TRACE:hyper::http::h1: Sized read, remaining=8093
TRACE:hyper::http::h1: Sized read: 64
TRACE:hyper::http::h1: Sized read, remaining=8029
TRACE:hyper::http::h1: Sized read: 128
TRACE:hyper::http::h1: Sized read, remaining=7901
TRACE:hyper::http::h1: Sized read: 256
TRACE:hyper::http::h1: Sized read, remaining=7645
TRACE:hyper::http::h1: Sized read: 512
TRACE:hyper::http::h1: Sized read, remaining=7133
TRACE:hyper::http::h1: Sized read: 1024
TRACE:hyper::http::h1: Sized read, remaining=6109
TRACE:hyper::http::h1: Sized read: 1329
TRACE:hyper::http::h1: Sized read, remaining=4780
TRACE:hyper::http::h1: Sized read: 719
TRACE:hyper::http::h1: Sized read, remaining=4061
TRACE:hyper::http::h1: Sized read: 3377
TRACE:hyper::http::h1: Sized read, remaining=684
---- THIS IS WHERE IT HANGS FOR 60 SECONDS ----
TRACE:hyper::http::h1: Sized read: 684
TRACE:hyper::http::h1: Sized read, remaining=0
---- COMPLETE ----

@seanmonstar
Owner

seanmonstar commented Dec 1, 2016 via email

@anowell
Contributor Author

anowell commented Dec 1, 2016

In the case of this simple repro, yes.

In the case of the API client where I originally hit this, it hangs much longer - at least 90 min (I've yet to see it complete) for a 16,120 byte GET response.

@seanmonstar
Owner

seanmonstar commented Dec 1, 2016 via email

@anowell
Contributor Author

anowell commented Dec 1, 2016

Yeah, those numbers are consistent across runs.

Sorry if it seems I'm just dumping random results here. While I do sorta hope that something stands out causing that "Aha!" moment, I do plan to keep digging - it's just a question of time and access to a mac.

@seanmonstar
Owner

seanmonstar commented Dec 1, 2016 via email

@anowell
Contributor Author

anowell commented Dec 1, 2016

ah.. sorry, I switched files at some point... https://s3.amazonaws.com/algorithmia-assets/8125.txt

the full trace output (and a few things I was printing) is here

@seanmonstar
Owner

seanmonstar commented Dec 1, 2016

The logs suggest this is hanging when trying to read the last 684 bytes (plaintext; it would be more encrypted). However, if it were doing that, then it should also happen when not using read_to_string.

To clarify whether this is hyper trying to read more than it should, could you try putting a Connection::close() header in the request? That should make the server close the connection after writing the end of the response, so reading on the TCP stream at that point would give an Ok(0) instead of blocking for more. If that header doesn't fix it, then it's as the logs suggest, and it's pausing somewhere in the middle.
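
For reference, setting that header looks roughly like this (a sketch using the same Client/header calls that appear later in this thread; the URL is the repro file from above):

    let client = reqwest::Client::new().unwrap();
    let mut res = client
        .get("https://s3.amazonaws.com/algorithmia-assets/8125.txt")
        // Ask the server to close the connection after the response, so the
        // final read sees EOF (Ok(0)) instead of blocking on a keep-alive socket.
        .header(reqwest::header::Connection::close())
        .send()
        .expect("request failed");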

@grim8634

grim8634 commented Dec 2, 2016

I'm seeing this exact same issue. Adding Connection::close() header does indeed stop it from hanging for 60 seconds.

@kiliankoe

Can confirm the same. I had no idea what the issue was locally (I thought it was somewhere in the JSON mapping), but I stumbled across this issue by accident, and the additional Connection::close() header seems to help.

@steveatinfincia

steveatinfincia commented Feb 19, 2017

I ran into something like this earlier today with larger requests, where the body is binary data between 200k and 500k. It gets down to the last 0.5k-3k and then hangs for tens of seconds before continuing normally. Setting Connection::close() resolves it.

15:54:04 [DEBUG] hyper::client::response: version=Http11, status=Ok
15:54:04 [DEBUG] hyper::client::response: headers=Headers { Content-Type: application/tar;charset=UTF-8
, Content-Length: 370276
, Date: Sat, 18 Feb 2017 20:54:04 GMT
, }
15:54:04 [TRACE] hyper::http::h1: Sized read, remaining=370276
15:54:04 [TRACE] hyper::http::h1: Sized read: 32
15:54:04 [TRACE] hyper::http::h1: Sized read, remaining=370244
15:54:04 [TRACE] hyper::http::h1: Sized read: 64
15:54:04 [TRACE] hyper::http::h1: Sized read, remaining=370180
15:54:04 [TRACE] hyper::http::h1: Sized read: 128
15:54:04 [TRACE] hyper::http::h1: Sized read, remaining=370052
15:54:04 [TRACE] hyper::http::h1: Sized read: 256
15:54:04 [TRACE] hyper::http::h1: Sized read, remaining=369796
15:54:04 [TRACE] hyper::http::h1: Sized read: 512

...

15:54:07 [TRACE] hyper::http::h1: Sized read, remaining=1668
15:54:37 [TRACE] hyper::http::h1: Sized read: 1668
15:54:37 [TRACE] hyper::http::h1: Sized read, remaining=0

@anowell changed the title from "Hangs in OSX trying to read 4k-16k responses" to "Hangs in OSX reading certain sized responses unless specifying Connection::close" on Feb 19, 2017
@swsnr

swsnr commented Mar 10, 2017

I'm suffering from the same issue, on macOS 10.12.3.

The following hangs (I don't know for how long; I Ctrl-C my way out of it):

extern crate reqwest;

use std::io;
use std::io::prelude::*;
use std::env;
use std::path::Path;
use std::fs::File;

fn main() {
    let mut sink = File::create("wordlist.txt").unwrap();
    let mut source = reqwest::get( "https://www.eff.org/files/2016/07/18/eff_large_wordlist.txt").unwrap();
    io::copy(&mut source, &mut sink).unwrap();
}

However, the following finishes in under a second, as expected:

fn main() {
    let client = reqwest::Client::new().unwrap();
    let mut sink = File::create("wordlist.txt").unwrap();
    let mut source = client.get("https://www.eff.org/files/2016/07/18/eff_large_wordlist.txt").header(reqwest::header::Connection::close()).send().unwrap();
    io::copy(&mut source, &mut sink).unwrap();
}

I did not debug any further, since it works now. I can provide debug information, but I should add that I'm taking my very first steps in Rust and am not familiar with its ecosystem or debugging tools, so I'd appreciate step-by-step instructions for anything I should do 😊

@anowell
Contributor Author

anowell commented Mar 29, 2017

I just started taking a look at this again. Updating to reqwest 0.5.0 fixed this for one of my tools (without using Connection::close), but it's definitely the gzip decoding path that works: if I add gzip(false), I still see the same behavior as before. So libflate's reading of hyper's response seems to play nicer, reiterating the earlier observation that individual calls to read seem to work better than read_to_{end,string}.

I started tinkering with hyper's h1 HttpReader, and I see it happen consistently on the final sized read (where the remaining byte count is less than the buffer length). My simple (non-gzip) repro only hangs on that final read for ~5 seconds now (as opposed to 60s before), but my library wrapping reqwest still hangs for far longer than I've been able to wait. I'll timebox a few more ideas tonight/tomorrow, but if all else fails, I might jump on the Connection::close() workaround for now.

@seanmonstar
Owner

Other fun fact: if I avoid read_to_{string,end} and just call read with a fixed-size array, I never see it hang. I often don't get the whole response back in the first read, but with the 8125-byte response, for example, I can get all 8125 bytes with two back-to-back read calls in a matter of milliseconds.

Re-reading this, it stuck out at me: can you clarify whether this was with a [u8; 8125] buffer, or with something like a [u8; 4096] used twice?

@echochamber
Contributor

echochamber commented Mar 29, 2017

@anowell @seanmonstar

Just a thought, the gzip decoding path actually edits the Content-Length header value, updating it to reflect the new length of the un-gzipped content (courtesy of @rylio from his PR #68).

So that means, on the auto un-gzip code path, the Content-Length is guaranteed to be correct (as it is actually being set by us checking the length of the content body itself, here in reqwest).

On the other code path, however, the value of the Content-Length header is the literal value from the response returned by the API you are hitting (I haven't looked at hyper's code to verify how it constructs its response object). So if hyper doesn't validate that the content length actually matches the size of the received content body (my gut tells me that shouldn't be hyper's responsibility), the Content-Length value has the potential to be incorrect on the non-gzipped code path.

So, if the API is returning a Content-Length that is not the actual length of the content body, could that discrepancy cause this behavior? I don't know enough about the surrounding systems at play to guess how it could or would, but it sticks out as one of the major differences between those two code paths, which would explain why one code path experiences the hang and the other doesn't.

My first thought is that some code somewhere is using the Content-Length to know how many bytes it's expecting to receive, and when it receives the incorrect amount (fewer?) it waits for however long it's configured to time out (10s for some people, 60s for others). Again, these are just my thoughts off the top of my head. I don't have the ability to dig through code right now to verify any of them.

One more hypothetical along the same lines: an incorrect Content-Length header is a mistake I can see a dev easily making when implementing a random web API. For example, if the body contains multi-byte characters but the Content-Length is set to the number of characters rather than the number of bytes, that would cause this behavior. Although if the content was gzipped, that doesn't make much sense. ¯\_(ツ)_/¯

@seanmonstar
Owner

@echochamber when gzip decoding, the Content-Length header isn't set to another value, it's removed.

hyper checks for a Content-Length header when constructing the Response, and then sets up an internal SizedReader that counts bytes to prevent reading past the end.
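
Roughly, a sized reader behaves like this (a simplified sketch for illustration, not hyper's actual HttpReader):

    use std::io::{self, Read};

    struct SizedReader<R> {
        inner: R,
        remaining: u64,
    }

    impl<R: Read> Read for SizedReader<R> {
        fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
            if self.remaining == 0 {
                return Ok(0); // body fully consumed; never touch the stream again
            }
            // Clamp the request so we never ask the stream for bytes past the body.
            let max = std::cmp::min(buf.len() as u64, self.remaining) as usize;
            let n = self.inner.read(&mut buf[..max])?;
            self.remaining -= n as u64;
            Ok(n)
        }
    }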

@rylio
Contributor

rylio commented Mar 29, 2017

I experienced this issue also. I assumed it was a problem with hyper and keep-alive since setting the Connection header to close fixed it. I found that the connection always closes after timing out, and also changing the read timeout length will change how long it hangs for.

@echochamber
Contributor

Ah, you are correct. My fault for skimming over the body of the message at the top of that PR.

Last idle thought: maybe the gzip decoding path is succeeding just because the reduced content-body size means it's no longer falling into one of the problematic size ranges.

@seanmonstar If you aren't already working on this yourself, I wouldn't mind digging into it some. Do you have any suspicions of potential causes?

@rylio
Contributor

rylio commented Mar 29, 2017

@echochamber I experienced this issue with non-gzipped content, so I don't think the problem is related to the gzip decoding.

@seanmonstar
Owner

@echochamber I don't own a Mac, so debugging is hard for me.

I find two pieces of this error to be peculiar: it only seems to happen on Macs (so TLS is secure-transport), and it sounds like it only happens with read_to_end (need more confirmation in #26 (comment)).

If it is hanging because of keep-alive, it sounds like for some reason the TcpStream is being asked to read again after we've reached the end, because with keep-alive disabled, that last read just ends quickly with Ok(0). That would imply that even though there is the SizedReader keeping count of bytes, something is getting past that and asking for more data.

@rylio
Contributor

rylio commented Mar 29, 2017

I also found that reproducing the error was inconsistent: sometimes it would hang, sometimes it would not.

I will try debugging it later this week.

@echochamber
Contributor

echochamber commented Mar 29, 2017

@rylio Good to know. I was a bit vague in my previous comment. I should have said "maybe the gzip decoding path is succeeding for @anowell because...".

@seanmonstar Thanks for sharing. I'll try taking a look into those potential causes and see what I can find.

@anowell
Contributor Author

anowell commented Mar 29, 2017

@seanmonstar - I was about to say it's back-to-back [u8; 4096] reads, but actually it gradually reads into larger buffers until the whole remainder could fit into a [u8; 4096]; instead of reading it all, though, it reads just 32 bytes into that [u8; 4096]. Then it tries to read the remainder into a [u8; 4064] - presumably the unused 4096-32 bytes of the previous buffer.

Here's a slightly augmented trace that prints buf.len():

TRACE:hyper::http::h1: Sized read, remaining=8125 buf.len()=32
TRACE:hyper::http::h1: Sized read: 32
TRACE:hyper::http::h1: Sized read, remaining=8093 buf.len()=64
TRACE:hyper::http::h1: Sized read: 64
TRACE:hyper::http::h1: Sized read, remaining=8029 buf.len()=128
TRACE:hyper::http::h1: Sized read: 128
TRACE:hyper::http::h1: Sized read, remaining=7901 buf.len()=256
TRACE:hyper::http::h1: Sized read: 256
TRACE:hyper::http::h1: Sized read, remaining=7645 buf.len()=512
TRACE:hyper::http::h1: Sized read: 512
TRACE:hyper::http::h1: Sized read, remaining=7133 buf.len()=1024
TRACE:hyper::http::h1: Sized read: 1024
TRACE:hyper::http::h1: Sized read, remaining=6109 buf.len()=2048
TRACE:hyper::http::h1: Sized read: 2048
TRACE:hyper::http::h1: Sized read, remaining=4061 buf.len()=4096
TRACE:hyper::http::h1: Sized read: 32
TRACE:hyper::http::h1: Sized read, remaining=4029 buf.len()=4064
<------- This is where it hangs for a bit -------->
TRACE:hyper::http::h1: Sized read: 4029
TRACE:hyper::http::h1: Sized read, remaining=0 buf.len()=35

This gives me a couple more ideas to experiment with.

@echochamber - fair point about the compressed size affecting the success. I quickly tried a few other sizes in the gzip path without any repro, but my methodology was kinda arbitrary, so I can't exactly rule that out yet.

@anowell
Contributor Author

anowell commented Mar 29, 2017

narrowing this down by explicitly passing in various sized buffers (all using the 8125-byte response):

If the first buf fits the entire contents, it works flawlessly:

TRACE:hyper::http::h1: Sized read, remaining=8125 buf.len()=8192
TRACE:hyper::http::h1: Sized read: 8125

otherwise, if the first read is >= 4096 bytes, the second read will always hang:

TRACE:hyper::http::h1: Sized read, remaining=8125 buf.len()=6144
TRACE:hyper::http::h1: Sized read: 6144
TRACE:hyper::http::h1: Sized read, remaining=1981 buf.len()=32
---------- HANGS HERE ----------
TRACE:hyper::http::h1: Sized read: 32
TRACE:hyper::http::h1: Sized read, remaining=1949 buf.len()=4096
TRACE:hyper::http::h1: Sized read: 1949

otherwise (if the first read is < 4096), the first read to reach or exceed the 4096th byte of the response will only read to that point (without fully filling the buffer), and the subsequent read will hang:

TRACE:hyper::http::h1: Sized read, remaining=8125 buf.len()=2048
TRACE:hyper::http::h1: Sized read: 2048
TRACE:hyper::http::h1: Sized read, remaining=6077 buf.len()=4096
TRACE:hyper::http::h1: Sized read: 2048
TRACE:hyper::http::h1: Sized read, remaining=4029 buf.len()=4096
---------- HANGS HERE ----------
TRACE:hyper::http::h1: Sized read: 4029

and now I need to decompress a bit to see if I can make sense of "why?"

@anowell
Contributor Author

anowell commented Mar 29, 2017

It hangs during the call to Secure Transport's SSLRead. Regardless of the buffer size used when reading from the hyper Response, the SslStream always gets read with a buffer of at least 4096 bytes (INIT_BUFFER_SIZE); thus it always happens on the second call to SSLRead, e.g.:

TRACE:hyper::http::h1: Sized read, remaining=8125 buf.len()=2048
SSLRead: buf.len()=4096 nread=4096 ret=0
TRACE:hyper::http::h1: Sized read: 2048
TRACE:hyper::http::h1: Sized read, remaining=6077 buf.len()=4096
TRACE:hyper::http::h1: Sized read: 2048
TRACE:hyper::http::h1: Sized read, remaining=4029 buf.len()=4096
---------- SSLRead Hangs ----------
SSLRead: buf.len()=4096 nread=4029 ret=0
TRACE:hyper::http::h1: Sized read: 4029

I tinkered with bumping hyper's INIT_BUFFER_SIZE from 4096 to 8192, and that just shifted the problem to larger responses. The second call to SSLRead still hangs.

Nothing obvious stands out skimming the Secure Transport docs, but I didn't dig into how the rest of the session impacts SSLRead. The questions on my mind at this point are:

  • How does Connection::close() affect the state of Secure Transport?
  • Why doesn't this repro if SSLRead gets called 3 times? (i.e. none of the 3 calls will hang)

@seanmonstar
Owner

How does Connection::close() affect the state of Secure Transport?

By setting Connection: close, the server will shut down the socket after sending the end of the response. Therefore, when SSLRead asks our TcpStream for more data, TcpStream::read can immediately respond with Ok(0).

The hang is because, for a yet-undiscovered reason, SSLRead is calling TcpStream::read with a 5-byte buffer after having read to the end of the response body. Since the server hasn't closed the connection, that TcpStream::read blocks until a timeout occurs or the server sends more data. The 5 bytes are for reading the next TLS record header, which would exist if there were more body to read.

It seems like during the final SSLRead call, it does read to the end (since your log says nread=4029), but then something makes it call its inner SSLReadFunc again, and that is where the hang happens.


A useful thing to help debugging would be to remove hyper from the equation. Here's a gist that just uses native-tls: https://gist.github.com/seanmonstar/fa4c700e6cd9b8399ae736638f8164e5. You can alter it to request any resource.

@seanmonstar
Owner

Er, I hadn't thought that through. The gist couldn't use read_to_end, since it's a keep-alive connection. It'd need to check for a Content-Length header, and then be sure to not read once that many bytes have been read...
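
A rough sketch of that approach (hand-rolled HTTP/1.1 over native-tls, reading exactly Content-Length body bytes; the header parsing is intentionally naive, the native-tls calls assume a recent API, and the URL is the repro file from earlier):

    extern crate native_tls;

    use std::io::{Read, Write};
    use std::net::TcpStream;

    fn main() {
        let connector = native_tls::TlsConnector::new().unwrap();
        let tcp = TcpStream::connect("s3.amazonaws.com:443").unwrap();
        let mut tls = connector.connect("s3.amazonaws.com", tcp).unwrap();

        let req = "GET /algorithmia-assets/8125.txt HTTP/1.1\r\nHost: s3.amazonaws.com\r\n\r\n";
        tls.write_all(req.as_bytes()).unwrap();

        // Read until the end of the headers.
        let mut raw = Vec::new();
        let mut buf = [0u8; 1024];
        let header_end;
        loop {
            let n = tls.read(&mut buf).unwrap();
            if n == 0 {
                panic!("connection closed before headers finished");
            }
            raw.extend_from_slice(&buf[..n]);
            if let Some(pos) = raw.windows(4).position(|w| w == b"\r\n\r\n") {
                header_end = pos + 4;
                break;
            }
        }

        // Naive Content-Length parse.
        let headers = String::from_utf8_lossy(&raw[..header_end]).to_string();
        let content_length: usize = headers
            .lines()
            .find(|l| l.to_ascii_lowercase().starts_with("content-length:"))
            .and_then(|l| l.splitn(2, ':').nth(1))
            .and_then(|v| v.trim().parse().ok())
            .expect("no Content-Length header");

        // Read exactly the remaining body bytes; never issue a read past the end,
        // since the connection is keep-alive and an extra read would block.
        let mut body = raw[header_end..].to_vec();
        while body.len() < content_length {
            let want = std::cmp::min(content_length - body.len(), buf.len());
            let n = tls.read(&mut buf[..want]).unwrap();
            if n == 0 {
                break;
            }
            body.extend_from_slice(&buf[..n]);
        }
        println!("read {} body bytes", body.len());
    }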

@sfackler
Contributor

It turns out SSLRead does A Bad Thing!

https://gist.github.com/seanmonstar/c58fc023d60bc295230539f2f4c82a0c#file-ssltransport-cpp-L100-L102
kornelski/rust-security-framework@4cd518d#diff-0c189053dd842cb8ae78ca32925584a8R1029

@seanmonstar
Owner

I believe I've found the issue. Searching Apple's source code for SSLRead revealed that if it is called with a data length longer than what is buffered internally in the SslContext, it copies from the buffer and then also performs a socket read. This is contrary to how typical IO objects work: TCP, for example, will first return the data in its buffer, and only subsequent calls will block on the socket.

We should have a new update of security-framework soon, and then a cargo update should hopefully fix everyone.
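
To make the contrast concrete, here is a toy illustration (not Secure Transport's code) of the buffer-first behavior a typical Read implementation has; SSLRead instead drained its internal buffer and then also issued a blocking socket read in the same call:

    use std::io::{self, Read};

    // Toy reader: returns whatever is already buffered before ever touching
    // the (potentially blocking) underlying source.
    struct BufferFirst<R> {
        buffered: Vec<u8>,
        source: R,
    }

    impl<R: Read> Read for BufferFirst<R> {
        fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
            if !self.buffered.is_empty() {
                let n = std::cmp::min(buf.len(), self.buffered.len());
                buf[..n].copy_from_slice(&self.buffered[..n]);
                self.buffered.drain(..n);
                return Ok(n); // return early; don't block on the source
            }
            self.source.read(buf)
        }
    }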

@echochamber
Contributor

@seanmonstar @sfackler Nice job hunting that down.

@anowell
Contributor Author

anowell commented Mar 29, 2017

I was just about to post a bunch of debug output with a modified version of your gist - but it basically confirms exactly what you described. I just wasn't sure if it was hyper giving a wrong sized buffer or SSLRead not handling it sanely. Many thanks for getting to the bottom of this.

@rylio
Contributor

rylio commented Mar 29, 2017

macOS 10.12.4 was released recently, and it still has this bug 😞.

@sfackler
Contributor

Published security-framework 0.1.14 with this fix!

@echochamber
Contributor

@sfackler

OMG APPLE WTF

got a laugh out of that

@rylio
Contributor

rylio commented Mar 29, 2017

It still hangs for me. Using this: https://gist.github.com/rylio/aa055cc08c07fac38782805118f01238

@seanmonstar
Owner

@rylio after doing a cargo update?

@anowell
Contributor Author

anowell commented Mar 29, 2017

@rylio I think this comment is relevant

The gist couldn't use read_to_end, since it's a keep-alive connection. It'd need to check for a Content-Length header, and then be sure to not read once that many bytes have been read...

@anowell
Contributor Author

anowell commented Mar 29, 2017

As far as I can tell, the fix has resolved all of my repros.

@rylio
Contributor

rylio commented Mar 29, 2017

@anowell You are correct, didn't notice that.

@seanmonstar
Owner

Woo! Closing as fixed. Thanks all for helping track down this issue.

repi pushed a commit to EmbarkStudios/reqwest that referenced this issue Dec 20, 2019