Hangs in OSX reading certain sized responses unless specifying Connection::close
#26
Is this after issuing several requests from the same Client? It's not impossible there is an error in rust-security-framework, but I'm hesitant to claim that since it could very well be my own buggy code. |
It was a single request, and yeah, I just need to borrow my wife's Macbook again and try and piece together a more minimal repro - got derailed by moving over the holiday - maybe I'll get to that tonight. |
I managed to create a fairly minimal repro with a public file (URL updated to S3 since Dropbox public URLs don't work the same anymore):

```rust
let mut res = reqwest::get("https://s3.amazonaws.com/algorithmia-assets/8125.txt").expect("request failed");
println!("{:#?}", &res);
let mut body = String::new();
res.read_to_string(&mut body).expect("read_to_string failed");
println!("Body len: {}", body.len());
```

Prints what appears to be a successful response. |
Great work finding a reproducible case! I don't have a Mac at all, so I can't personally debug that well. It'd be super useful if you could either run |
haha. the joy of supporting platforms we don't actually have access to... Turns out the repro is behaving a bit differently.
|
The 4k-16k aspect of the issue might just be specific to the memory layout of the client I was working with, as I see slight variations with different repros. The repro I've spent the most time digging into consistently hangs in the call to
Other fun fact, if I avoid Nothing has definitively clicked yet, but maybe I'll have more luck with fresh eyes after I get away from it for a while. |
Would you be able to run that with a logger hooked up? Something like env_logger, with trace-level logging enabled for hyper. |
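For anyone following along, hooking the repro up to env_logger might look roughly like this. It is only a sketch: the RUST_LOG filter, crate versions, and URL are assumptions, not taken from this thread.

```rust
// Sketch only: wire up env_logger around the minimal repro.
// Run with something like: RUST_LOG=hyper=trace cargo run
extern crate env_logger;
extern crate reqwest;

use std::io::Read;

fn main() {
    // Older env_logger versions return a Result from init(); ignore it either way.
    let _ = env_logger::init();

    let mut res = reqwest::get("https://s3.amazonaws.com/algorithmia-assets/8125.txt")
        .expect("request failed");
    let mut body = String::new();
    res.read_to_string(&mut body).expect("read_to_string failed");
    println!("Body len: {}", body.len());
}
```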
A quick env_logger dump:
|
Does that mean that after the 60 second hang, the response does finish and you have all the bytes?
…On Wed, Nov 30, 2016, 8:14 PM Anthony Nowell wrote:
A quick env_logger dump:
TRACE:hyper::http::h1: Sized read, remaining=8125
TRACE:hyper::http::h1: Sized read: 32
TRACE:hyper::http::h1: Sized read, remaining=8093
TRACE:hyper::http::h1: Sized read: 64
TRACE:hyper::http::h1: Sized read, remaining=8029
TRACE:hyper::http::h1: Sized read: 128
TRACE:hyper::http::h1: Sized read, remaining=7901
TRACE:hyper::http::h1: Sized read: 256
TRACE:hyper::http::h1: Sized read, remaining=7645
TRACE:hyper::http::h1: Sized read: 512
TRACE:hyper::http::h1: Sized read, remaining=7133
TRACE:hyper::http::h1: Sized read: 1024
TRACE:hyper::http::h1: Sized read, remaining=6109
TRACE:hyper::http::h1: Sized read: 1329
TRACE:hyper::http::h1: Sized read, remaining=4780
TRACE:hyper::http::h1: Sized read: 719
TRACE:hyper::http::h1: Sized read, remaining=4061
TRACE:hyper::http::h1: Sized read: 3377
TRACE:hyper::http::h1: Sized read, remaining=684
---- THIS IS WHERE IT HANGS FOR 60 SECONDS ----
TRACE:hyper::http::h1: Sized read: 684
TRACE:hyper::http::h1: Sized read, remaining=0
|
In the case of this simple repro, yes. In the case of the API client where I originally hit this, it hangs much longer - at least 90 min (I've yet to see it complete) for a 16,120 byte GET response. |
Are those numbers consistent in multiple runs? If so, I see a possible off-by-one error.
|
Yeah, those numbers are consistent across runs. Sorry if it seems I'm just dumping random results here. While I do sorta hope that something stands out causing that "Aha!" moment, I do plan to keep digging - it's just a question of time and access to a mac. |
It'd also be helpful to see the logs from hyper::buffer included, which I know are noisy. I tried to test on my Linux machine, and found that the URL you're using was only around 4K, instead of 8...
|
Ah, sorry, I switched files at some point: https://s3.amazonaws.com/algorithmia-assets/8125.txt. The full trace output (and a few things I was printing) is here. |
The logs suggest this is hanging when trying to read the last 684 bytes (plaintext, would be more encrypted). However, if it was doing that, then it should also happen when not using To clarify if this is hyper trying to read more than it should, could you try putting |
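For reference, the Connection::close workaround that later comments confirm looks roughly like the following. This is a sketch assuming the reqwest 0.x API of the era (a Client::new() that returns a Result and the re-exported hyper Connection header); exact method names may differ between versions.

```rust
extern crate reqwest;

use std::io::Read;
use reqwest::header::Connection;

fn main() {
    // Assumed 0.x-era API: Client::new() returning a Result.
    let client = reqwest::Client::new().expect("client");

    let mut res = client
        .get("https://s3.amazonaws.com/algorithmia-assets/8125.txt")
        // Send `Connection: close` so the server closes the socket after the body,
        // avoiding the keep-alive read that hangs.
        .header(Connection::close())
        .send()
        .expect("request failed");

    let mut body = String::new();
    res.read_to_string(&mut body).expect("read_to_string failed");
    println!("Body len: {}", body.len());
}
```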
I'm seeing this exact same issue. Adding Connection::close() header does indeed stop it from hanging for 60 seconds. |
Can confirm the same. I had no idea what the issue was locally (thought it was somewhere in the JSON mapping), but I stumbled across this issue by accident and the additional Connection::close fixed it. |
I ran into something like this earlier today with larger requests, where the body is binary data between 200k and 500k. It gets down to the last 0.5k-3k and then hangs for tens of seconds before continuing normally. Setting Connection::close works around it. |
I'm suffering from the same issue, on macOS 10.12.3. The following hangs (I don't know for how long; I Ctrl-C my way out of it):

```rust
extern crate reqwest;

use std::io;
use std::io::prelude::*;
use std::env;
use std::path::Path;
use std::fs::File;

fn main() {
    let mut sink = File::create("wordlist.txt").unwrap();
    let mut source = reqwest::get("https://www.eff.org/files/2016/07/18/eff_large_wordlist.txt").unwrap();
    io::copy(&mut source, &mut sink).unwrap();
}
```

However, the following finishes in under a second, as expected:
I did not debug any further, for it works now. I can provide debug information, but I should like to add that I'm making my very first steps in Rust, and am not familiar with its ecosystem or debugging tools, so I'd appreciate step-by-step instructions for anything I should do 😊 |
I just started taking a look at this again. An update to reqwest 0.5.0 fixed this for one of my tools (without using Connection::close). I started tinkering with hyper's h1 HttpReader, and I see it happen consistently on the final sized read (where the remaining byte count is less than the buffer length). My simple (non-gzip) repro only hangs on that final read for ~5 seconds now (as opposed to 60s before), but my library wrapping reqwest still hangs for far longer than I've been able to wait. I'll timebox a few more ideas tonight/tomorrow, but if all else fails, I might jump on the |
Re-reading this stuck out at me: can you clarify if this was with using a |
Just a thought: the gzip decoding path actually edits the … So that means, on the auto un-gzip code path, the … On the other code path, however, the value of the … So, if the API is returning a … My first thought is that some code somewhere is using the … One more hypothetical along the same lines: an incorrect Content-Length header is a mistake I can see a dev easily making when implementing a random web API. For example, if the body contains multi-byte characters but the … |
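To make that multi-byte hypothetical concrete (this example is not from the thread): a server that computes Content-Length from a character count rather than a byte count would under-report the body size whenever the body contains multi-byte UTF-8.

```rust
fn main() {
    let body = "naïve résumé – café"; // contains multi-byte UTF-8 characters

    // Content-Length must describe the encoded byte length...
    println!("bytes: {}", body.len()); // 25

    // ...but a buggy server might count characters instead, reporting too few.
    println!("chars: {}", body.chars().count()); // 19
}
```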
@echochamber when gzip decoding, the hyper checks for a |
I experienced this issue also. I assumed it was a problem with hyper and |
Ah, you are correct. My fault for skimming over the body of the message at the top of that PR. Last idle thought: maybe the gzip decoding path is succeeding just because the reduced content-body size means it's no longer falling into one of the problematic size ranges. @seanmonstar If you aren't already working on this yourself, I wouldn't mind digging into it some. Do you have any suspicions about potential causes? |
@echochamber I experienced this issue with non-gzipped content, so I don't think the problem is related to the gzip decoding. |
@echochamber I don't own a Mac, so debugging is hard for me. I find two pieces of this error peculiar: it only seems to happen on Macs (so TLS is secure-transport), and it sounds like it only happens with keep-alive enabled. If it is hanging because of keep-alive, it sounds like the TcpStream is for some reason being asked to read again after we've reached the end, because with keep-alive disabled, that last read just ends quickly with an EOF. |
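To illustrate that point, here is a sketch (not hyper's actual HttpReader) of why a sized body reader has to report EOF on its own once the declared length is exhausted: on a keep-alive connection the server leaves the socket open, so one more read on the underlying stream would simply block.

```rust
use std::cmp;
use std::io::{self, Read};

// Illustrative only: a Content-Length-bounded reader over some inner stream.
struct SizedReader<R> {
    inner: R,
    remaining: u64,
}

impl<R: Read> Read for SizedReader<R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if self.remaining == 0 {
            // Body fully consumed: report EOF without touching the socket again.
            // On a keep-alive connection, another inner.read() would block here.
            return Ok(0);
        }
        let max = cmp::min(buf.len() as u64, self.remaining) as usize;
        let n = self.inner.read(&mut buf[..max])?;
        self.remaining -= n as u64;
        Ok(n)
    }
}

fn main() {
    // Tiny usage example over an in-memory "socket".
    let data: &[u8] = b"hello world, with trailing keep-alive bytes";
    let mut reader = SizedReader { inner: data, remaining: 11 };
    let mut body = String::new();
    reader.read_to_string(&mut body).unwrap();
    assert_eq!(body, "hello world");
}
```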
I also found that reproducing the error was inconsistent, sometimes it would hang, sometimes it would not. I will try debugging it later this week. |
@rylio Good to know. I was a bit vague in my previous comment. I should have said "maybe the gzip decoding path is succeeding for @anowell because..." @seanmonstar Thanks for sharing. I'll try taking a look into those potential causes and see what I can find. |
@seanmonstar - I was about to say it's back to back … Here's a slightly augmented trace that prints buf.len():
This gives me a couple more ideas to experiment with. @echochamber - fair point about the compressed size affecting the success. I quickly tried a few other sizes in the gzip path without any repro, but my methodology was kinda arbitrary, so I can't exactly rule that out yet. |
Narrowing this down by explicitly passing in various sized buffers (all using the 8125-byte response):

If the first read is large enough to take the entire response, there is no hang:

```
TRACE:hyper::http::h1: Sized read, remaining=8125 buf.len()=8192
TRACE:hyper::http::h1: Sized read: 8125
```

Otherwise, if the first read is >= 4096 bytes, the second read will always hang:

```
TRACE:hyper::http::h1: Sized read, remaining=8125 buf.len()=6144
TRACE:hyper::http::h1: Sized read: 6144
TRACE:hyper::http::h1: Sized read, remaining=1981 buf.len()=32
---------- HANGS HERE ----------
TRACE:hyper::http::h1: Sized read: 32
TRACE:hyper::http::h1: Sized read, remaining=1949 buf.len()=4096
TRACE:hyper::http::h1: Sized read: 1949
```

Otherwise (if the first read is < 4096), the first read to reach or exceed the 4096th byte of the response will only read to that point (without fully filling the buffer), and the subsequent read will hang:

```
TRACE:hyper::http::h1: Sized read, remaining=8125 buf.len()=2048
TRACE:hyper::http::h1: Sized read: 2048
TRACE:hyper::http::h1: Sized read, remaining=6077 buf.len()=4096
TRACE:hyper::http::h1: Sized read: 2048
TRACE:hyper::http::h1: Sized read, remaining=4029 buf.len()=4096
---------- HANGS HERE ----------
TRACE:hyper::http::h1: Sized read: 4029
```

And now I need to decompress a bit to see if I can make sense of "why?" |
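For anyone who wants to poke at this themselves, driving reads with explicit buffer sizes can be done with something like the sketch below. The URL and buffer sizes are just examples; this is not the exact harness used for the traces above.

```rust
extern crate reqwest;

use std::io::Read;

fn main() {
    let mut res = reqwest::get("https://s3.amazonaws.com/algorithmia-assets/8125.txt")
        .expect("request failed");

    let mut total = 0;
    // First read with a large buffer, then drain with assorted smaller ones.
    for &size in &[6144usize, 32, 4096, 4096] {
        let mut buf = vec![0u8; size];
        let n = res.read(&mut buf).expect("read failed");
        total += n;
        println!("asked for {} bytes, got {}", size, n);
        if n == 0 {
            break;
        }
    }
    println!("total read: {} bytes", total);
}
```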
It hangs during the call to Secure Transport's SSLRead. I tinkered with bumping hyper's … Nothing obvious stands out skimming the Secure Transport docs, but I didn't dig into how the rest of the session impacts it.
|
By setting … The hang is because, for a yet undiscovered reason, … It seems like during the final … A useful thing to help debugging would be to remove hyper from the equation. Here's a gist that just uses native-tls: https://gist.github.com/seanmonstar/fa4c700e6cd9b8399ae736638f8164e5. It can be altered to request any resource. |
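The gist itself isn't reproduced in this thread, but the "remove hyper from the equation" idea looks roughly like the sketch below: a hand-written HTTP/1.1 request over a plain native-tls stream. The host, path, and the TlsConnector::new() API are assumptions (from a more recent native-tls); the linked gist is the authoritative version.

```rust
extern crate native_tls;

use std::io::{Read, Write};
use std::net::TcpStream;
use native_tls::TlsConnector;

fn main() {
    let host = "s3.amazonaws.com";
    let path = "/algorithmia-assets/8125.txt";

    let connector = TlsConnector::new().expect("tls connector");
    let tcp = TcpStream::connect((host, 443)).expect("tcp connect");
    let mut tls = connector.connect(host, tcp).expect("tls handshake");

    // Hand-written HTTP/1.1 request; note there is no `Connection: close` here,
    // so the server will keep the socket open after sending the body.
    let req = format!("GET {} HTTP/1.1\r\nHost: {}\r\nAccept: */*\r\n\r\n", path, host);
    tls.write_all(req.as_bytes()).expect("write request");

    // Read in fixed-size chunks and log each read, mimicking hyper's sized reads.
    // Because of keep-alive, the final read will block until the server times the
    // connection out -- which is part of the behavior being investigated here.
    let mut buf = [0u8; 4096];
    loop {
        let n = tls.read(&mut buf).expect("read");
        println!("read {} bytes", n);
        if n == 0 {
            break;
        }
    }
}
```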
Er, I hadn't thought that through. The gist couldn't use |
I believe I've found the issue. Searching Apple's source code for … We should have a new update of security-framework soon, and then a … |
@seanmonstar @sfackler Nice job hunting that down. |
I was just about to post a bunch of debug output with a modified version of your gist - but it basically confirms exactly what you described. I just wasn't sure if it was hyper giving a wrong sized buffer or |
macOS 10.12.4 was recently released, and it still has this bug 😞. |
Published security-framework 0.1.14 with this fix! |
got a laugh out of that |
It still hangs for me. Using this: https://gist.github.com/rylio/aa055cc08c07fac38782805118f01238 |
@rylio after doing a cargo update? |
@rylio I think this comment is relevant
|
As far as I can tell, the fix has resolved all of my repros. |
@anowell You are correct, didn't notice that. |
Woo! Closing as fixed. Thanks all for helping track down this issue. |
Original issue description:
I'm making an authenticated GET request that succeeds, but any attempt to read_to_string or read_to_end just hangs indefinitely, but only under all of the following criteria: … This is the debug output of an example Response object I see before it hangs during reading: … And yet, I haven't managed to repro this with a simple test script that just reqwest::gets some public URL in that size range, so I'm still missing some contributing factor. I've also tried an old build of my client which used hyper+openssl, and it had no issue. Things on my mind to try still: …