curl_multi_timeout() returns very large timeout while DNS resolving #2996
I did this
My code experiencing this issue is written in Rust using the wrapper crate, but we concluded that this is likely an upstream bug.
I am using the multi interface to drive easy handles in a loop. The basic logic is something like this:
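Roughly, translated into libcurl's C API (my real code is Rust on top of the curl crate, so the calls and error handling below are only an approximation of what the bindings end up doing):

```c
#include <curl/curl.h>

/* Rough C equivalent of my event loop; illustrative only. */
static void drive(CURLM *multi, curl_socket_t wakeup_fd)
{
  int running = 1;

  while(running) {
    long timeout_ms = -1;
    curl_multi_timeout(multi, &timeout_ms);  /* ask libcurl how long to wait */
    if(timeout_ms < 0)
      timeout_ms = 1000;                     /* no timer pending: fall back */

    /* Wait on libcurl's sockets plus my single extra "wakeup" descriptor. */
    struct curl_waitfd extra;
    extra.fd = wakeup_fd;
    extra.events = CURL_WAIT_POLLIN;
    extra.revents = 0;

    int numfds = 0;
    curl_multi_wait(multi, &extra, 1, (int)timeout_ms, &numfds);

    /* Let libcurl make progress and pick up finished transfers. */
    curl_multi_perform(multi, &running);
  }
}
```

With that structure, the value reported by curl_multi_timeout() directly controls how long each iteration of the loop sleeps.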
When attaching an easy handle with a URL that needs resolving (a bare IP address shows no problem), curl_multi_timeout() reports a very large timeout, on the order of the whole connect timeout, for as long as the asynchronous name resolution is in progress.
By itself, this is not a problem, but the fact that I have 1 extra file descriptor I want to poll (this is used in my lib to receive "wakeups" from another thread and has very low activity) means the loop sleeps on the reported timeout; since there is no libcurl socket to signal activity while the resolver thread runs, such a large timeout leaves the finished resolve unnoticed until something else wakes the loop.
I raise this issue because a previous version of libcurl did not exhibit this behavior. See more details on this in alexcrichton/curl-rust#227.
I expected the following
curl_multi_timeout() should report a short timeout while an asynchronous name resolve is in progress, so that the loop wakes up promptly once the resolver thread finishes.
This is using the threaded resolver right? And no specific timeout set?
It is a bit weird to call curl_multi_timeout() yourself here at all, since curl_multi_wait() already takes libcurl's internal timeout into account and can also poll your extra file descriptor via its extra_fds argument.
However, I think the real bug (even if you remove the curl_multi_timeout() call) is that the generic timeout setup in the multi state machine is done before the code that checks whether the threaded resolver has finished, as the latter will end up calling https://github.com/curl/curl/blob/master/lib/asyn-thread.c#L564-L585 which sets up a suitable timeout for the period while the thread is still resolving the host.
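To illustrate the idea (this is a deliberate simplification, not the actual libcurl source): while the lookup thread runs, that code keeps arming a short expire timer that backs off exponentially but stays capped at a fraction of a second, roughly:

```c
/* Simplified illustration of the backoff the threaded resolver uses between
 * "is the lookup thread done yet?" checks: start at 1 ms, double on each
 * expiry, never exceed 250 ms. Names and values here are approximate. */
static long next_resolver_poll_ms(long *poll_interval)
{
  if(*poll_interval == 0)
    *poll_interval = 1;   /* first check shortly after the resolve starts */
  else
    *poll_interval *= 2;  /* back off exponentially on later checks */

  if(*poll_interval > 250)
    *poll_interval = 250; /* cap the wait per check at 250 ms */

  return *poll_interval;
}
```

So while a resolve is in flight, the timeout reported to the application should be in the millisecond range, not the full connect timeout.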
I'll push a PR in a sec to address that particular flaw.
Yes, libcurl is built with the threaded resolver, and I am letting libcurl use its default connect timeout (currently 300 seconds).
Ah yes, now that I look closer that is a bit silly of me, as curl_multi_wait() already honors libcurl's internal timeout and could take my extra descriptor as well.
So the real bug is still that timers are not being set correctly. I had a hunch, but I wasn't familiar enough with the code to deduce that.
Until my lib can use the patch for this, I'll keep doing it the silly way though, since my current workaround is to assume that if curl_multi_timeout() reports a suspiciously large value, a name resolve is probably still in progress, and to clamp the wait to a much shorter interval.
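In libcurl C terms the workaround boils down to something like this; the 1000 ms ceiling is an arbitrary value I picked, not anything libcurl prescribes:

```c
#include <curl/curl.h>

/* Workaround: distrust very large values from curl_multi_timeout(), since
 * they currently show up while a threaded name resolve is running. The
 * 1000 ms ceiling is my arbitrary choice. */
static long clamped_wait_timeout(CURLM *multi)
{
  long timeout_ms = -1;
  curl_multi_timeout(multi, &timeout_ms);
  if(timeout_ms < 0 || timeout_ms > 1000)
    timeout_ms = 1000;
  return timeout_ms;
}
```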
I will pull in the fix locally sometime today and see if it fixes the problem.
(I'll have to wait until the Rust bindings update their curl version before I can publish an actual release with the fix.)
Update: Just tested the patch, the change does indeed fix the behavior for my use case.