waiting for name resolver threads before quitting can cause delays even with --max-time #2975
Yes, in your case it calls getaddrinfo(), which is blocking, in a separate thread. However, multi_done waits for the resolve to complete (lines 536 to 539 at 432eb5f).
I'm not sure why that is necessary, because libcurl's threaded resolver was designed to clean up abandoned resolves in the background. That would happen later on, when multi_done calls Curl_resolver_cancel, which in this case calls asyn-thread's destroy_async_data (lines 351 to 362 at 432eb5f).
git blame puts it at ac9a179.
I think we should just let it leak.
This isn't easy. Leaking is a problem for tools that trigger on this (like OSS-Fuzz did) and for applications that would do this several times. I don't think we can allow that to happen unconditionally. But not responding sooner to this sort of failure is also disturbing...
If the thread can be abandoned with ownership of all its resources, which it later cleans up itself (I'm assuming here), then I don't see what benefit there is to waiting for it other than to please OSS-Fuzz. I think the fuzzer is wrong in this case.
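The "thread owns its resources and cleans up after itself" idea can be sketched with plain pthreads. This is an illustration of the pattern, not actual libcurl code; the `resolve_job` struct and function names are invented for the example (in libcurl the per-resolve state would be the data that asyn-thread's destroy_async_data frees):

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical per-resolve state, heap-allocated and handed to the
 * thread, which becomes its sole owner once the thread is detached. */
struct resolve_job {
  char *hostname;
};

static void *resolver_thread(void *arg) {
  struct resolve_job *job = arg;
  usleep(10000);                 /* stand-in for a slow getaddrinfo() */
  printf("resolved %s\n", job->hostname);
  free(job->hostname);           /* the thread cleans up after itself */
  free(job);
  return NULL;
}

int main(void) {
  struct resolve_job *job = malloc(sizeof(*job));
  job->hostname = strdup("example.com");

  pthread_t t;
  pthread_create(&t, NULL, resolver_thread, job);
  pthread_detach(t);             /* abandon the thread: no join, no wait */

  /* The caller moves on immediately; the sleep here only keeps the
   * process alive long enough for the demo's output to appear. */
  usleep(50000);
  puts("main done");
  return 0;
}
```

With this ownership rule there is nothing for the caller to free, so no wait is needed for correctness; the only thing that lingers until the thread exits is the thread's own stack.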
Perhaps we should track the number of detached threads and have Curl_resolver_global_cleanup wait for the count to drop to 0 before returning? |
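That counting proposal could look roughly like the following pthreads sketch (an illustration only, not curl code; the counter, condition variable, and `global_cleanup_wait` are invented names standing in for what a Curl_resolver_global_cleanup-style function might do):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t count_cond = PTHREAD_COND_INITIALIZER;
static int detached_threads = 0;   /* abandoned resolves still in flight */

static void *resolver_thread(void *arg) {
  (void)arg;
  usleep(20000);                   /* stand-in for getaddrinfo() */
  pthread_mutex_lock(&count_lock);
  if (--detached_threads == 0)     /* last one out wakes the waiter */
    pthread_cond_signal(&count_cond);
  pthread_mutex_unlock(&count_lock);
  return NULL;
}

/* Block until every abandoned resolver thread has finished, so that a
 * leak checker run after this point sees no live thread stacks. */
static void global_cleanup_wait(void) {
  pthread_mutex_lock(&count_lock);
  while (detached_threads > 0)
    pthread_cond_wait(&count_cond, &count_lock);
  pthread_mutex_unlock(&count_lock);
}

int main(void) {
  for (int i = 0; i < 4; i++) {
    pthread_t t;
    pthread_mutex_lock(&count_lock);
    detached_threads++;            /* count before detaching */
    pthread_mutex_unlock(&count_lock);
    pthread_create(&t, NULL, resolver_thread, NULL);
    pthread_detach(t);
  }
  global_cleanup_wait();
  printf("pending resolver threads: %d\n", detached_threads);
  return 0;
}
```

The individual transfers would no longer block; only the one final global cleanup would, which is exactly the blocking-wait concern raised in the next comment.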
This would have to be rather intricate, since curl_multi_cleanup is called from the multi thread as well and thus cannot do a blocking wait.
Any wait for the pending thread(s) risks blocking a call, which will affect some users. We can only really fix this issue by removing the wait completely. If the threads don't leak any memory that way, I think it can be done. We just have to keep logic to make sure that OSS-Fuzz and possibly other tests don't consider it a leak.
Threads leak their stacks. This issue can't be fully fixed in libcurl, but users of libcurl (e.g. the command line …
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
I also encountered this issue. My guess: even though there seems to be a timeout of 252 ms for the DNS resolving, this timeout will be ignored. This could be valid, since I figured it out with … Since I put the domain in the … I hope this story was helpful for someone!
The PHP script was similar to this:

```php
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, '...');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FAILONERROR, true);
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 150);
curl_setopt($ch, CURLOPT_PROXY, null);
curl_exec($ch);
var_dump(curl_error($ch));
```
I did this
I edited /etc/resolv.conf to use a bad DNS server, then ran:

```
time curl https://github.com/ --max-time 1 -v
```

And curl exits after 5 seconds. The output is …

I expected the following
curl exits after 1 second. The document at https://ec.haxx.se/usingcurl-timeouts.html says "When the set time has elapsed, curl will exit no matter what is going on at that moment", so I think this is a bug.
curl/libcurl version
curl 7.61.0 (x86_64-pc-linux-gnu) libcurl/7.61.0 OpenSSL/1.1.0i zlib/1.2.11 libidn2/2.0.5 libpsl/0.20.2 (+libidn2/2.0.4) nghttp2/1.32.0
Release-Date: 2018-07-11
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IDN IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy PSL
operating system
Arch Linux