
CURLMOPT_MAX_TOTAL_CONNECTIONS seems to be ignored #15857

Closed
baranyaib90 opened this issue Dec 29, 2024 · 11 comments

Comments

baranyaib90 commented Dec 29, 2024

I did this

I have set CURLMOPT_MAX_TOTAL_CONNECTIONS to 8.

I expected the following

That curl does not open more than 8 sockets, but it did.

[D] 1735457228.596040 https_client.c:86 curl opened socket: 8  << This was opened and reused thanks to HTTP/2
# 572 seconds pass
[D] 1735457800.736733 https_client.c:86 curl opened socket: 9
[D] 1735457800.737025 https_client.c:86 curl opened socket: 10
[D] 1735457800.737279 https_client.c:86 curl opened socket: 11
[D] 1735457800.737504 https_client.c:86 curl opened socket: 12
[D] 1735457800.737710 https_client.c:86 curl opened socket: 13
[D] 1735457800.738379 https_client.c:86 curl opened socket: 14
[D] 1735457800.738670 https_client.c:86 curl opened socket: 15
[D] 1735457800.740684 https_client.c:86 curl opened socket: 16 << This is the 9th one, that should not happen

curl/libcurl version

curl 8.9.1 (x86_64-pc-linux-gnu) libcurl/8.9.1 OpenSSL/3.3.1 zlib/1.3.1 brotli/1.1.0 zstd/1.5.6 libidn2/2.3.7 libpsl/0.21.2 libssh2/1.11.0 nghttp2/1.62.1 librtmp/2.3 OpenLDAP/2.6.8
Release-Date: 2024-07-31, security patched: 8.9.1-2ubuntu2.2
Protocols: dict file ftp ftps gopher gophers http https imap imaps ipfs ipns ldap ldaps mqtt pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp ws wss
Features: alt-svc AsynchDNS brotli GSS-API HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM PSL SPNEGO SSL threadsafe TLS-SRP UnixSockets zstd

operating system

Ubuntu 24.10

Extra information

baranyaib90 (Author) commented

According to the logs (at the end of flightrecorder.log), it looks like curl had one HTTP/2 connection (no. 109) idle for a long time.
Therefore curl intended to close it, but CURLMOPT_SOCKETFUNCTION was never called with CURL_POLL_REMOVE and CURLOPT_CLOSESOCKETFUNCTION was never called to close the socket.

[D] 1735457800.736091 main.c:112 Received request for id: 58AF, len: 65
[D] 1735457800.736124 https_client.c:260 58AF: Requesting HTTP/2
[D] 1735457800.736181 https_client.c:214 58AF: * RESOLVE cloudflare-dns.com:443 - old addresses discarded
[D] 1735457800.736193 https_client.c:214 58AF: * Added cloudflare-dns.com:443:104.16.248.249,104.16.249.249 to DNS cache
[D] 1735457800.736249 https_client.c:214 58AF: * Too old connection (342 seconds idle), disconnect it
[D] 1735457800.736253 https_client.c:214 58AF: * Connection 109 seems to be dead
[D] 1735457800.736259 https_client.c:214 58AF: * shutting down connection #109
[D] 1735457800.736621 https_client.c:214 58AF: * TLSv1.3 (OUT), TLS alert, close notify (256):
[D] 1735457800.736654 https_client.c:636 Reserved new io event: 0x7ffc8a6408d0
[D] 1735457800.736678 https_client.c:214 58AF: * Hostname cloudflare-dns.com was found in DNS cache
[D] 1735457800.736733 https_client.c:86 curl opened socket: 9
[D] 1735457800.736751 https_client.c:214 58AF: *   Trying 104.16.248.249:443...
[D] 1735457800.736852 https_client.c:636 Reserved new io event: 0x7ffc8a640900

But curl started to open new connections for every upcoming request, and the connection limit was surpassed before the too-old connection was closed properly.
The following logs were printed repeatedly:

[D] 1735457800.736891 main.c:112 Received request for id: A89B, len: 65
[D] 1735457800.736901 https_client.c:260 A89B: Requesting HTTP/2
[D] 1735457800.736931 https_client.c:214 A89B: * RESOLVE cloudflare-dns.com:443 - old addresses discarded
[D] 1735457800.736938 https_client.c:214 A89B: * Added cloudflare-dns.com:443:104.16.248.249,104.16.249.249 to DNS cache
[D] 1735457800.736977 https_client.c:214 A89B: * Found bundle for host: 0x5d21b0e7ecc0 [serially]
[D] 1735457800.736981 https_client.c:214 A89B: * Server does not support multiplex (yet)
[D] 1735457800.736991 https_client.c:214 A89B: * Hostname cloudflare-dns.com was found in DNS cache
[D] 1735457800.737025 https_client.c:86 curl opened socket: 10
[D] 1735457800.737036 https_client.c:214 A89B: *   Trying 104.16.248.249:443...
[D] 1735457800.737105 https_client.c:636 Reserved new io event: 0x7ffc8a640930

icing (Contributor) commented Dec 30, 2024

A feature introduced in 8.9.0 is improved connection shutdown, especially for connections that involve TLS. For this, a separate list of connections in shutdown is kept in curl's connection pool. These do not count against the CURLMOPT_MAX_TOTAL_CONNECTIONS limit.

Would this explain the effects you are seeing?

Since the logs are huge, it might help in analysing them if you traced curl's connection and transfer ids (the [n-m] entries in curl's tracing with -vv). These are available via CURLINFO_CONN_ID and CURLINFO_XFER_ID.

bagder (Member) commented Dec 30, 2024

That curl does not open more than 8 sockets, but it did.

The option is not a limit on the number of sockets used. It is a limit on the number of connections in use at any one time.

baranyaib90 (Author) commented

Hi @icing, I suppose the feature you mentioned is causing our issue, because the number of sockets opened by libcurl never exceeded the CURLMOPT_MAX_TOTAL_CONNECTIONS limit before.

The software which uses libcurl statically reserves 8 IO event handlers for the read/write events on curl sockets.
Previously, curl never requested more socket handlers. Now that some connections are in a magical shutdown phase and do not count toward the limit, we have to pick one of the following options:

  1. block libcurl from opening more sockets (in CURLOPT_OPENSOCKETFUNCTION, return CURL_SOCKET_BAD once the limit of 8 is reached)
  2. reserve more IO event handlers, like 12, and hope that the extra 4 will be enough for connections that linger in the new shutdown phase
  3. in CURLMOPT_SOCKETFUNCTION return -1 and throw all the curl easy handles in the wastebin (not a huge deal)

I don't know if you could follow my reasoning; never mind.
To me it is strange that those shutting-down connections do not count toward the limit.
It would be nice to have yet another option to limit the number of those shutting-down connections :D
If you consider this behaviour fine by curl, feel free to close this ticket.
Thank you for your support!

PS: about the logs:
I know that the log is huge, but I suppose it is enough to check the last ~100 lines, starting with line 9901 (containing timestamp 1735457800.736091). At that point, only connection no. 109 was idle, but it was too old, so curl started closing it (without closing the socket yet). Then curl started opening brand-new connections for every request that came in in a burst.

baranyaib90 added a commit to baranyaib90/https_dns_proxy that referenced this issue Dec 30, 2024
bagder (Member) commented Dec 31, 2024

First: number of sockets != number of connections. curl may use more sockets than connections.

Then: I think we should improve libcurl in this regard. A set maximum number of connections should be acknowledged and respected even if some of them are held for disconnect. I think that's what users want when they set the limit. The main question is how...

icing (Contributor) commented Dec 31, 2024

Hi @baranyaib90, thanks for the explanation. I can see how the change in connection shutdown handling affects your application. Though, as @bagder said, it was never intended to limit the number of sockets, even if it worked like that for you in the past.

FYI: the "magical" shutdown was introduced for other use cases where libcurl caused TCP RST conditions before. That negatively affected other libcurl applications.

@bagder, I think we could change the limit handling to include connections in shutdown (the easy part), but making the resumption of idle transfers work reliably when the connection really goes away is the trickier part.

icing (Contributor) commented Dec 31, 2024

I made #15879 as a proposed change to count shutdown connections against the destination limit. It would be great if you could test this in your scenario.

bagder (Member) commented Jan 1, 2025

@icing just note that @baranyaib90's issue here is using the total connection count, not the one per host.

icing (Contributor) commented Jan 2, 2025

Ah, will adjust.

icing (Contributor) commented Jan 2, 2025

Added the changes for total limit to #15879 and added test cases and verification.

baranyaib90 added a commit to baranyaib90/https_dns_proxy that referenced this issue Jan 4, 2025
A few curl versions after 8.9.0 kept connections open
which did not count toward the CURLMOPT_MAX_TOTAL_CONNECTIONS limit.
Therefore more IO events are necessary to handle more
sockets than we expect.

Details in: curl/curl#15857
bagder closed this as completed in bd3c027 Jan 6, 2025
baranyaib90 (Author) commented

Hi! Thank you very much for your support! Much appreciated!
