# Curl is not handling uploads with large number of URLs (100,000+) #1959

Closed · opened this issue Oct 6, 2017 · 7 comments


### I did this

I'm trying to upload a large number of small files (100,000 or 1,000,000) using a single HTTPS connection:

```
for i in {1..100000}; do echo "upload-file=/tmp/file${i}"; echo "url=https://server/path/file${i}"; done > /tmp/a.cfg
curl -vvv -K /tmp/a.cfg
```


### I expected the following

I expected curl to start the upload process in a reasonable amount of time, but the upload process never starts.

```
for i in {1..100000}; do echo "url=https://server/path/file${i}"; done > /tmp/b.cfg
curl -vvv -K /tmp/b.cfg
```


It looks like curl is not processing a large number of uploads/URLs in an optimal way.

### curl/libcurl version

`curl -V` output:

```
curl 7.52.1 (x86_64-apple-darwin13.4.0) libcurl/7.52.1 OpenSSL/1.0.2l zlib/1.2.8
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP UnixSockets HTTPS-proxy
```

### operating system

macOS Sierra 10.12.6

### jay (Member) commented Oct 6, 2017 • edited

I tried this in Windows with several recent builds, and it takes about 20 seconds to parse the config with the upload options (200,000 lines) and 10 seconds to parse the config with the download options (100,000 lines). If you build with -DDEBUG_CONFIG it will show you the parsing in real time; for example, I see this repeating without delay:

```
GOT: upload-file PARAM: "/tmp/poa09580"
GOT: url PARAM: "http://httpbin.org/put"
```

Check for a delay in parsing the entries.

edit: note I'm using debug builds of curl/libcurl; release builds would obviously be faster.
### arainchik (Author) commented Oct 6, 2017

I've also compiled it with the "-pg" option on Linux, and I see that it spends most of its time in the getparameter function. This is the part of tool_getparam.c where I think it spends all this time. If I get it correctly, it constantly scans the list of URLs (from beginning to end), and the more URLs get added, the longer it takes:

```c
/* we are uploading */
{
  struct getout *url;
  if(!config->url_out)
    config->url_out = config->url_list;
  if(config->url_out) {
    /* there's a node here, if it already is filled-in
       continue to find an "empty" node */
    while(config->url_out && (config->url_out->flags & GETOUT_UPLOAD))
      config->url_out = config->url_out->next;
  }
```
### bagder (Member) commented Oct 6, 2017

Yeps, that's exactly what I suspected and spotted as well. We should make the logic store the last used pointer so that it can start at that point the next time, to avoid a lot of looping...
### arainchik (Author) commented Oct 6, 2017 • edited

So I tried this: replacing tool_getparam.c:1916 from

```c
config->url_out = config->url_list;
```

to

```c
config->url_out = config->url_last;
```

and it seems to have fixed the issue :) I'm not a developer and I don't understand the logic with the GETOUT_UPLOAD flags at all, but hey, it worked :)
### bagder (Member) commented Oct 12, 2017

Lovely, @arainchik, and thanks! Someone just needs to verify that it actually is a proper fix, and then we can land it. This is a pretty tricky issue to add a test case for; I'd rather not have a hundred thousand transfers as a test, so I think we can skip it this time.

### arainchik (Author) commented Oct 13, 2017

@bagder - you don't have to test 100,000 actual transfers. If you use some random, non-existent domain name or an invalid IP address like http://999.999.999.999/upload, you'll be able to test config file parsing without any actual transfers.

### bagder added a commit that referenced this issue Nov 4, 2017

curl: speed up handling of many URLs (commit b197267)

By properly keeping track of the last entry in the list of URLs/uploads
to handle, curl now avoids many meaningless traverses of the list, which
speeds up many-URL handling *MASSIVELY* (several magnitudes on 100K
URLs).

Added test 1291 to verify that it doesn't take ages, but we don't have
any detection of a "too slow" command in the test suite.

Reported-by: arainchik on github
Fixes #1959
