
CURLE_OUT_OF_MEMORY when adding large header after update to 7.71.0 #6681

Closed
carlzogh opened this issue Mar 2, 2021 · 6 comments
carlzogh commented Mar 2, 2021

I did this

An application I'm working on uses libcurl to make HTTP requests to a server and uses the following code pattern to make this request:

    // setup
    curl_easy_reset(m_curl_handle);
    curl_easy_setopt(m_curl_handle, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
    curl_easy_setopt(m_curl_handle, CURLOPT_TIMEOUT, 0L);
    curl_easy_setopt(m_curl_handle, CURLOPT_CONNECTTIMEOUT, 1L);
    curl_easy_setopt(m_curl_handle, CURLOPT_NOSIGNAL, 1L);
    curl_easy_setopt(m_curl_handle, CURLOPT_TCP_NODELAY, 1L);
    curl_easy_setopt(m_curl_handle, CURLOPT_POST, 1L);
    curl_easy_setopt(m_curl_handle, CURLOPT_READFUNCTION, read_data); 
    curl_easy_setopt(m_curl_handle, CURLOPT_WRITEFUNCTION, write_data);
    curl_easy_setopt(m_curl_handle, CURLOPT_HEADERFUNCTION, write_header);
    curl_easy_setopt(m_curl_handle, CURLOPT_PROXY, "");
    curl_easy_setopt(m_curl_handle, CURLOPT_URL, url.c_str());

    curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "content-type: text/html");
    headers = curl_slist_append(headers, ("large-header: " + up_to_1mb_string).c_str());

    std::pair<std::string const&, size_t> ctx{payload, 0};
    aws::http::response resp;
    curl_easy_setopt(m_curl_handle, CURLOPT_WRITEDATA, &resp);
    curl_easy_setopt(m_curl_handle, CURLOPT_HEADERDATA, &resp);
    curl_easy_setopt(m_curl_handle, CURLOPT_READDATA, &ctx);
    curl_easy_setopt(m_curl_handle, CURLOPT_HTTPHEADER, headers);
    CURLcode curl_code = curl_easy_perform(m_curl_handle);

    curl_slist_free_all(headers);

    if (curl_code != CURLE_OK) {
        // log error
    }

We compile curl from source with (--enable-debug was added only to debug this issue; logs below):

    ./configure \
        --prefix $INSTALL_PREFIX \
        --disable-shared \
        --enable-debug \
        --without-ssl \
        --with-pic \
        --without-zlib && \
    make && \
    make install

Execution logs:

[DEBUG] [1614694131840] CURL DBG: STATE: INIT => CONNECT handle 0x14e9728; line 1771 (connection #-5000)
[DEBUG] [1614694131840] CURL DBG: Found bundle for host 127.0.0.1: 0x14f82b8 [serially]
[DEBUG] [1614694131840] CURL DBG: Re-using existing connection! (#0) with host 127.0.0.1
[DEBUG] [1614694131840] CURL DBG: Connected to 127.0.0.1 (127.0.0.1) port 9001 (#0)
[DEBUG] [1614694131840] CURL DBG: STATE: CONNECT => DO handle 0x14e9728; line 1825 (connection #0)
[DEBUG] [1614694131846] CURL DBG: multi_done
[DEBUG] [1614694131846] CURL DBG: Connection #0 to host 127.0.0.1 left intact
[DEBUG] [1614694131846] CURL DBG: Expire cleared (transfer 0x14e9728)
[DEBUG] [1614694131846] CURL returned error code 27 - Out of memory

I expected the following

Before version 7.71.0 (tested up to 7.75.0), this worked fine for headers up to 1 MB in size, but after the update I've started seeing CURLE_OUT_OF_MEMORY.
I'm not entirely sure what changed in this version to cause OOM errors on large headers, but I suspect aee277b, as it's the closest change to the issue that I can see. I have not confirmed this, however.

curl/libcurl version

7.71.0+

operating system

Amazon Linux 2

$ uname -a
Linux 331a706e46ef 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

I'd really appreciate guidance on this, thanks!

jay (Member) commented Mar 2, 2021

> Before version 7.71.0 (tested up to 7.75.0), this worked fine for headers up to 1 MB in size, but after the update I've started seeing CURLE_OUT_OF_MEMORY.

Does it reproduce sometimes or always? Can you give us an example header?

@jay jay added the HTTP label Mar 2, 2021
carlzogh (Author) commented Mar 2, 2021

@jay this is reproducible on every run when the header value is large, e.g.:

"my-header-key-name-with-forty-characters: @@@@" <- repeat the character 300k times

Not a real-life example, but it should work for a repro.

jay (Member) commented Mar 2, 2021

Bisected to ed35d65 (dynbuf: introduce internal generic dynamic buffer functions).

repro:

    (printf "my-header-key-name-with-forty-characters: "; printf 'A%.0s' {1..300000}; printf "B") > testhdr.txt
    curl -v -H @testhdr.txt http://httpbin.org/get

    #define DYN_HTTP_REQUEST (128*1024)

curl/lib/http.c, lines 3071 to 3072 in 2f33be8:

    /* initialize a dynamic send-buffer */
    Curl_dyn_init(&req, DYN_HTTP_REQUEST);

curl/lib/dynbuf.c, lines 78 to 81 in 2f33be8:

    if(fit > s->toobig) {
      Curl_dyn_free(s);
      return CURLE_OUT_OF_MEMORY;
    }

One solution would be to remove the limit. DYN_NO_LIMIT was previously proposed in #5966.

carlzogh (Author) commented Mar 2, 2021

Thanks, that makes sense - neat quick repro too :)

Another option could be to make these limits configurable via CURLOPT_ options. My use case is communicating with an HTTP server that accepts larger headers than the current limit allows my client to send. The values currently configured seem sensible for most use cases, and could serve as the defaults for such options.

bagder (Member) commented Mar 3, 2021

I still believe no limit would be bad (the limit is there to trap mistakes or malicious use), but I'm all for discussing raising it to something larger than 128KB. Maybe go with 1MB here? I would like to avoid adding setopts for these, as they're so specific that 99.99% of all users would never change them.

carlzogh (Author) commented Mar 3, 2021

Agreed that no limit isn't the best path forward here. I believe 1MB is a sensible value and would cover use cases like this one, while also avoiding the need to introduce new options.

bagder added a commit that referenced this issue Mar 3, 2021
Reported-by: Carl Zogheib
Fixes #6681
bagder closed this in 6221bc1 Mar 6, 2021