
Buffer leak in basic_parser::put #1734

Closed · inetic opened this issue Oct 18, 2019 · 4 comments

inetic (Contributor) commented Oct 18, 2019

The overload of basic_parser::put that takes a buffer sequence as an argument has a bug in its fallback code path, i.e. the path taken when the sequence is neither a single buffer nor small enough (under max_stack_buffer) to be flattened onto the stack.

The issue is that while the buffer at buf_.get() has capacity buf_len_, only the first size octets are valid. As a result, stale octets (those at indices >= size and < buf_len_) left over from previous calls to put "leak" into the current call.

The fix is simple. Writing a unit test is a bit more challenging, because the buffers fed into put have to be crafted so that they avoid the optimized paths.
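To make the failure mode concrete, here is a rough sketch of what such a heap-flattening fallback might look like; buf_ and buf_len_ are the member names from the description above, everything else is an assumption and not the actual Beast source.

    // Rough sketch, not the actual Beast source: the shape of a
    // heap-flattening fallback like the one described above. The member
    // names buf_ and buf_len_ come from the report; the rest is assumed.
    #include <boost/asio/buffer.hpp>
    #include <memory>
    #include <vector>

    namespace net = boost::asio;

    struct flattener
    {
        std::unique_ptr<char[]> buf_; // reused across calls to put
        std::size_t buf_len_ = 0;     // capacity of buf_, not the valid size

        // Flatten a buffer sequence and return the region to parse.
        net::const_buffer
        flatten(std::vector<net::const_buffer> const& buffers)
        {
            std::size_t const size = net::buffer_size(buffers);
            if(size > buf_len_)
            {
                buf_len_ = size;                // capacity grows...
                buf_.reset(new char[buf_len_]); // ...and persists afterwards
            }
            net::buffer_copy(net::buffer(buf_.get(), size), buffers);

            // Buggy: hands the whole capacity to the single-buffer parser,
            // so stale octets at indices [size, buf_len_) left over from a
            // previous call "leak" into the current one:
            //     return net::const_buffer(buf_.get(), buf_len_);

            // Fixed: only the `size` octets just copied are valid input.
            return net::const_buffer(buf_.get(), size);
        }
    };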

We noticed this bug in Boost 1.69, and it is still present in the current 1.71 release and on the develop branch.

vinniefalco (Member) commented

This is a case of me being too clever for my own good. The parser should simply require that the input is in a single contiguous buffer. In a future version of Beast I will likely just build the dynamic buffer into the parser itself and keep it flat to hide this implementation detail.
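Until then, one way for callers to stay on the single-buffer path is to keep the unparsed input in a flat dynamic buffer. The sketch below is illustrative only; it assumes a chunked response parser and uses beast::flat_buffer, whose readable bytes are one contiguous region.

    // Illustrative only, not from the Beast sources: a beast::flat_buffer
    // holds its readable bytes in one contiguous region, so b.data() is a
    // single net::const_buffer and put() never has to flatten a sequence.
    #include <boost/beast/core.hpp>
    #include <boost/beast/http.hpp>

    namespace beast = boost::beast;
    namespace http = beast::http;

    void
    feed(http::response_parser<http::string_body>& p,
         beast::flat_buffer& b,
         beast::error_code& ec)
    {
        // Single contiguous buffer in, so only the optimized path runs.
        auto const used = p.put(b.data(), ec);
        b.consume(used);
    }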

vinniefalco added this to To do in HTTP Refactor on Oct 18, 2019
inetic (Contributor, Author) commented Oct 18, 2019

So for completeness, here is code that reproduces the issue. It's quite ugly because it assumes a particular implementation of basic_parser::put(ConstBufferSequence const&, ...), so I think it's best to just keep it here rather than commit it to the unit tests(?).

    void
    testIssue1734()
    {
        // Ensure more than one buffer, this is to avoid an optimized path in
        // basic_parser::put(ConstBufferSequence const&,...) which avoids
        // buffer flattening.
        auto multibufs = [](auto buffers) {
            std::vector<net::const_buffer> bs;
            for (auto b : buffers) bs.push_back(b);
            while (std::distance(bs.begin(), bs.end()) < 2) {
                bs.push_back({});
            }
            return bs;
        };

        // XXX: This must be the same as basic_parser<...>::max_stack_buffer.
        // The latter is private however so we can't use it here.
        std::size_t constexpr max_stack_buffer = 8192;

        // Buffers must be bigger than max_stack_buffer to force flattening
        // in basic_parser::put(ConstBufferSequence const&,...)
        std::string first_chunk_data(2*max_stack_buffer + 1, 'x');

        std::string second_chunk_data_part1(max_stack_buffer + 2, 'x');
        std::string second_chunk_data_part2(max_stack_buffer + 1, 'x');

        multi_buffer b;
        parser_type<false> p;
        p.eager(true);
        error_code ec;
        std::size_t used;

        ostream(b) <<
            "HTTP/1.1 200 OK\r\n"
            "Server: test\r\n"
            "Transfer-Encoding: chunked\r\n"
            "\r\n";

        used = p.put(b.data(), ec);
        b.consume(used);

        BEAST_EXPECT(net::buffer_size(b.data()) == 0);
        BEAST_EXPECTS(!ec, ec.message());
        BEAST_EXPECT(!p.is_done());
        BEAST_EXPECT(p.is_header_done());

        ostream(b) <<
            std::hex <<
            first_chunk_data.size() << "\r\n" <<
            first_chunk_data << "\r\n";

        // First chunk
        used = p.put(multibufs(b.data()), ec);
        b.consume(used);

        BEAST_EXPECTS(ec == error::need_more, ec.message());
        BEAST_EXPECT(!p.is_done());

        ostream(b) <<
            std::hex <<
            (second_chunk_data_part1.size() +
             second_chunk_data_part2.size() ) << "\r\n" <<
            second_chunk_data_part1;

        // Second chunk, part 1
        used = p.put(multibufs(b.data()), ec);
        b.consume(used);

        BEAST_EXPECTS(!ec, ec.message());
        BEAST_EXPECT(!p.is_done());

        ostream(b) <<
            second_chunk_data_part2 << "\r\n"
            << "0\r\n\r\n";

        // Second chunk, part 2
        used = p.put(multibufs(b.data()), ec);
        b.consume(used);

        BEAST_EXPECTS(!ec, ec.message()); // <-- Error: bad chunk
        if(p.need_eof())
        {
            p.put_eof(ec);
            BEAST_EXPECTS(! ec, ec.message());
        }
        BEAST_EXPECT(p.is_done());
    }

vinniefalco (Member) commented

Just add `friend basic_parser_test;` to the parser; this is one of the reasons I use a class for each test suite.
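A rough sketch of what that suggestion amounts to, under a deliberately simplified view of the class (not the actual Beast declaration):

    // Sketch only (the real class has many more members): granting the test
    // suite friendship exposes the private constant, so the hard-coded 8192
    // in the repro above could be replaced.
    template<bool isRequest>
    class basic_parser
    {
        static std::size_t constexpr max_stack_buffer = 8192; // private

        friend class basic_parser_test; // the test suite class
        // ...
    };

    // Inside basic_parser_test, the test could then write:
    //     std::size_t constexpr max_stack_buffer =
    //         basic_parser<false>::max_stack_buffer;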

vinniefalco (Member) commented

I'll incorporate it and make the adjustment, thanks!

vinniefalco pushed a commit to vinniefalco/beast that referenced this issue Oct 18, 2019
fix boostorg#1734, close boostorg#1736

buf_.get() has size `buf_len_`, but only `size` number of octets in that
buffer is valid.
inetic added a commit to equalitie/ouinet that referenced this issue Oct 21, 2019