
Non-proper handling of Content-Length and Transfer-Encoding: chunked headers #9861

Closed
ZeddYu opened this issue Dec 9, 2019 · 24 comments · Fixed by #9865

@ZeddYu

ZeddYu commented Dec 9, 2019

Expected behavior

1. Only accept one Content-Length. RFC 7230 says that when duplicate Content-Length header fields have been generated or combined by an upstream message processor, the recipient MUST either reject the message as invalid or replace the duplicated field values with a single valid Content-Length field.
2. Only accept identity and chunked Transfer-Encoding.
In this implementation, the order does not matter (it probably should). The Go implementation only uses the first value of the header, which seems to be in sync with the behaviour of AWS ALB. All other valid (gzip, compress, etc.) and invalid TE values will return a 501, since we don't have readers for them. I figured this was the right move, but feel free to correct me.
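
For illustration, a minimal sketch of the kind of validation described above, written against Netty's public HttpHeaders API in an application-level handler. The handler name RejectAmbiguousFramingHandler is made up for this example; it is not Netty's own validation code.

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.util.ReferenceCountUtil;

import java.util.List;

public class RejectAmbiguousFramingHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof HttpRequest) {
            HttpRequest request = (HttpRequest) msg;
            List<String> contentLengths = request.headers().getAll(HttpHeaderNames.CONTENT_LENGTH);
            List<String> transferEncodings = request.headers().getAll(HttpHeaderNames.TRANSFER_ENCODING);

            // RFC 7230 3.3.2: duplicate Content-Length fields must be rejected
            // (or collapsed into a single valid value, which this sketch does not do).
            boolean multipleContentLength = contentLengths.size() > 1;

            // RFC 7230 3.3.3: Content-Length combined with Transfer-Encoding is a
            // smuggling indicator; a server may reject such a request outright.
            boolean bothFramingHeaders = !contentLengths.isEmpty() && !transferEncodings.isEmpty();

            if (multipleContentLength || bothFramingHeaders) {
                ReferenceCountUtil.release(msg);
                ctx.close(); // sending a 400 before closing would also be reasonable
                return;
            }
        }
        ctx.fireChannelRead(msg);
    }
}

Placed directly after the HttpServerCodec, such a handler sees every decoded HttpRequest before any aggregation or business logic runs.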

Actual behavior

1. Netty accepts all of them.
2. Netty accepts arbitrary Transfer-Encoding values.

Steps to reproduce

Use two Content-Length headers to reproduce the first.
Use a chunked Transfer-Encoding header plus a random Transfer-Encoding header to reproduce the second.
This is similar to #9571. It can also lead to HTTP request smuggling. See also the related issue benoitc/gunicorn#2176 and the PR benoitc/gunicorn#2181.
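
For example, requests along these lines (the exact paths, header values, and body bytes are only illustrative):

A request with two Content-Length headers:

POST / HTTP/1.1
Host: localhost
Content-Length: 4
Content-Length: 11

abcd

A request with a chunked Transfer-Encoding header plus a random one:

POST / HTTP/1.1
Host: localhost
Transfer-Encoding: chunked
Transfer-Encoding: something

0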

Minimal yet complete reproducer code (or URL to code)

Netty version

all

JVM version (e.g. java -version)

OS version (e.g. uname -a)

normanmaurer added a commit that referenced this issue Dec 10, 2019
…TTP/1.1

Motivation:

RFC7230 states that we should not accept multiple content-length headers.

Modifications:

- Check for multiple content-length headers and if found mark message as invalid
- Add unit test

Result:

Fixes #9861
@normanmaurer
Member

normanmaurer commented Dec 10, 2019

@ZeddYu I fixed 1) in #9865, but I am not sure about 2): as Netty is just a library, I think the user should verify the Transfer-Encoding, since it also depends on which handlers are in the pipeline.
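
For anyone who wants to enforce 2) at the application level anyway, here is a minimal sketch of such a pipeline check, assuming an HttpServerCodec earlier in the pipeline. The handler name TransferEncodingValidator is made up, and a production version would also need to split comma-separated values such as "chunked, something".

import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.ReferenceCountUtil;

import java.util.List;

public class TransferEncodingValidator extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof HttpRequest) {
            List<String> codings = ((HttpRequest) msg).headers().getAll(HttpHeaderNames.TRANSFER_ENCODING);
            for (String coding : codings) {
                // Only accept transfer codings this server actually has readers for.
                if (!HttpHeaderValues.CHUNKED.contentEqualsIgnoreCase(coding)
                        && !HttpHeaderValues.IDENTITY.contentEqualsIgnoreCase(coding)) {
                    ReferenceCountUtil.release(msg);
                    FullHttpResponse response = new DefaultFullHttpResponse(
                            HttpVersion.HTTP_1_1, HttpResponseStatus.NOT_IMPLEMENTED);
                    ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
                    return;
                }
            }
        }
        ctx.fireChannelRead(msg);
    }
}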

@normanmaurer normanmaurer added this to the 4.1.44.Final milestone Dec 10, 2019
@ZeddYu
Author

ZeddYu commented Dec 10, 2019

@normanmaurer But I tested it with the following code:

            HttpHeaders headers = request.headers();
            if (!headers.isEmpty()) {
                for (Map.Entry<String, String> h: headers) {
                    CharSequence key = h.getKey();
                    CharSequence value = h.getValue();
                    buf.append("HEADER: ").append(key).append(" = ").append(value).append("\r\n");
                }
                buf.append("\r\n");
            }

I used these request headers to test:

Transfer-Encoding: chunked
Transfer-Encoding: some

I only get one Transfer-Encoding header, with the value 'some'.
And for 1), why is a request with two Content-Length headers still accepted when I test it?

@normanmaurer
Member

@ZeddYu sorry, I don't understand your question... Can you please try to rephrase? As shown in the test case of #9865, multiple Content-Length headers are not allowed anymore.

@ZeddYu
Author

ZeddYu commented Dec 10, 2019

@normanmaurer All right. I have attached an image.

Using two Content-Length headers is still accepted. The Netty version is 4.1.43.Final.

@normanmaurer
Member

@ZeddYu sure, since the fix is not merged yet; it will be part of Netty 4.1.44.Final.

@ZeddYu
Author

ZeddYu commented Dec 10, 2019

@normanmaurer All right. But it seems there is another problem.

You see, it allows a CRLF right after the colon, but RFC 7230 only allows CRLF followed by 1*(SP / HTAB).
Together with 2), I am sure this will cause problems.

@ZeddYu
Author

ZeddYu commented Dec 10, 2019

Here it is, the obs-fold.

header-field   = field-name ":" OWS field-value OWS

     field-name     = token
     field-value    = *( field-content / obs-fold )
     field-content  = field-vchar [ 1*( SP / HTAB ) field-vchar ]
     field-vchar    = VCHAR / obs-text

     obs-fold       = CRLF 1*( SP / HTAB )
                    ; obsolete line folding
                    ; see Section 3.2.4

According to https://tools.ietf.org/html/rfc7230#section-3.2.

@normanmaurer
Member

@ZeddYu If you think there is another issue related to multi-line headers, please open another issue. IMHO it is not related to this one.

@ZeddYu
Author

ZeddYu commented Dec 10, 2019

@normanmaurer What about 2)?

@normanmaurer
Member

@ZeddYu again, I am not sure why we should do anything about 2), as what is supported depends on what the user is doing with the pipeline, etc. Netty is just a library and not a server implementation itself. For example, http_parser does not do anything about 2) either.

@ZeddYu
Author

ZeddYu commented Dec 10, 2019

@normanmaurer But just like what I said above (with those two Transfer-Encoding headers, the snoop-style handler only sees one Transfer-Encoding header, whose value is 'some'), I can't seem to get the first Transfer-Encoding header. I use the code from https://netty.io/4.1/xref/io/netty/example/http/snoop/HttpSnoopServer.html.

@normanmaurer
Member

@ZeddYu can you please provide a complete reproducer I can run... This back and forth just eats up a lot of time on both our ends.

@ZeddYu
Author

ZeddYu commented Dec 10, 2019

Sorry for that.
Application.class:

public class Application {

    public static void main(String[] args) throws Exception{
        HttpServer server = new HttpServer(8888); // listen on port 8888
        server.start();
    }
}

HttpServer.class:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;

import java.net.InetSocketAddress;

/**
 * netty server
 * 2018/11/1.
 */
public class HttpServer {

    int port = 8888;

    public HttpServer(int port){
        this.port = port;
    }

    public void start() throws Exception{
        ServerBootstrap bootstrap = new ServerBootstrap();
        EventLoopGroup boss = new NioEventLoopGroup();
        EventLoopGroup work = new NioEventLoopGroup();
        bootstrap.group(boss,work)
                .handler(new LoggingHandler(LogLevel.DEBUG))
                .channel(NioServerSocketChannel.class)
                .childHandler(new HttpServerInitializer());

        ChannelFuture f = bootstrap.bind(new InetSocketAddress(port)).sync();
        System.out.println(" server start up on port : " + port);
        f.channel().closeFuture().sync();

    }

}

HttpServerInitializer.class

import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;

public class HttpServerInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel channel) throws Exception {
        ChannelPipeline pipeline = channel.pipeline();
        pipeline.addLast(new HttpServerCodec());
        pipeline.addLast("httpAggregator",new HttpObjectAggregator(512*1024)); 
        pipeline.addLast(new HttpRequestHandler());

    }
}

HttpRequestHandler.class

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.DecoderResult;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpObject;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.LastHttpContent;
import io.netty.handler.codec.http.QueryStringDecoder;
import io.netty.handler.codec.http.cookie.Cookie;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.util.CharsetUtil;

import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;

import static io.netty.handler.codec.http.HttpResponseStatus.*;
import static io.netty.handler.codec.http.HttpVersion.*;

public class HttpRequestHandler extends SimpleChannelInboundHandler<Object> {

    private HttpRequest request;
    /** Buffer that stores the response content */
    private final StringBuilder buf = new StringBuilder();

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Object msg) {
        // Invoked by SimpleChannelInboundHandler for each inbound message;
        // the message is released automatically after this method returns.
        if (msg instanceof HttpRequest) {
            HttpRequest request = this.request = (HttpRequest) msg;

            if (HttpUtil.is100ContinueExpected(request)) {
                sendContinue(ctx);
            }

            buf.setLength(0);
            buf.append("WELCOME TO THE WILD WILD WEB SERVER\r\n");
            buf.append("===================================\r\n");

            buf.append("VERSION: ").append(request.protocolVersion()).append("\r\n");
            buf.append("HOSTNAME: ").append(request.headers().get(HttpHeaderNames.HOST, "unknown")).append("\r\n");
            buf.append("REQUEST_URI: ").append(request.uri()).append("\r\n\r\n");

            HttpHeaders headers = request.headers();
            if (!headers.isEmpty()) {
                for (Map.Entry<String, String> h: headers) {
                    CharSequence key = h.getKey();
                    CharSequence value = h.getValue();
                    buf.append("HEADER: ").append(key).append(" = ").append(value).append("\r\n");
                }
                buf.append("\r\n");
            }

            QueryStringDecoder queryStringDecoder = new QueryStringDecoder(request.uri());
            Map<String, List<String>> params = queryStringDecoder.parameters();
            if (!params.isEmpty()) {
                for (Entry<String, List<String>> p: params.entrySet()) {
                    String key = p.getKey();
                    List<String> vals = p.getValue();
                    for (String val : vals) {
                        buf.append("PARAM: ").append(key).append(" = ").append(val).append("\r\n");
                    }
                }
                buf.append("\r\n");
            }

            appendDecoderResult(buf, request);
        }

        if (msg instanceof HttpContent) {
            HttpContent httpContent = (HttpContent) msg;

            ByteBuf content = httpContent.content();
            if (content.isReadable()) {
                buf.append("CONTENT: ");
                buf.append(content.toString(CharsetUtil.UTF_8));
                buf.append("\r\n");
                appendDecoderResult(buf, request);
            }

            if (msg instanceof LastHttpContent) {
                buf.append("END OF CONTENT\r\n");

                LastHttpContent trailer = (LastHttpContent) msg;
                if (!trailer.trailingHeaders().isEmpty()) {
                    buf.append("\r\n");
                    for (CharSequence name: trailer.trailingHeaders().names()) {
                        for (CharSequence value: trailer.trailingHeaders().getAll(name)) {
                            buf.append("TRAILING HEADER: ");
                            buf.append(name).append(" = ").append(value).append("\r\n");
                        }
                    }
                    buf.append("\r\n");
                }

                if (!writeResponse(trailer, ctx)) {
                    // If keep-alive is off, close the connection once the content is fully written.
                    ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
                }
            }
        }
    }


    private static void appendDecoderResult(StringBuilder buf, HttpObject o) {
        DecoderResult result = o.decoderResult();
        if (result.isSuccess()) {
            return;
        }

        buf.append(".. WITH DECODER FAILURE: ");
        buf.append(result.cause());
        buf.append("\r\n");
    }

    private boolean writeResponse(HttpObject currentObj, ChannelHandlerContext ctx) {
        // Decide whether to close the connection or not.
        boolean keepAlive = HttpUtil.isKeepAlive(request);
        // Build the response object.
        FullHttpResponse response = new DefaultFullHttpResponse(
                HTTP_1_1, currentObj.decoderResult().isSuccess()? OK : BAD_REQUEST,
                Unpooled.copiedBuffer(buf.toString(), CharsetUtil.UTF_8));

        response.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/plain; charset=UTF-8");

        if (keepAlive) {
            // Add 'Content-Length' header only for a keep-alive connection.
            response.headers().setInt(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());
            // Add keep alive header as per:
            // - http://www.w3.org/Protocols/HTTP/1.1/draft-ietf-http-v11-spec-01.html#Connection
            response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);
        }

        // Encode the cookie.
        String cookieString = request.headers().get(HttpHeaderNames.COOKIE);
        if (cookieString != null) {
            Set<Cookie> cookies = ServerCookieDecoder.STRICT.decode(cookieString);
            if (!cookies.isEmpty()) {
                // Reset the cookies if necessary.
                for (Cookie cookie: cookies) {
                    response.headers().add(HttpHeaderNames.SET_COOKIE, ServerCookieEncoder.STRICT.encode(cookie));
                }
            }
        } else {
            // Browser sent no cookie.  Add some.
            response.headers().add(HttpHeaderNames.SET_COOKIE, ServerCookieEncoder.STRICT.encode("key1", "value1"));
            response.headers().add(HttpHeaderNames.SET_COOKIE, ServerCookieEncoder.STRICT.encode("key2", "value2"));
        }

        // Write the response.
        ctx.write(response);

        return keepAlive;
    }

    private static void sendContinue(ChannelHandlerContext ctx) {
        FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, CONTINUE, Unpooled.EMPTY_BUFFER);
        ctx.write(response);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

@ZeddYu
Author

ZeddYu commented Dec 10, 2019

@normanmaurer And can I request a CVE for 1)?

@ZeddYu
Author

ZeddYu commented Dec 11, 2019

Let me fix my description of 2).
I use this HTTP request:

POST / HTTP/1.1
Host:localhost
Connection: close
Content-Length: 1
Transfer-Encoding: chunked, something

0

And Netty uses the Content-Length header because of the 'something' in the Transfer-Encoding header, and I get the '0' data back successfully.
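
A simplified illustration of why that value is dangerous (this is not Netty's actual decoder logic): a front end that only accepts an exact "chunked" value falls back to Content-Length framing for "chunked, something", while a back end that looks for "chunked" anywhere in the list reads a chunked body, so the two ends disagree about where the request stops.

final class ChunkedDetectionExamples {

    // Treats the request as chunked only on an exact match, so
    // "chunked, something" is framed by Content-Length instead.
    static boolean strictChunkedCheck(String transferEncoding) {
        return "chunked".equalsIgnoreCase(transferEncoding.trim());
    }

    // Treats the request as chunked if "chunked" appears anywhere in the
    // comma-separated list, so "chunked, something" is framed as chunked.
    static boolean lenientChunkedCheck(String transferEncoding) {
        for (String coding : transferEncoding.split(",")) {
            if ("chunked".equalsIgnoreCase(coding.trim())) {
                return true;
            }
        }
        return false;
    }
}

With the request above, strictChunkedCheck("chunked, something") returns false while lenientChunkedCheck("chunked, something") returns true, which is exactly the disagreement such a request exploits.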

@normanmaurer
Member

@ZeddYu yes please request a CVE for incorrectly handling Content-Length and Transfer-Encoding: chunked headers.

normanmaurer added a commit that referenced this issue Dec 11, 2019
…-length and transfer-encoding: chunked header when using HTTP/1.1

Motivation:

RFC7230 states that we should not accept multiple content-length headers and also should not accept a content-length header in combination with transfer-encoding: chunked

Modifications:

- Check for multiple content-length headers and if found mark message as invalid
- Check if we found a content-length header and also a transfer-encoding: chunked and if so mark the message as invalid
- Add unit test

Result:

Fixes #9861
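
From an application's point of view, "mark message as invalid" presumably surfaces the way other HttpObjectDecoder validation errors do, as a failed DecoderResult on the decoded message. A minimal sketch of turning that into a 400 response (the snoop-style reproducer above already does something similar in its writeResponse method); the handler name is made up:

import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.ReferenceCountUtil;

public class InvalidMessageResponder extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof HttpRequest && ((HttpRequest) msg).decoderResult().isFailure()) {
            // The codec rejected the message (e.g. duplicate Content-Length);
            // answer 400 and close instead of processing any body.
            ReferenceCountUtil.release(msg);
            FullHttpResponse response = new DefaultFullHttpResponse(
                    HttpVersion.HTTP_1_1, HttpResponseStatus.BAD_REQUEST);
            ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
            return;
        }
        ctx.fireChannelRead(msg);
    }
}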
@normanmaurer normanmaurer changed the title from "Found two security issue" to "Non-proper handling of Content-Length and Transfer-Encoding: chunked headers" on Dec 11, 2019
normanmaurer added a commit that referenced this issue Dec 13, 2019
…-length and transfer-encoding: chunked header when using HTTP/1.1 (#9865)

Motivation:

RFC7230 states that we should not accept multiple content-length headers and also should not accept a content-length header in combination with transfer-encoding: chunked

Modifications:

- Check for multiple content-length headers and if found mark message as invalid
- Check if we found a content-length header and also a transfer-encoding: chunked and if so mark the message as invalid
- Add unit test

Result:

Fixes #9861
@joshbressers

It appears this was assigned a CVE ID
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7238

@artem-smotrakov
Contributor

@joshbressers If I understand correctly, this was assigned CVE-2019-20445 (according to the CVE description)

https://nvd.nist.gov/vuln/detail/CVE-2019-20445

@normanmaurer Do you know if your pull request #9865 addresses both CVE-2019-20445 and CVE-2020-7238?

@normanmaurer
Member

@artem-smotrakov yes
