
Fix HttpObjectAggregator leaving connection stuck after 413 with AUTO (#16280) #16401

Merged
chrisvest merged 1 commit into netty:4.2 from chrisvest:4.2-fix-413-stuck-connection
Mar 4, 2026

Conversation

@chrisvest
Member

Motivation:

When HttpObjectAggregator.handleOversizedMessage() sends a 413 response with HTTP/1.1 keep-alive enabled, it only closes the channel on write failure. On success, the channel is left open but no channel.read() is called.

With AUTO_READ=false, this leaves the connection stuck: subsequent requests sit unread in the kernel buffer until they time out.

Modification:

Always close the channel after sending 413, not just on write failure. This matches:

  1. The existing comment: "send back a 413 and close the connection"
  2. The behavior when keep-alive is disabled
  3. The behavior when the message is a FullHttpMessage

Closing is the correct behavior because, after an oversized chunked message, there may be leftover data in the TCP stream that cannot be reliably skipped in HTTP/1.1. A before/after sketch of the shape of the change follows below.
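For context, here is a minimal before/after sketch of the shape of the change, assuming the keep-alive write path used a close-on-write-failure listener as described above. This is illustrative only, not the exact Netty source; tooLargeResponse is a stand-in name.

```java
// Before (illustrative): close only when the 413 write fails. On success the
// channel stays open, but nothing ever calls channel.read() again, so with
// AUTO_READ=false the next request sits unread in the kernel buffer.
ctx.writeAndFlush(tooLargeResponse).addListener((ChannelFutureListener) future -> {
    if (!future.isSuccess()) {
        ctx.close();
    }
});

// After (illustrative): always close once the 413 has been written, matching
// the existing "send back a 413 and close the connection" comment.
ctx.writeAndFlush(tooLargeResponse).addListener(ChannelFutureListener.CLOSE);
```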

Result:

Connections are properly closed after 413, preventing stuck connections with AUTO_READ=false.

Repro:

The bug can only be reproduced with a "real" channel. The following test reproduces it, but isn't committed, since the fix (simply closing the channel) is testable with a simpler EmbeddedChannel test; a sketch of such a test follows after the repro.

```java
@Test
public void testOversizedRequestWithKeepAliveAndAutoReadFalse() throws InterruptedException {
    final CountDownLatch responseLatch = new CountDownLatch(1);
    final CountDownLatch secondRequestLatch = new CountDownLatch(1);
    final AtomicReference<HttpResponseStatus> statusRef = new AtomicReference<HttpResponseStatus>();

    NioEventLoopGroup group = new NioEventLoopGroup(2);
    try {
        ServerBootstrap sb = new ServerBootstrap();
        sb.group(group)
                .channel(NioServerSocketChannel.class)
                .childOption(ChannelOption.AUTO_READ, false)
                .childHandler(new ChannelInitializer<Channel>() {
                    @Override
                    protected void initChannel(Channel ch) {
                        ch.pipeline().addLast(new HttpServerCodec());
                        ch.pipeline().addLast(new HttpObjectAggregator(4));
                        ch.pipeline().addLast(new SimpleChannelInboundHandler<FullHttpRequest>() {
                            @Override
                            protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest msg) {
                                secondRequestLatch.countDown();
                            }
                        });
                        // Trigger the first read manually (AUTO_READ=false requires this)
                        ch.read();
                    }
                });

        Bootstrap cb = new Bootstrap();
        cb.group(group)
                .channel(NioSocketChannel.class)
                .handler(new ChannelInitializer<Channel>() {
                    @Override
                    protected void initChannel(Channel ch) {
                        ch.pipeline().addLast(new HttpClientCodec());
                        ch.pipeline().addLast(new HttpObjectAggregator(1024));
                        ch.pipeline().addLast(new SimpleChannelInboundHandler<FullHttpResponse>() {
                            @Override
                            protected void channelRead0(ChannelHandlerContext ctx, FullHttpResponse msg) {
                                statusRef.set(msg.status());
                                responseLatch.countDown();
                            }
                        });
                    }
                });

        Channel serverChannel = sb.bind(new InetSocketAddress(0)).sync().channel();
        int port = ((InetSocketAddress) serverChannel.localAddress()).getPort();
        Channel clientChannel = cb.connect(new InetSocketAddress(NetUtil.LOCALHOST, port)).sync().channel();

        HttpRequest request = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.PUT, "/upload");
        HttpUtil.setContentLength(request, 5);
        clientChannel.writeAndFlush(request);
        clientChannel.writeAndFlush(new DefaultLastHttpContent(
                Unpooled.wrappedBuffer(new byte[]{1, 2, 3, 4, 5})));

        // Server should respond with 413
        assertTrue(responseLatch.await(5, SECONDS));
        assertEquals(HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, statusRef.get());

        // Send a second request on the same connection. With the bug, the server never
        // calls ctx.read() after the 413, so this request hangs indefinitely.
        clientChannel.writeAndFlush(
                new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/next",
                        Unpooled.EMPTY_BUFFER));

        assertTrue(secondRequestLatch.await(5, SECONDS),
                "Second request was never read - channel is stuck after 413 with AUTO_READ=false");

        clientChannel.close().sync();
        serverChannel.close().sync();
    } finally {
        group.shutdownGracefully();
    }
}
```
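For reference, here is a hedged sketch of the kind of simpler EmbeddedChannel test the committed fix can be verified with. This is hypothetical, not the test that was actually committed, and it assumes the same io.netty.* and JUnit imports as the repro above.

```java
@Test
public void testChannelClosedAfterOversized413() {
    // No real socket needed: the aggregator alone decides whether to close.
    EmbeddedChannel channel = new EmbeddedChannel(new HttpObjectAggregator(4));

    // Oversized keep-alive HTTP/1.1 request: declared content length 5 > max 4,
    // so the aggregator rejects it as soon as it sees the headers.
    HttpRequest request = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.PUT, "/upload");
    HttpUtil.setContentLength(request, 5);
    channel.writeInbound(request);

    // The aggregator writes the 413 outbound.
    FullHttpResponse response = channel.readOutbound();
    assertEquals(HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, response.status());
    response.release();

    // With the fix, the channel must be closed even though the write succeeded.
    assertFalse(channel.isOpen());
    channel.finishAndReleaseAll();
}
```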

Co-authored-by: Sam Landfried <samlland@amazon.com>
(cherry picked from commit 442a8cf)

@chrisvest chrisvest added this to the 4.2.11.Final milestone Mar 3, 2026
@chrisvest chrisvest enabled auto-merge (squash) March 3, 2026 01:26
@chrisvest chrisvest merged commit abcec6d into netty:4.2 Mar 4, 2026
34 of 35 checks passed
@chrisvest chrisvest deleted the 4.2-fix-413-stuck-connection branch March 9, 2026 23:54