Detecting actual Channels idleness vs. slowness #6150

Closed
rkapsi opened this issue Dec 20, 2016 · 8 comments

rkapsi commented Dec 20, 2016

I don't know if you'd consider it a missing feature or a bug, but there appears to be no effective way of measuring write idleness. The natural choice is Netty's own IdleStateHandler, but it may not work as expected for write(). For the sake of simplicity I'm providing an HTTP example, but we encountered the problem with H2.

Basically, the user's (our) intention is to close an idle connection. IdleStateHandler uses the write's ChannelFuture to determine whether a client is idle, but it doesn't take into consideration that the client might just be slow and we've thrown a large ByteBuf at it. I think we ran into this with H2 because of things like CoalescingBufferQueue, which tries to aggregate multiple writes into a single one.

The repro is very simple. You just need to throttle your client a little bit; I'm using Chrome's built-in throttling feature for it. On the server side (the code below) we want to close the connection if the client appears to be idle for 30 seconds.

The problem is that IdleStateHandler makes a purely binary decision based on the completeness of the ChannelFuture instead of taking actual progress into consideration, and there appears to be no API for doing the latter.
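For context, here is a simplified, illustrative sketch (an approximation of the mechanism, not the actual Netty source; the class name is made up) of why the decision is binary: the timestamp the idle check relies on is only refreshed when the whole message's ChannelFuture completes, so a large response that is still trickling out to a slow client looks exactly like no write activity.

import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;

class WriteCompletionTracker extends ChannelOutboundHandlerAdapter {

  // Refreshed only when an entire outbound message has been written (or has failed).
  volatile long lastWriteTime = System.nanoTime();

  @Override
  public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
    ctx.write(msg, promise.unvoid()).addListener(new ChannelFutureListener() {
      @Override
      public void operationComplete(ChannelFuture future) {
        lastWriteTime = System.nanoTime();
      }
    });
  }

  // An idle task that compares (System.nanoTime() - lastWriteTime) against a timeout
  // cannot tell "nothing to write" apart from "still draining a huge message".
}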

I haven't looked at all Channel implementations, but NioSocketChannel and EpollSocketChannel (via AbstractEpollStreamChannel) do know the number of bytes written in each cycle. An easy fix could be for channels to sum up these values in a volatile long field and expose it to the user. Or it could be a simple increment for each write cycle in which at least one byte was written to the underlying socket. The user could then use it to assess whether any progress was made between two calls (a hypothetical sketch of what such a check could look like follows after the repro below).

import java.util.Date;
import java.util.concurrent.TimeUnit;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;
import io.netty.util.ReferenceCountUtil;

public class IdleChannelTest {

  private static final int PORT = 8080;
  
  public static void main(String[] args) throws Exception {
    
    ChannelHandler handler = new ChannelInitializer<Channel>() {
      @Override
      protected void initChannel(Channel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        
        pipeline.addLast(new HttpServerCodec());
        // reader and writer idle checks disabled (0), all-idle timeout of 30 seconds
        pipeline.addLast(new IdleStateHandler(0L, 0L, 30L, TimeUnit.SECONDS));
        pipeline.addLast(new HttpRequestAndIdleHandler());
      }
    };
    
    EventLoopGroup group = new NioEventLoopGroup();
    
    try {
      ServerBootstrap bootstrap = new ServerBootstrap()
          .channel(NioServerSocketChannel.class)
          .group(group)
          .childHandler(handler);
      
      Channel channel = bootstrap.bind(PORT)
          .syncUninterruptibly()
          .channel();
      
      try {
        System.out.println(new Date() + ": http://localhost:" + PORT + "/");
        Thread.sleep(Long.MAX_VALUE);
      } finally {
        channel.close();
      }
    } finally {
      group.shutdownGracefully();
    }
  }
  
  private static class HttpRequestAndIdleHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
      try {
        if (!(msg instanceof HttpRequest)) {
          return;
        }
        
        HttpRequest request = (HttpRequest)msg;
        
        ByteBuf content = Unpooled.wrappedBuffer(new byte[64*1024*1024]); // 64MB
        FullHttpResponse response = new DefaultFullHttpResponse(
            HttpVersion.HTTP_1_1, HttpResponseStatus.OK, content);
        
        HttpHeaders headers = response.headers();
        headers.set(HttpHeaderNames.CONTENT_LENGTH, content.readableBytes());
        headers.set(HttpHeaderNames.CONTENT_TYPE, "application/octet-stream");
        
        System.out.println(new Date() + ": Received request, writing response: " + ctx.channel() + ", " + request.uri());
        
        ChannelFuture future = ctx.writeAndFlush(response);
        future.addListener(new ChannelFutureListener() {
          @Override
          public void operationComplete(ChannelFuture future) throws Exception {
            System.out.println(new Date() + ": Write complete: " + ctx.channel());
          }
        });
      } finally {
        ReferenceCountUtil.release(msg);
      }
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
      if (evt instanceof IdleStateEvent) {
        IdleStateEvent event = (IdleStateEvent)evt;
        System.out.println(new Date() + ": Channel is idle, closing it: " + ctx.channel() + ", " + event.state());
        
        ctx.close();
      }
      
      ctx.fireUserEventTriggered(evt);
    }
  }
}
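To make the counter proposal above concrete, here is a purely hypothetical sketch of how a user could consume it in an idle check. bytesTransferred(...) does not exist in Netty; it merely stands in for the proposed volatile long counter of bytes actually written to the socket.

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.timeout.IdleStateEvent;

class ProgressAwareIdleHandler extends ChannelInboundHandlerAdapter {

  private long lastSeenBytes = -1L;

  @Override
  public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
    if (evt instanceof IdleStateEvent) {
      long bytesNow = bytesTransferred(ctx); // hypothetical lifetime byte counter
      if (bytesNow == lastSeenBytes) {
        ctx.close(); // no bytes moved since the last idle event: genuinely idle
      } else {
        lastSeenBytes = bytesNow; // slow but active peer: remember and keep waiting
      }
    }
    ctx.fireUserEventTriggered(evt);
  }

  private static long bytesTransferred(ChannelHandlerContext ctx) {
    // Placeholder for the proposed API (e.g. a counter maintained by the transport).
    throw new UnsupportedOperationException("proposed API, does not exist in Netty");
  }
}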
@Scottmitch

Whether this is a bug or a missing feature is open to interpretation ... however, if we do want to provide this functionality, I don't think we should change the existing default behavior of IdleStateHandler, for backwards-compatibility reasons.

ChannelOutboundBuffer may be able to provide the tools we need...here are some random thoughts:

  • The ChannelOutboundBuffer provides a counter of "pending bytes", but it is currently only updated at "message" (aka ByteBuf, FileRegion, etc.) granularity. Even if this were updated at a finer granularity, it would not be enough to answer the "has something changed?" question between successive calls (an ABA situation may occur).
  • An alternative approach of "save a reference to the head of the queue and the current size, then check whether these have changed after the timeout" may also be problematic because we pool objects (again ABA). However, if we couple this second approach with the completion of the future, we may be able to overcome this ABA-type problem. The completion of the future would mean we are now looking at a "different" object (even if it happens to be the same object because of pooling). A rough sketch of this second approach follows below.

It would be good to explore if we can pull this off with existing tools rather than add explicit support for it in the core (unless we think this will drive other useful features).
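For illustration, a rough sketch of the second bullet using existing public API (Channel.Unsafe#outboundBuffer(), ChannelOutboundBuffer#current() and ChannelOutboundBuffer#totalPendingWriteBytes()). The class is made up for this sketch and, on its own, is still subject to the pooling/ABA caveat unless combined with the write future as described above.

import io.netty.channel.Channel;
import io.netty.channel.ChannelOutboundBuffer;

final class OutboundProgressChecker {

  private Object lastMessage;
  private long lastPendingBytes = -1L;

  // Returns true if the channel appears to have made write progress since the last call.
  boolean hasProgressed(Channel channel) {
    ChannelOutboundBuffer buf = channel.unsafe().outboundBuffer();
    if (buf == null) {
      return false; // no outbound buffer (channel closed/deregistered): nothing pending
    }
    Object message = buf.current();                   // head of the write queue, or null if empty
    long pendingBytes = buf.totalPendingWriteBytes(); // bytes queued but not yet written out

    boolean progressed = message != lastMessage || pendingBytes != lastPendingBytes;
    lastMessage = message;
    lastPendingBytes = pendingBytes;
    return progressed;
  }
}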

@normanmaurer

@rkapsi @Scottmitch why can't this be fixed by using a ChannelProgressivePromise?


rkapsi commented Dec 21, 2016

@normanmaurer it'd work, at the expense of garbage. IdleStateHandler (or some other class) would have to create and swap out the promise in the write(...) method, plus a listener that notifies the original promise that was passed into write(...). That's a technique I'd be fine using in my own project, where I know the nature of the ChannelPipeline, but adding something like that to Netty could be risky. Ideally I'd like to avoid the garbage, though.

I think for pure idleness detection there is a difference between measuring "change" and "progress". And then there's also overall progress vs. progress in the context of a single write(...). The ChannelProgressivePromise covers the latter, but it comes at an expense. I'd be happy with an overall progress indicator (a long value that says: "This Channel has transferred this many bytes in its lifetime"). Lastly, something that indicates change would be fine too (for this purpose), but I don't think it'd be a good API addition.
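For reference, a rough sketch of the promise-swapping technique described above. The handler and field names are made up, and it allocates an extra promise and listener per write, which is exactly the garbage concern.

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelProgressiveFuture;
import io.netty.channel.ChannelProgressiveFutureListener;
import io.netty.channel.ChannelProgressivePromise;
import io.netty.channel.ChannelPromise;

class WriteProgressHandler extends ChannelOutboundHandlerAdapter {

  // Refreshed whenever some bytes of an outbound message have actually been written.
  volatile long lastProgressNanos = System.nanoTime();

  @Override
  public void write(ChannelHandlerContext ctx, Object msg, final ChannelPromise promise) {
    ChannelProgressivePromise progressive = ctx.newProgressivePromise();
    progressive.addListener(new ChannelProgressiveFutureListener() {
      @Override
      public void operationProgressed(ChannelProgressiveFuture future, long progress, long total) {
        lastProgressNanos = System.nanoTime(); // partial progress counts
      }

      @Override
      public void operationComplete(ChannelProgressiveFuture future) {
        lastProgressNanos = System.nanoTime();
        // Forward the outcome to the promise the caller originally passed in.
        if (future.isSuccess()) {
          promise.trySuccess();
        } else {
          promise.tryFailure(future.cause());
        }
      }
    });
    ctx.write(msg, progressive);
  }
}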

@Scottmitch

@rkapsi - does the approach I mentioned in #6150 (comment) work for you?


rkapsi commented Dec 21, 2016

@Scottmitch - I think No. 1 wouldn't work for the reason you described. No. 2 should work. We'd get the ChannelOutboundBuffer via Channel#unsafe()?

@Scottmitch

We'd get the ChannelOutboundBuffer via Channel#unsafe()?

Yip. Yah the second bullet is the approach I was referring to ... the first bullet was just to clarify that only using the existing "pending bytes" counter won't be sufficient.


rkapsi commented Dec 22, 2016

@Scottmitch want me to look into this? I think it can be added to IdleStateHandler with a configuration option (i.e. default will retain the current behavior).

@Scottmitch

@rkapsi - Sounds good to me. Thanks!

rkapsi pushed a commit to Squarespace/netty that referenced this issue Dec 30, 2016
Motivation

The IdleStateHandler tracks write() idleness on message granularity but does not take into consideration that the client may be just slow and has managed to consume a subset of the message's bytes in the configured period of time.

Modifications

Adding an optional configuration parameter to IdleStateHandler which tells it to observe ChannelOutboundBuffer's state.

Result

Fixes netty#6150
Scottmitch pushed a commit that referenced this issue Dec 31, 2016
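Assuming the option ended up as a leading boolean ("observeOutput") constructor argument on IdleStateHandler, as in later Netty 4.1 releases, opting in from the repro's initializer would look roughly like this (treat the exact signature as an assumption and verify against the merged change):

import java.util.concurrent.TimeUnit;

import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.timeout.IdleStateHandler;

class ObserveOutputInitializer extends ChannelInitializer<Channel> {
  @Override
  protected void initChannel(Channel ch) {
    ch.pipeline().addLast(new HttpServerCodec());
    // true = observe the ChannelOutboundBuffer for progress before firing idle events
    ch.pipeline().addLast(new IdleStateHandler(true, 0L, 0L, 30L, TimeUnit.SECONDS));
    // ... application handlers, e.g. the HttpRequestAndIdleHandler from the repro above
  }
}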