
Netty 5.0 swallowing exceptions #4721

Closed · Qix- opened this issue Jan 17, 2016 · 8 comments

Qix- commented Jan 17, 2016

Perhaps this is user error, but in my opinion this is completely unintuitive. I just sat here debugging something for a few hours; it turns out Netty has been swallowing a bunch of exceptions, even when exceptionCaught was overridden.

The following code produces no output, and the server being connected to gets no data (it does, however, receive a connection).

package com.boltery.bouncer;

import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.MessageToMessageEncoder;

import java.nio.charset.Charset;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class Maine2 {
    public static void main(String[] args) throws Exception {
        final EventLoopGroup loopGroup = new NioEventLoopGroup();
        final Bootstrap bootstrap = new Bootstrap()
                .group(loopGroup)
                .channel(NioSocketChannel.class)
                .handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(final SocketChannel socketChannel) throws Exception {
                        socketChannel.pipeline().addLast(new MessageToMessageEncoder<Object>() {
                            @Override
                            protected void encode(final ChannelHandlerContext channelHandlerContext, final Object o, final List<Object> list) throws Exception {
                                throw new Exception();
                            }

                            @Override
                            public void exceptionCaught(final ChannelHandlerContext ctx, final Throwable cause) throws Exception {
                                // never hit...
                                cause.printStackTrace();
                            }
                        });
                    }
                })
                .option(ChannelOption.SO_KEEPALIVE, true);

        final Channel ch = bootstrap.connect("127.0.0.1", 11792).await().channel();

        final ByteBuf buf = Unpooled.buffer();
        buf.retain();
        buf.writeBytes("Hello".getBytes(Charset.forName("UTF-8")));
        ch.writeAndFlush(buf);

        loopGroup.awaitTermination(2, TimeUnit.SECONDS);
    }
}

Am I missing something here? Adding a future listener and checking .isSuccess() works as expected...

ch.writeAndFlush(buf).addListener((future) -> {
    if (future.isSuccess()) {
        System.out.println("Success");
    } else {
        System.err.println("FAILED"); // correctly prints "FAILED" when run
    }
});

However, if there isn't a listener, things should crash or exceptionCaught should be called — what if a listener simply hasn't been added? This makes debugging an absolute nightmare, to the point that I would never want to use Netty in production.

@normanmaurer (Member)

exceptionCaught(...) is only called for inbound exceptions. All outbound exceptions must be handled in a listener, by design. If you always want exceptions to end up in exceptionCaught(...), just add a ChannelOutboundHandler that attaches a listener to every outbound operation.
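A minimal sketch of such a handler, assuming the Netty 4.1 API (the class name OutboundExceptionRelay is illustrative, not part of Netty): it intercepts every write and re-fires any failure as an inbound exception so the pipeline's exceptionCaught(...) handlers see it.

```java
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;

// Hypothetical helper; add it to the pipeline with addLast(...) so it
// intercepts writes before they reach the encoders closer to the head.
public class OutboundExceptionRelay extends ChannelOutboundHandlerAdapter {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        // Listen on the write's promise; if the operation fails anywhere
        // downstream, forward the cause to the exceptionCaught(...) chain.
        promise.addListener((ChannelFutureListener) future -> {
            if (!future.isSuccess()) {
                ctx.fireExceptionCaught(future.cause());
            }
        });
        ctx.write(msg, promise);
    }
}
```

Because outbound events flow tail-to-head, a relay added last sees each write's promise before the encoders run, so an exception thrown in encode(...) still fails that promise and reaches the listener.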


Qix- commented Jan 17, 2016

@normanmaurer I thought ChannelOutboundHandler was deprecated in 5.0.

@normanmaurer (Member)

And 5.0 itself is deprecated completely at the moment. Use 4.1, in which ChannelOutboundHandler was not deprecated. (In 5.0 you would have used ChannelHandler directly.)
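For comparison, under the (now-abandoned) 5.0 alpha API — where inbound and outbound callbacks were unified on ChannelHandler/ChannelHandlerAdapter — the same relay idea would look roughly like this sketch; treat the exact signatures as an assumption, since the 5.0 alphas are unmaintained:

```java
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;

// Hypothetical sketch against the 5.0 alpha API, where a single
// ChannelHandlerAdapter carries both inbound and outbound callbacks.
public class OutboundExceptionRelay extends ChannelHandlerAdapter {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        // Same pattern as in 4.1: fail-fast the write's promise into
        // the inbound exceptionCaught(...) chain.
        promise.addListener((ChannelFutureListener) future -> {
            if (!future.isSuccess()) {
                ctx.fireExceptionCaught(future.cause());
            }
        });
        ctx.write(msg, promise);
    }
}
```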



Qix- commented Jan 17, 2016

How is 5.0 "deprecated completely"? Is it not the next Alpha version?

@normanmaurer (Member)

No, we don't plan to work on 5.0 any time soon; we're concentrating on 4.0 and 4.1. All non-breaking changes were backported to 4.1.



Qix- commented Jan 17, 2016

Sad, I liked the changes that occurred in 5.0.

@normanmaurer (Member)

Can you list which ones and for what reason?



Qix- commented Jan 17, 2016

The refactoring to use Java's Executor APIs a little more — it allows better interoperability between different libraries. That was a big one.

Call me crazy, but pipeline nodes should be agnostic to whether the data they're working on is inbound or outbound. I feel Netty 5.0 started to move towards that.

One step further would be to explicitly specify which side of the duplex stream a handler should be attached to (a simple approach would be two pipelines). Having two streams (inbound/outbound) in the same "stack" (pipeline) gets confusing, and the automagic determination of which handlers should be called for inbound versus outbound data makes the API a bit messier.

I imagine it would reduce under-the-hood code as well, and (I speculate) even improve performance, since you wouldn't have to determine which handlers should be fired for each message.
