
Unhandled IllegalStateException in RedissonLock #87

Closed
jsotuyod opened this issue Nov 2, 2014 · 2 comments

Comments

jsotuyod (Contributor) commented Nov 2, 2014

I make heavy use of RLock in my app, and from time to time I find this in my logs.

2014-10-31 22:36:13.047 [nioEventLoopGroup-2-2] WARN i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.lang.IllegalStateException: complete already: DefaultPromise@75028274(incomplete)
at io.netty.util.concurrent.DefaultPromise.setSuccess(DefaultPromise.java:406) ~[netty-common-4.0.19.Final.jar:4.0.19.Final]
at org.redisson.RedissonLock$1.subscribed(RedissonLock.java:182) ~[redisson-1.1.5.jar:na]
at com.lambdaworks.redis.pubsub.RedisPubSubConnection.channelRead(RedisPubSubConnection.java:133) ~[redisson-1.1.5.jar:na]
at io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:341) [netty-transport-4.0.19.Final.jar:4.0.19.Final]
at io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:327) [netty-transport-4.0.19.Final.jar:4.0.19.Final]
at com.lambdaworks.redis.pubsub.PubSubCommandHandler.decode(PubSubCommandHandler.java:46) [redisson-1.1.5.jar:na]
at com.lambdaworks.redis.protocol.CommandHandler.channelRead(CommandHandler.java:52) [redisson-1.1.5.jar:na]
at io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:341) [netty-transport-4.0.19.Final.jar:4.0.19.Final]
at io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:327) [netty-transport-4.0.19.Final.jar:4.0.19.Final]
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.0.19.Final.jar:4.0.19.Final]
at io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:341) [netty-transport-4.0.19.Final.jar:4.0.19.Final]
at io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:327) [netty-transport-4.0.19.Final.jar:4.0.19.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:785) [netty-transport-4.0.19.Final.jar:4.0.19.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:126) [netty-transport-4.0.19.Final.jar:4.0.19.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:507) [netty-transport-4.0.19.Final.jar:4.0.19.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:464) [netty-transport-4.0.19.Final.jar:4.0.19.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378) [netty-transport-4.0.19.Final.jar:4.0.19.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350) [netty-transport-4.0.19.Final.jar:4.0.19.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-common-4.0.19.Final.jar:4.0.19.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]

jsotuyod changed the title from Unhandled IllegalStateException in RLock to Unhandled IllegalStateException in RedissonLock on Nov 2, 2014
mrniko (Member) commented Nov 8, 2014

fixed

mrniko closed this as completed Nov 8, 2014
jsotuyod (Contributor, Author) commented Nov 8, 2014

Thanks for taking the time to look into this issue. Even though the patch removes the log message, it doesn't address the root cause. I've looked deep into the problem and finally figured it out.

The issue can be consistently reproduced as follows (always using 1.1.5):

lock.lock();
lock.unlock();
lock.lock(); // place a breakpoint in Redisson::subscribe at the line connectionManager.subscribe(listener, getChannelName()); within this call

Once you hit the breakpoint at that location, restart Redis, then resume.

What will happen is that two subscribe commands are issued to Redis, and therefore the listeners are triggered twice. This happens because of the code in RedisPubSubConnection::channelRead.

The channels and patterns sets are updated when a command is executed only if at least one listener is still subscribed; if there are none, they remain untouched.
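
A minimal sketch of that pattern (illustrative only; the class and method names below are hypothetical stand-ins, not the actual 1.1.5 source — the point is the listener guard around the bookkeeping):

import java.util.Queue;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative stand-in for the listener-guarded bookkeeping in
// RedisPubSubConnection::channelRead (simplified, hypothetical names).
class PubSubBookkeepingSketch {
    final Set<String> channels = ConcurrentHashMap.newKeySet();
    final Queue<Object> listeners = new ConcurrentLinkedQueue<>();

    void onSubscribeConfirmed(String channel) {
        if (!listeners.isEmpty()) {   // state is only tracked while a
            channels.add(channel);    // listener is still registered
        }
    }

    void onUnsubscribeConfirmed(String channel) {
        if (!listeners.isEmpty()) {   // the listener was already removed by
            channels.remove(channel); // PubSubConnectionEntry::unsubscribe, so
        }                             // this branch never runs and the stale
    }                                 // channel lingers in the set
}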

Since PubSubConnectionEntry::unsubscribe removes the listener before requesting the unsubscribe from the connection, the command is executed but channels is never updated. Then, upon reconnect, RedisPubSubConnection::channelActive does:

if (channels.size() > 0) {
    subscribe(channels.toArray(new String[channels.size()])); // replays every channel still in the set, including stale ones
    channels.clear();
}

The best fix I can think of is not depending on the presence of listeners to update the internal state of the pub/sub connection. I'll send you a pull request with this in a couple of minutes; the sketch below shows the direction.
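
Roughly, the idea is to make the bookkeeping unconditional, so the sets always mirror what Redis has confirmed (again a sketch with the same hypothetical stand-in names, not the actual pull request):

// Fixed variant of the sketch above: update the sets on every confirmation
// from Redis, regardless of whether any listener is still attached.
class PubSubBookkeepingFixedSketch extends PubSubBookkeepingSketch {
    @Override
    void onSubscribeConfirmed(String channel) {
        channels.add(channel);        // unconditional bookkeeping
    }

    @Override
    void onUnsubscribeConfirmed(String channel) {
        channels.remove(channel);     // now runs even after the last listener
    }                                 // is gone, so channelActive replays nothing stale
}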

jsotuyod added a commit to Monits/redisson that referenced this issue Nov 8, 2014
 - This fixes the logs seen in redisson#87
 - Makes sure the pubsub connection is kept in the proper state, regardless
   of the order in which the clients decide to operate
   (unsubscribe / removeListener)