
Memory leak #21

Closed
kichik opened this issue Jan 10, 2015 · 6 comments
Labels
type: bug A general bug

Comments

kichik (Contributor) commented Jan 10, 2015

The following code works for a while but stops receiving messages very quickly. When run without the debugger, I actually get the following error messages:

15/01/09 21:36:50 WARN channel.DefaultChannelPipeline: An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.lang.OutOfMemoryError: Java heap space

To expedite the process, I limit the JVM heap with -Xmx32m. However, I first discovered this issue while running a production server with a 6 GB heap. It was under very low load and received a small message every 20 seconds. At some point I noticed messages were no longer being delivered, and the logs showed the same OOM errors. A heap dump showed that the buffer in com.lambdaworks.redis.protocol.CommandHandler was holding a contiguous 1 GB array. It was trying to extend the buffer but had no more large contiguous blocks left in the heap.

Oddly, I am unable to reproduce the OOM under a debugger. However, even when running in a debugger, I still see the same issue: messages stop being received after a while.

import java.util.Arrays;
import java.util.concurrent.ExecutionException;
import java.util.logging.Logger;

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisConnection;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.codec.Utf8StringCodec;
import com.lambdaworks.redis.pubsub.RedisPubSubConnectionImpl;
import com.lambdaworks.redis.pubsub.RedisPubSubListener;


public class LettuceMemoryLeak implements RedisPubSubListener<String, String>
{

    private static Logger LOGGER = Logger.getLogger( "LettuceMemoryLeak" );


    public static void main( String[] args ) throws InterruptedException, ExecutionException
    {
        RedisURI uri = RedisURI.Builder.sentinel( "localhost", 26379, "mymaster" ).build();
        RedisClient client = new RedisClient( uri );

        RedisPubSubConnectionImpl<String, String> subscriber = client.connectPubSub( new Utf8StringCodec() );
        subscriber.subscribe( "test" );
        subscriber.addListener( new LettuceMemoryLeak() );

        RedisConnection<String, String> publisher = client.connect( new Utf8StringCodec() );

        char[] chars = new char[8096];
        Arrays.fill( chars, 'a' );
        String str = new String( chars );

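        // Publish ~8 KB messages in a tight loop so the subscriber's read buffer grows quickly.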
        while( true ) {
            publisher.publish( "test", str );
        }
    }


    @Override
    public void message( String channel, String message )
    {
        LOGGER.info( channel );
    }


    @Override
    public void message( String pattern, String channel, String message )
    {
    }


    @Override
    public void subscribed( String channel, long count )
    {
        LOGGER.info( "subscribed to " + channel + " " + count );
    }


    @Override
    public void psubscribed( String pattern, long count )
    {
    }


    @Override
    public void unsubscribed( String channel, long count )
    {
    }


    @Override
    public void punsubscribed( String pattern, long count )
    {
    }

}

I'll update this ticket once I have more information.

kichik (Contributor, Author) commented Jan 10, 2015

The problem seems to be PubSubCommandHandler.decode(), which never discards read bytes from the buffer. It overrides the method in CommandHandler that calls buffer.discardReadBytes(), but never calls buffer.discardReadBytes() itself. This means the buffer keeps growing forever while messages are received from a channel subscription.
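
To make the mechanism concrete, here is a small standalone demonstration (hypothetical, not lettuce source; it only assumes Netty's ByteBuf API) of how a buffer keeps growing when consumed bytes are never discarded, and how discardReadBytes() reclaims them:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

// Hypothetical demo, not lettuce code: shows why a decode() that never calls
// discardReadBytes() makes the underlying buffer grow without bound.
public class DiscardReadBytesDemo
{
    public static void main( String[] args )
    {
        ByteBuf buffer = Unpooled.buffer();
        byte[] message = new byte[8096];

        for( int i = 0; i < 1000; i++ ) {
            buffer.writeBytes( message );        // bytes arriving from the socket
            buffer.skipBytes( message.length );  // "decoding" consumes them

            // Comment out the next line and the capacity keeps growing even
            // though every written byte has already been read.
            buffer.discardReadBytes();
        }

        System.out.println( "capacity after 1000 messages: " + buffer.capacity() );
    }
}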

Another oddity I found is that the connection is never closed on exceptions. Even if RedisStateMachine.decode() throws an IllegalStateException, the connection keeps going. Resetting the connection on unknown exceptions would at least have cleared the buffer in this case and let the application keep running, assuming the connection watchdog is enabled.
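
As a rough sketch of that idea (hypothetical, not lettuce's actual handler; only standard Netty APIs are assumed), a pipeline handler could close the channel on unexpected exceptions so a reconnect watchdog can start again from a clean state:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Hypothetical sketch, not lettuce code: closing the channel on unexpected exceptions
// drops the buffered bytes and decode state; a connection watchdog, if enabled,
// would then reconnect and resubscribe.
public class CloseOnErrorHandler extends ChannelInboundHandlerAdapter
{
    @Override
    public void exceptionCaught( ChannelHandlerContext ctx, Throwable cause )
    {
        ctx.close();
    }
}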

kichik added a commit to kichik/lettuce that referenced this issue Jan 10, 2015
read buffer bytes need to be discarded by decode()
mp911de added the type: bug (A general bug) label Jan 11, 2015
mp911de added a commit that referenced this issue Jan 11, 2015
mp911de (Collaborator) commented Jan 11, 2015

Thanks for the memory leak fix. I'll make up my mind about exceptions on connections and how exceptions should affect the overall state.

kichik (Contributor, Author) commented Jan 11, 2015

Thanks!

mp911de added a commit that referenced this issue Jan 11, 2015
mp911de (Collaborator) commented Jan 11, 2015

Released lettuce 3.0.2.Final to Maven Central.

kichik closed this as completed Jan 24, 2015
mp911de (Collaborator) commented Jan 24, 2015

Thx. An experimental feature could be closing the connection on internal errors, as you mentioned. Will track it in #25

mp911de (Collaborator) commented Feb 17, 2015

I found a better approach than closing the connection: resetting the internal state. I updated #25 accordingly.
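
For illustration only (names and structure are assumptions, not the actual change tracked in #25), "resetting the internal state" amounts to dropping any partially decoded reply and the accumulated buffer while keeping the channel open:

import io.netty.buffer.ByteBuf;

// Hypothetical illustration, not lettuce source: on a protocol error, forget the
// half-decoded reply and discard buffered bytes instead of closing the connection.
public class DecoderState
{
    private Object partialOutput; // stand-in for a partially decoded reply

    public void resetAfterProtocolError( ByteBuf buffer )
    {
        partialOutput = null; // drop the half-decoded reply
        buffer.clear();       // reader/writer index back to 0, buffered bytes discarded
        // The channel stays open, so subscriptions and pending commands do not
        // have to be re-established by a reconnect watchdog.
    }
}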
