java.util.concurrent.TimeoutException #30

Closed
ovonick opened this Issue Apr 6, 2015 · 9 comments

@ovonick

ovonick commented Apr 6, 2015

We are seeing bursts of this exception in production for various memcache operations. Stack traces look similar to this:

java.util.concurrent.TimeoutException: Timed out(5000 milliseconds) waiting for operation while connected to 127.0.0.1:11211
    at net.rubyeye.xmemcached.XMemcachedClient.latchWait(XMemcachedClient.java:2536)
    at net.rubyeye.xmemcached.XMemcachedClient.sendStoreCommand(XMemcachedClient.java:2498)
    at net.rubyeye.xmemcached.XMemcachedClient.set(XMemcachedClient.java:1338)
    at net.rubyeye.xmemcached.XMemcachedClient.set(XMemcachedClient.java:1408)

We could not pinpoint the exact cause, but one suspect is how XMemcached handles server responses.

One situation that can be reproduced in a standalone Java project is how the XMemcached client handles server responses for values larger than 1 MB (the default maximum item size in memcached).

In the code below our expectation is to get a MemcachedServerException with the message "object too large for cache" from the "set" operation, but instead we are getting java.util.concurrent.TimeoutException.

It is worth noting that the spymemcached client, used in the same code, handles the server response properly and returns "SERVER_ERROR object too large for cache".
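For reference, the server-side behavior can be observed directly over the memcached text protocol. This is a sketch of such a session, assuming a memcached daemon with the default 1 MB item size limit on localhost:11211 (the payload line is elided):

```
$ telnet localhost 11211
set largeObject 0 60 1048577
<1048577 bytes of payload>
SERVER_ERROR object too large for cache
```

So the server does send an error line for the oversized item; the question is whether the client decodes it or lets the operation time out.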

import net.rubyeye.xmemcached.MemcachedClient;
import net.rubyeye.xmemcached.XMemcachedClientBuilder;
import net.rubyeye.xmemcached.transcoders.SerializingTranscoder;
import net.rubyeye.xmemcached.utils.AddrUtil;

import java.io.IOException;
import java.util.Arrays;

public class LargeObjectsWithXMemcachedClient {

    private static final String KEY_LARGE_OBJECT = "largeObject";

    public static void main(String[] args) throws IOException {
        int megabyte_plus1 = 1048577; //1024 * 1024 + 1

        System.out.println("Building xmemcached client");

        XMemcachedClientBuilder builder = new XMemcachedClientBuilder(AddrUtil.getAddresses("localhost:11211"));
        MemcachedClient client = builder.build();

        // making sure that payload does not get compressed and client does not throw exception on max size limit
        // so that large value gets sent to memcached server. We expect xmemcached client to
        // throw MemcachedServerException with message "object too large for cache"
        SerializingTranscoder transcoder = new SerializingTranscoder(megabyte_plus1 * 2); // something bigger than memcached daemon max value size.
        transcoder.setCompressionThreshold(transcoder.getMaxSize()); // bumping up compression threshold so that xmemcached client does not compress.

        try {
            String largeObject = createString(megabyte_plus1);

            System.out.println("set " + KEY_LARGE_OBJECT);
            client.set(KEY_LARGE_OBJECT, 60, largeObject, transcoder);

            String readLargeObject = client.get(KEY_LARGE_OBJECT);
            System.out.println("get " + KEY_LARGE_OBJECT + ": " + (readLargeObject == null ? "does not exist in cache" : "size() = " + readLargeObject.length()));

            System.out.println("done");
        } catch (Exception exception) {
            exception.printStackTrace();
        } finally {
            System.out.println("shutting down memcached client");
            client.shutdown();
        }
    }

    private static String createString(int size) {
        char[] chars = new char[size];
        Arrays.fill(chars, 'f');
        return new String(chars);
    }
}
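As a client-side mitigation sketch, independent of any library fix, the serialized payload can be size-checked before calling set. This assumes the server runs with memcached's default 1 MB item size limit; MemcachedSizeGuard and fitsInCache are hypothetical names, not part of xmemcached:

```java
public class MemcachedSizeGuard {

    // memcached's default maximum item size is 1 MB (1048576 bytes);
    // it is configurable on the server via the -I flag.
    static final int DEFAULT_MAX_ITEM_SIZE = 1024 * 1024;

    // Returns true if the already-serialized value would be accepted
    // by a server running with the default item size limit.
    static boolean fitsInCache(byte[] serializedValue) {
        return serializedValue.length <= DEFAULT_MAX_ITEM_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(fitsInCache(new byte[DEFAULT_MAX_ITEM_SIZE]));     // true
        System.out.println(fitsInCache(new byte[DEFAULT_MAX_ITEM_SIZE + 1])); // false
    }
}
```

Note that the check must be applied to the serialized byte length, not the string length, since a transcoder may add flags or encode characters as multiple bytes.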

@killme2008 killme2008 self-assigned this Sep 28, 2016

@killme2008 killme2008 added this to the 2.1.1 milestone Sep 28, 2016

@killme2008


Owner

killme2008 commented Oct 1, 2016

Hi, I am sorry it took so long to look into this issue.

But I can't reproduce it with the LargeObjectsWithXMemcachedClient code above using xmemcached 2.1.0 (the latest version). It throws the correct exception:

net.rubyeye.xmemcached.exception.MemcachedServerException: object too large for cache,key=largeObject
    at net.rubyeye.xmemcached.command.Command.decodeError(Command.java:262)
    at net.rubyeye.xmemcached.command.Command.decodeError(Command.java:279)
    at net.rubyeye.xmemcached.command.text.TextStoreCommand.decode(TextStoreCommand.java:120)
    at net.rubyeye.xmemcached.codec.MemcachedDecoder.decode0(MemcachedDecoder.java:61)
    at net.rubyeye.xmemcached.codec.MemcachedDecoder.decode(MemcachedDecoder.java:56)
    at com.google.code.yanf4j.nio.impl.NioTCPSession.decode(NioTCPSession.java:297)
    at com.google.code.yanf4j.nio.impl.NioTCPSession.decodeAndDispatch(NioTCPSession.java:237)
    at com.google.code.yanf4j.nio.impl.NioTCPSession.readFromBuffer(NioTCPSession.java:207)
    at com.google.code.yanf4j.nio.impl.AbstractNioSession.onRead(AbstractNioSession.java:196)
    at com.google.code.yanf4j.nio.impl.AbstractNioSession.onEvent(AbstractNioSession.java:341)
    at com.google.code.yanf4j.nio.impl.SocketChannelController.dispatchReadEvent(SocketChannelController.java:56)
    at com.google.code.yanf4j.nio.impl.NioController.onRead(NioController.java:157)
    at com.google.code.yanf4j.nio.impl.Reactor.dispatchEvent(Reactor.java:323)
    at com.google.code.yanf4j.nio.impl.Reactor.run(Reactor.java:180)

In 2.1.0 I tuned the performance of the binary protocol implementation. The binary protocol in older xmemcached versions was not good enough; its performance was lower than that of the text protocol command factory. But in 2.1.0 it catches up.

You may want to try it when you have time. And if you have more information about this issue, please let me know, thanks.

@killme2008


killme2008 commented Oct 1, 2016

Please ignore my last comment; I reproduced the issue at last, and I am still looking into why it happens.

If I find the bug, I will fix it ASAP.

Thanks for your report.

@killme2008 killme2008 closed this in 6d6cd8f Oct 1, 2016

killme2008 added a commit that referenced this issue Oct 1, 2016

@killme2008


killme2008 commented Oct 1, 2016

I fixed this issue, and I will release a new version ASAP.

Thanks a lot for your report.

@ovonick


ovonick commented Oct 16, 2016

Thank you for fixing it.

Based on your assessment, does the fix address only the specific "object too large for cache" case, or is it broader and may address other TimeoutException scenarios as well?

@killme2008


killme2008 commented Oct 17, 2016

@ovonick

Hi, it fixes all TimeoutException scenarios in both the binary and text protocols. Thanks a lot for this issue; it was really an old bug in xmemcached.

I will release the new version ASAP after I fix #37.

@sudhakv


sudhakv commented Oct 26, 2016

Hi - Do you have an estimated date for this release please? It is one that has been plaguing us for over a year now and we have a really critical release in November for which we desperately need this fix. Is there any way we can persuade you to release this immediately?

@killme2008


killme2008 commented Oct 27, 2016

I released 2.2.0.
It may take some time to be synchronized to the Maven central repository. Hope it helps.

@killme2008


killme2008 commented Oct 27, 2016

I can release a minor version today.


@sudhakv


sudhakv commented Nov 1, 2016

Much appreciated - we picked it up. Thank you very much!
