item_size_max support #8

Closed
wants to merge 4 commits

3 participants

@bmatheny

This patch causes object sizes to be checked before forwarding the request to the appropriate server. If the object is larger than item_size_max (which defaults to 1048576), the proxy closes the connection, which is also what memcached does. The proxy does NOT respond with a SERVER_ERROR, mostly because I couldn't determine a reasonable way to do that from the req_filter.

In a number of client libraries, compression occurs within the library, so you don't know whether a value exceeds the maximum object size until after it leaves the client. This allows you to pass 'large' items into a client method without sending the request across the wire if it would fail.
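
A minimal sketch of the kind of check this describes, using hypothetical names (the patch's actual function and field names may differ)::

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: vlen is the total value length of the parsed
     * request, item_size_max the configured limit (default 1048576). */
    static bool
    req_value_oversize(uint32_t vlen, uint32_t item_size_max)
    {
        return vlen > item_size_max;
    }

    /* When this returns true, the request is filtered out and the client
     * connection is closed, mirroring memcached, instead of replying
     * with SERVER_ERROR. */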

@travisbot

This pull request passes (merged 9a17a01 into b85ac82).

@travisbot

This pull request passes (merged f34559a into b85ac82).

@bmatheny

One additional comment. Based on feedback from Manju, I moved the old vlen values to vlen_rem and made vlen an immutable value representing the total size of the value. This allows a much cleaner check in the req_filter of whether the value exceeds the configured item_size_max.
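
A minimal sketch of the vlen / vlen_rem split described here (the two field names come from the comment; the surrounding structure is assumed)::

    #include <stdint.h>

    /* Illustrative only, not the patch's actual struct layout. */
    struct value_len_state {
        uint32_t vlen;      /* total value length: set once when the request
                               is parsed and never modified afterwards */
        uint32_t vlen_rem;  /* value bytes still to be read: counts down as
                               data arrives from the client */
    };

    /* Because vlen stays fixed, the req_filter can compare it against
     * item_size_max directly instead of reconstructing the total from the
     * remaining byte count. */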

@manjuraj

Blake, the patch looks great. One thing I realized is that memcached has some item header and slab header overhead even for the largest item. So even if you configure memcached with a 1MB slab, the largest item that can be stored in that slab is < 1MB. Furthermore, the item size includes not only the value length but also the key length.

Given this, do you think we should have two extra keys in the yml configuration:

item_max_kvlen: (maximum key + value length)
item_overhead: (per-item overhead)

and discard requests whose key + value length + overhead > item_max_kvlen?

thoughts?
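
To make the proposal concrete, a hedged sketch of such a check; the key names item_max_kvlen and item_overhead come from the comment above, everything else is assumed::

    #include <stdbool.h>
    #include <stdint.h>

    /* Discard a request when key length + value length + per-item
     * overhead exceeds the configured item_max_kvlen. Illustrative only. */
    static bool
    req_exceeds_kvlen(uint32_t klen, uint32_t vlen,
                      uint32_t item_overhead, uint32_t item_max_kvlen)
    {
        return (uint64_t)klen + vlen + item_overhead > item_max_kvlen;
    }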

@bmatheny

I'm going to close this and reopen it with the changes you recommended and the merge conflicts resolved.

@bmatheny bmatheny closed this Nov 20, 2012
@idning added a commit to idning/twemproxy that referenced this pull request on May 25, 2014
nutcracker core on heavy multi-del
Today we got a core dump of twemproxy::

    $ gdb -c core.14420 ./bin/nutcracker

    (gdb) bt
    #0  0x000000302af2e2ed in raise () from /lib64/tls/libc.so.6
    #1  0x000000302af2fa3e in abort () from /lib64/tls/libc.so.6
    #2  0x0000000000419c82 in nc_assert (cond=0x444dc0 "!TAILQ_EMPTY(&send_msgq) && nsend != 0", file=0x444aa8 "nc_message.c", line=745, panic=1) at nc_util.c:308
    #3  0x000000000040d0d6 in msg_send_chain (ctx=0x553090, conn=0x55b380, msg=0x0) at nc_message.c:745
    #4  0x000000000040d568 in msg_send (ctx=0x553090, conn=0x55b380) at nc_message.c:820
    #5  0x00000000004059af in core_send (ctx=0x553090, conn=0x55b380) at nc_core.c:173
    #6  0x0000000000405ffe in core_core (arg=0x55b380, events=65280) at nc_core.c:301
    #7  0x0000000000429297 in event_wait (evb=0x5652e0, timeout=389) at nc_epoll.c:269
    #8  0x000000000040606f in core_loop (ctx=0x553090) at nc_core.c:316
    #9  0x000000000041b109 in nc_run (nci=0x7fbfffea80) at nc.c:530
    #10 0x000000000041b20d in main (argc=14, argv=0x7fbfffecc8) at nc.c:579
    (gdb) f 3
    #3  0x000000000040d0d6 in msg_send_chain (ctx=0x553090, conn=0x55b380, msg=0x0) at nc_message.c:745
    745         ASSERT(!TAILQ_EMPTY(&send_msgq) && nsend != 0);
    (gdb) l
    740             if (msg == NULL) {
    741                 break;
    742             }
    743         }
    744
    745         ASSERT(!TAILQ_EMPTY(&send_msgq) && nsend != 0);
    746
    747         conn->smsg = NULL;
    748
    749         n = conn_sendv(conn, &sendv, nsend);

It is caused by the ``ASSERT`` at nc_message.c:745.

``msg_send_chain`` sends no more than ``NC_IOV_MAX`` (128) pieces per call to ``conn_sendv``.

If the first fragment of the MULTI-DEL response was sent in the last batch, and it is the last msg in the send queue, the next call of ``msg_send_chain`` will get ``nsend == 0``.

The following test shows such a case:

1. an mget on ``126`` keys
2. a multi-del command

::

    import redis

    def test_multi_delete_20140525():
        # 126 mget keys plus the multi-del reply line up with the
        # NC_IOV_MAX (128) batching in msg_send_chain
        conn = redis.Redis('127.0.0.5', 4100)
        cnt = 126
        keys = ['key-%s' % i for i in range(cnt)]
        pipe = conn.pipeline(transaction=False)
        pipe.mget(keys)
        pipe.delete(*keys)
        print(pipe.execute())

see: https://github.com/idning/test-twemproxy/blob/master/test_redis/test_del.py#L56-L63

more detail: http://idning.github.io/twemproxy-core-20140523.html
d5ec284
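
For reference, one way the ``nsend == 0`` case could be tolerated instead of tripping the assertion is sketched below; this is only an illustration and not necessarily the change made in the commit above::

    /* Inside msg_send_chain (illustrative): if the batching loop found
     * nothing left to send because the last fragment already went out in
     * the previous NC_IOV_MAX-sized batch, bail out instead of asserting. */
    if (nsend == 0) {
        conn->smsg = NULL;
        return NC_OK;
    }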