Commits on Jan 2, 2015
  1. Set optional 'static' for Quicklist+Redis

    mattsta committed Dec 20, 2014
    This also defines REDIS_STATIC='' for building everything
    inside src/ and everything inside deps/lua/.
  2. Add more quicklist info to DEBUG OBJECT

    mattsta committed Dec 19, 2014
    Adds: ql_compressed (boolean, 1 if compression enabled for list, 0 otherwise)
    Adds: ql_uncompressed_size (actual uncompressed size of all quicklistNodes)
    Adds: ql_ziplist_max (quicklist max ziplist fill factor)
    Compression ratio of the list is then ql_uncompressed_size / serializedlength
    We report ql_uncompressed_size for all quicklists because serializedlength
    is a _compressed_ representation anyway.
    Sample output from a large list:
    > llen abc
    (integer) 38370061
    > debug object abc
    Value at:0x7ff97b51d140 refcount:1 encoding:quicklist serializedlength:19878335 lru:9718164 lru_seconds_idle:5 ql_nodes:21945 ql_avg_node:1748.46 ql_ziplist_max:-2 ql_compressed:0 ql_uncompressed_size:1643187761
    The 1.36s result time is because rdbSavedObjectLen() is serializing the
    object, not because of any new stats reporting.
    If we run DEBUG OBJECT on a compressed list, DEBUG OBJECT takes almost *zero*
    time because rdbSavedObjectLen() reuses already-compressed ziplists:
    > debug object abc
    Value at:0x7fe5c5800040 refcount:1 encoding:quicklist serializedlength:19878335 lru:9718109 lru_seconds_idle:5 ql_nodes:21945 ql_avg_node:1748.46 ql_ziplist_max:-2 ql_compressed:1 ql_uncompressed_size:1643187761
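    As a sanity check on the formula above, the numbers in the sample output give roughly an 82x ratio. A throwaway helper (not part of Redis) makes the arithmetic explicit:

```c
#include <assert.h>

/* Throwaway helper: the compression ratio described above,
 * ql_uncompressed_size / serializedlength. */
static double ql_compression_ratio(long long uncompressed_size,
                                   long long serialized_length) {
    return (double)uncompressed_size / (double)serialized_length;
}
```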
  3. Config: Add quicklist, remove old list options

    mattsta committed Dec 16, 2014
    This removes:
      - list-max-ziplist-entries
      - list-max-ziplist-value
    This adds:
      - list-max-ziplist-size
      - list-compress-depth
    Also updates config file with new sections and updates
    tests to use quicklist settings instead of old list settings.
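    For reference, the new settings look roughly like this in redis.conf (comments and values here are illustrative):

```
# Lists are encoded as quicklists: linked lists of ziplist nodes.
# Negative values pick a per-node size limit (-2 = 8 Kb); positive values
# limit the number of entries per node.
list-max-ziplist-size -2

# Number of quicklist nodes left uncompressed at each end of the list.
# 0 disables list compression entirely.
list-compress-depth 0
```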
  4. Add branch prediction hints to quicklist

    mattsta committed Dec 16, 2014
    Actually makes a noticeable difference.
    Branch hints were selected based on profiler hotspots.
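    The hints themselves are the usual __builtin_expect wrappers; a minimal sketch of the pattern (the macro definitions follow common practice, and checked_add() is an illustrative example, not actual quicklist code):

```c
#include <assert.h>
#include <stddef.h>

/* GCC/Clang branch-prediction hints in the usual form. */
#if defined(__GNUC__)
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
#else
#define likely(x)   (x)
#define unlikely(x) (x)
#endif

static int checked_add(int a, int b, int *out) {
    if (unlikely(out == NULL)) return -1;  /* rare error path */
    *out = a + b;                          /* hot path */
    return 0;
}
```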
  5. Cleanup quicklist style

    mattsta committed Dec 30, 2014
    Small fixes due to a new version of clang-format (it's less
    crazy than the older version).
  6. Allow compression of interior quicklist nodes

    mattsta committed Dec 11, 2014
    Let user set how many nodes to *not* compress.
    We can specify a compression "depth" of how many nodes
    to leave uncompressed on each end of the quicklist.
    Depth 0 = disable compression.
    Depth 1 = only leave head/tail uncompressed.
      - (read as: "skip 1 node on each end of the list before compressing")
    Depth 2 = leave head, head->next, tail->prev, tail uncompressed.
      - ("skip 2 nodes on each end of the list before compressing")
    Depth 3 = Depth 2 + head->next->next + tail->prev->prev
      - ("skip 3 nodes...")
    This also:
      - updates RDB storage to use native quicklist compression (if node is
        already compressed) instead of uncompressing, generating the RDB string,
        then re-compressing the quicklist node.
      - internalizes the "fill" parameter for the quicklist so we don't
        need to pass it to _every_ function.  Now it's just a property of
        the list.
      - allows a runtime-configurable compression option, so we can
        expose a compression parameter in the configuration file if people
        want to trade slight request-per-second performance for up to 90%+
        memory savings in some situations.
      - updates the quicklist tests to do multiple passes: 200k+ tests now.
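    The depth rules above can be sketched as a predicate over a node's position (a hypothetical helper for illustration, not the actual quicklist API):

```c
#include <assert.h>

/* Hypothetical helper: given a node's 0-based position from the head,
 * the total node count, and the compress depth, decide whether that
 * node stays uncompressed. */
static int node_is_uncompressed(int index, int count, int depth) {
    if (depth == 0) return 1;              /* depth 0: compression disabled */
    if (index < depth) return 1;           /* within 'depth' of the head */
    if (index >= count - depth) return 1;  /* within 'depth' of the tail */
    return 0;
}
```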
  7. Add quicklist info to DEBUG OBJECT

    mattsta committed Dec 11, 2014
    Added fields 'ql_nodes' and 'ql_avg_per_node'.
    ql_nodes is the number of quicklist nodes in the quicklist.
    ql_avg_per_node is the average fill level of each quicklist node (LLEN / ql_nodes).
    Sample output:
    > DEBUG object b
    Value at:0x7fa42bf2fed0 refcount:1 encoding:quicklist serializedlength:18489 lru:8983768 lru_seconds_idle:3 ql_nodes:430 ql_avg_per_node:511.73
    > llen b
    (integer) 220044
  8. Remove malloc failure checks

    mattsta committed Dec 11, 2014
    We trust zmalloc to kill the whole process on memory failure.
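    A minimal sketch of that policy (simplified; the real zmalloc.c also tracks used memory): abort the whole process on allocation failure, so callers never need to check for NULL.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Simplified sketch of the zmalloc policy: on allocation failure, print
 * an error and abort the process, so callers never check for NULL. */
static void *zmalloc(size_t size) {
    void *ptr = malloc(size);
    if (ptr == NULL) {
        fprintf(stderr, "zmalloc: Out of memory allocating %zu bytes\n", size);
        abort();
    }
    return ptr;
}
```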
  9. Increase test size for migrating large values

    mattsta committed Dec 10, 2014
    Previously, the old test ran 5,000 loops and used about 500k.
    With quicklist, storing those same 5,000 loops takes up 24k, so the
    "large value check" failed!
    This increases the test to 20,000 loops which makes the object dump 96k.
  10. Convert quicklist RDB to store ziplist nodes

    mattsta committed Dec 10, 2014
    Turns out it's a huge improvement during save/reload/migrate/restore
    because, with compression enabled, we're compressing 4k or 8k
    chunks of data consisting of multiple elements in one ziplist
    instead of compressing series of smaller individual elements.
  11. Convert RDB ziplist loading to sdsnative()

    mattsta committed Dec 10, 2014
    This saves us an unnecessary zmalloc, memcpy, and two frees.
  12. Add sdsnative()

    mattsta committed Dec 10, 2014
    Use the existing memory space for an SDS to convert it to a regular
    character buffer so we don't need to allocate duplicate space just
    to extract a usable buffer for native operations.
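    The idea can be sketched on a simplified sds header (the real sds layout and the actual sdsnative() signature may differ; sds_new() and sds_to_native() here are illustrative):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified sds layout: a small header, then the string bytes. */
struct sdshdr { int len; int free; char buf[]; };

static char *sds_new(const char *init) {
    size_t len = strlen(init);
    struct sdshdr *sh = malloc(sizeof(*sh) + len + 1);
    sh->len = (int)len;
    sh->free = 0;
    memcpy(sh->buf, init, len + 1);
    return sh->buf;
}

/* The sdsnative() idea: reuse the sds allocation itself by sliding the
 * bytes over the header, instead of zmalloc+memcpy into a new buffer.
 * The result is the original allocation; free() it when done. */
static char *sds_to_native(char *s) {
    struct sdshdr *sh = (struct sdshdr *)(s - sizeof(struct sdshdr));
    size_t len = (size_t)sh->len;
    memmove(sh, s, len);
    ((char *)sh)[len] = '\0';
    return (char *)sh;
}
```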
  13. Add adaptive quicklist fill factor

    mattsta committed Nov 26, 2014
    Fill factor now has two options:
      - negative (-1 through -5) for size-based ziplist filling
      - positive for length-based ziplist filling with implicit size cap.
    Negative offsets define ziplist size limits of:
      -1: 4k
      -2: 8k
      -3: 16k
      -4: 32k
      -5: 64k
    Positive offsets now automatically limit their max size to 8k.  Any
    elements larger than 8k will be in individual nodes.
    Positive ziplist fill factors will keep adding elements
    to a ziplist until one of:
      - ziplist has FILL number of elements
        - or -
      - ziplist grows above our ziplist max size (currently 8k)
    When using positive fill factors, if you insert a large
    element (over 8k), that element will automatically allocate
    an individual quicklist node with one element and no other elements will be
    in the same ziplist inside that quicklist node.
    When using negative fill factors, elements up to the size
    limit can be added to one quicklist node.  If an element
    is added larger than the max ziplist size, that element
    will be allocated an individual ziplist in a new quicklist node.
    Tests also updated to start testing at fill factor -5.
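    The two fill modes above can be sketched as a single admission check (a simplified model; the real check also accounts for the bytes the new element itself would add):

```c
#include <assert.h>
#include <stddef.h>

/* Size limits selected by negative fill factors, per the table above. */
static const size_t optimization_level[] = {4096, 8192, 16384, 32768, 65536};

#define SIZE_SAFETY_LIMIT 8192  /* implicit cap for positive fill factors */

/* Simplified model: may another element join a ziplist currently holding
 * 'count' elements in 'sz' bytes, under fill factor 'fill'? */
static int ziplist_allows_insert(int fill, size_t sz, int count) {
    if (fill < 0) {
        int idx = -fill - 1;  /* -1 -> 4k, -2 -> 8k, ... -5 -> 64k */
        if (idx > 4) idx = 4;
        return sz <= optimization_level[idx];  /* size-based */
    }
    /* length-based, with the implicit 8k size cap */
    return count < fill && sz <= SIZE_SAFETY_LIMIT;
}
```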
  14. Free ziplist test lists during tests

    mattsta committed Nov 22, 2014
    Freeing our test lists helps keep valgrind output clean
  15. Add ziplistMerge()

    mattsta committed Nov 21, 2014
    This started out as antirez#2158 by sunheehnus, but I kept rewriting it
    until I could understand things more easily and get a few more
    correctness guarantees out of the readability flow.
    The original commit created and returned a new ziplist with the contents of
    both input ziplists, but I prefer to grow one of the input ziplists
    and destroy the other one.
    So, instead of malloc+copy as in antirez#2158, the merge now reallocs one of
    the existing ziplists and copies the other ziplist into the new space.
    Also added merge test cases to ziplistTest()
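    The realloc-and-append strategy can be illustrated on plain byte buffers (a sketch of the memory strategy only; the real ziplistMerge must also splice headers, tail offsets, and end markers):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the merge strategy: grow the first buffer with realloc and
 * append the second into the new space, then destroy the second, instead
 * of malloc'ing a third buffer and copying both. */
static unsigned char *buf_merge(unsigned char *first, size_t first_len,
                                unsigned char *second, size_t second_len,
                                size_t *merged_len) {
    unsigned char *merged = realloc(first, first_len + second_len);
    if (merged == NULL) return NULL;  /* on failure 'first' is untouched */
    memcpy(merged + first_len, second, second_len);
    free(second);                     /* the source buffer is consumed */
    *merged_len = first_len + second_len;
    return merged;
}
```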
  16. Add quicklist implementation

    mattsta committed Nov 13, 2014
    This replaces individual ziplist vs. linkedlist representations
    for Redis list operations.
    Big thanks for all the reviews and feedback from everybody in
Commits on Dec 23, 2014
  1. Cleanup ziplist valgrind warnings

    mattsta committed Nov 13, 2014
    Valgrind can't detect that 'memset' initializes things, so let's
    statically initialize them to remove some unnecessary warnings.
  2. Fix ziplist test for pop()

    mattsta committed Nov 13, 2014
    The previous test wasn't returning the new ziplist, so the test
    was invalid.  Now the test works properly.
    These problems were simultaneously discovered in antirez#2154 and that
    PR also had an additional fix we included here.
  3. Fix ziplistDeleteRange index parameter

    mattsta committed Nov 13, 2014
    It's valid to delete from negative offsets, so we *don't*
    want unsigned arguments here.
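    A one-liner shows why the parameter must be signed (illustrative, not the actual ziplist code): negative offsets count from the tail, so -1 means the last element, while an unsigned parameter would silently turn -1 into a huge positive index.

```c
#include <assert.h>

/* Illustrative index normalization: negative offsets count from the
 * tail, so this only works if 'index' is a signed type. */
static long normalize_index(long index, long length) {
    if (index < 0) index += length;
    return index;
}
```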
  4. Fix how zipEntry returns values

    mattsta committed Nov 14, 2014
    zipEntry was returning a struct, but that caused some
    problems with tests under 32 bit builds.
    The tests run better if we operate on structs allocated in the
    caller without worrying about copying on return.
  5. Add simple ll2string() tests

    mattsta committed Nov 20, 2014
  6. Allow all code tests to run using Redis args

    mattsta committed Nov 13, 2014
    Previously, many files had individual main() functions for testing,
    but each required being compiled with their own testing flags.
    That gets difficult when you have 8 different flags you need
    to set just to run all tests (plus, some test files required
    other files to be compiled against them, and it seems some didn't
    build at all without including the rest of Redis).
    Now all individual test main() functions are renamed to a test
    function for the file itself and one global REDIS_TEST define enables
    testing across the entire codebase.
    Tests can now be run with:
      - `./redis-server test <test>`
      e.g. ./redis-server test ziplist
    If REDIS_TEST is not defined, no test code is compiled into the
    final redis-server binary at all.
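    The mechanism can be sketched like this (names and location simplified; the real dispatch and test functions live in the Redis sources):

```c
#include <assert.h>
#include <string.h>

#define REDIS_TEST  /* normally supplied by the build, hardcoded here */

#ifdef REDIS_TEST
static int ziplistTest(void) { return 0; }  /* stand-in for the real test */

/* Sketch of the dispatch behind `./redis-server test <name>`: each
 * file's old main() becomes a test function selected by name, and none
 * of this is compiled without REDIS_TEST. */
static int run_test(const char *name) {
    if (strcmp(name, "ziplist") == 0) return ziplistTest();
    return -1;  /* unknown test name */
}
#endif
```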
  7. Remove ziplist compiler warnings

    mattsta committed Nov 8, 2014
    These warnings only happen when compiling with the test define.
  8.

    mattsta committed Nov 16, 2014
    Uses jemalloc function malloc_stats_print() to return
    stats about what jemalloc has allocated internally.
  9. Add addReplyBulkSds() function

    mattsta committed Nov 16, 2014
    Refactor a common pattern into one function so we don't
    end up with copy/paste programming.
  10. INFO loading stats: three fixes.

    antirez committed Dec 23, 2014
    1. Server unixtime may not be updated while loading the AOF, so the ETA is
    not updated correctly.
    2. Number of processed bytes was not initialized.
    3. Possible division by zero condition (likely cause of issue antirez#1932).
  11. Merge pull request antirez#2227 from mattsta/fix/trib/assignment/mast…

    antirez committed Dec 23, 2014
    Improve redis-trib replica assignment
  12. Merge pull request antirez#2234 from mattsta/feature/sentinel-info-ca…

    antirez committed Dec 23, 2014
    Add 'age' value to SENTINEL INFO-CACHE
Commits on Dec 22, 2014
  1. Merge pull request antirez#2229 from advance512/spopWithCount

    antirez committed Dec 22, 2014
    Memory leak fixes (+ code style fixes)