Commits on Dec 14, 2012
  1. @antirez

    serverCron() frequency is now a runtime parameter (was REDIS_HZ).

    antirez authored
    REDIS_HZ is the frequency our serverCron() function is called with.
    A more frequent call to this function results in lower latency when the
    server is handling very expensive background operations, like the mass
    expiration of a lot of keys at the same time.
    
    Redis 2.4 used to have an HZ of 10. This was good enough with almost
    every setup, but the incremental key expiration algorithm was working a
    bit better under *extreme* pressure when HZ was set to 100 for Redis
    2.6.
    
    However, for most users a latency spike of 30 milliseconds when millions
    of keys are expiring at the same time is acceptable. On the other hand, a
    default HZ of 100 in Redis 2.6 was causing idle instances to use some
    CPU time compared to Redis 2.4. The CPU usage was on the order of 0.3%
    for an idle instance, which is a shame, as the server consumes more
    energy, if not other important resources.
    
    This commit introduces HZ as a runtime parameter, that can be queried by
    INFO or CONFIG GET, and can be modified with CONFIG SET. At the same
    time the default frequency is set back to 10.
    
    In this way we default to a sane value of 10, but allow users to
    easily switch to values up to 500 for near real-time applications if
    needed and if they are willing to pay this small CPU usage penalty.
Commits on Dec 12, 2012
  1. @antirez

    Merge pull request #824 from ptjm/unstable

    antirez authored
    Define _XOPEN_SOURCE appropriately on NetBSD.
  2. Define _XOPEN_SOURCE appropriately on NetBSD.

    Patrick TJ McPhee authored
Commits on Dec 11, 2012
  1. @antirez

    Fix config.h endianness detection to work on Linux / PPC64.

    antirez authored
    Config.h performs endianness detection by including OS-specific headers
    to define the endianness macros or, when this is not possible, by
    checking the processor type via ifdefs.
    
    Sometimes when the OS-specific header is included, only __BYTE_ORDER is
    defined, while BYTE_ORDER remains undefined. There is code at the end of
    the config.h endianness detection that defines the macros without the
    underscore, but it was not working correctly.
    
    This commit fixes endianness detection, fixing Redis on Linux / PPC64
    and possibly other systems.
Commits on Dec 3, 2012
  1. @antirez

    Merge pull request #807 from bmcmanus/issue-804

    antirez authored
    Resolution for Issue #804 Add Default-Start and Default-Stop LSB tags for RedHat startup...
  2. @antirez

    Memory leak fixed: release client's bpop->keys dictionary.

    antirez authored
    Refactoring performed after issue #801 resolution (see commit
    2f87cf8) introduced a memory leak that
    is fixed by this commit.
    
    I simply forgot to free the newly allocated dictionary in the client
    structure, trusting the output of "make test" on OSX.
    
    However, due to changes in the "leaks" utility, the test was no longer
    detecting memory leaks. This problem was also fixed.
    
    Fortunately the CI test running at ci.redis.io spotted the bug in the
    valgrind run.
    
    The leak never made it into a stable release.
  3. @antirez

    Test: fixed osx "leaks" support in test.

    antirez authored
    Due to changes in recent releases of the osx "leaks" utility, the osx
    leak detection no longer worked. Now it is fixed in a way that should be
    backward compatible.
  4. @bmcmanus

    Issue 804 Add Default-Start and Default-Stop LSB tags for RedHat startup and update-rc.d compatibility.

    bmcmanus authored
Commits on Dec 2, 2012
  1. @antirez

    Blocking POP: use a dictionary to store keys client side.

    antirez authored
    To store the keys we block on during a blocking POP operation, in case
    the client is blocked waiting for more data to arrive, we used a simple
    linear array of redis objects in the blockingState structure:
    
        robj **keys;
        int count;
    
    However, in order to fix issue #801, we also use a dictionary to avoid
    ending up in the blocked clients queue multiple times for the same key
    with the same client.
    
    The dictionary was only temporary, just to avoid duplicates, but since
    we create and destroy it anyway, there is no point in doing this
    duplicated work, so this commit simply uses a dictionary as the main
    structure to store the keys we are blocked on. So instead of the
    previous fields we now just have:
    
        dict *keys;
    
    This simplifies the code and reduces the work done by the server during
    a blocking POP operation.
  2. @antirez
  3. @antirez

    Client should not block multiple times on the same key.

    antirez authored
    Sending a command like:
    
    BLPOP foo foo foo foo 0
    
    Resulted in a crash before this commit, since the client ended up being
    inserted into the waiting list for this key multiple times.
    This caused the function handleClientsBlockedOnLists() to fail,
    because we have code like this:
    
        if (de) {
            list *clients = dictGetVal(de);
            int numclients = listLength(clients);
    
            while(numclients--) {
                listNode *clientnode = listFirst(clients);
    
            /* serve clients here... */
            }
        }
    
    The code that serves clients removes the served client from the
    waiting list, so if a client is blocked multiple times, eventually the
    call to listFirst() will return NULL, or worse will access random
    memory, since the list may no longer exist: it is removed by the
    function unblockClientWaitingData() when there are no more clients
    waiting on this list.
    
    To avoid making the rest of the implementation more complex, this commit
    modifies blockForKeys() so that a client will be put just a single time
    into the waiting list for a given key.
    
    Since it is Saturday, I hope this fixes issue #801.
Commits on Nov 30, 2012
  1. @antirez

    SDIFF is now able to select between two algorithms for speed.

    antirez authored
    SDIFF used an algorithm that was O(N) where N is the total number
    of elements of all the sets involved in the operation.
    
    The algorithm worked like that:
    
    ALGORITHM 1:
    
    1) For the first set, add all the members to an auxiliary set.
    2) For all the other sets, remove all the members of the set from the
    auxiliary set.
    
    So it is an O(N) algorithm where N is the total number of elements in
    all the sets involved in the diff operation.
    
    Cristobal Viedma suggested to modify the algorithm to the following:
    
    ALGORITHM 2:
    
    1) Iterate all the elements of the first set.
    2) For every element, check if the element also exists in all the other
    remaining sets.
    3) Add the element to the auxiliary set only if it does not exist in any
    of the other sets.
    
    The complexity of this algorithm on the worst case is O(N*M) where N is
    the size of the first set and M the total number of sets involved in the
    operation.
    
    However, when there are elements in common, this algorithm stops the
    computation for a given element as soon as a duplicate is found in
    another set.
    
    I (antirez) added an additional step to algorithm 2 to make it faster:
    sort the sets to subtract from the largest to the smallest, so that a
    duplicate is more likely to be found in the larger sets, which are
    checked before the smaller ones.
    
    WHAT IS BETTER?
    
    Neither, of course: for instance, if the first set is much larger than
    the other sets, the second algorithm does a lot more work compared to
    the first.
    
    Similarly, if the first set is much smaller than the other sets, the
    original algorithm will do more work.
    
    So this commit makes Redis able to guess the number of operations
    required by each algorithm, and select the best at runtime according
    to the input received.
    
    However, since the second algorithm has better constant factors and can
    do less work when there are duplicated elements, an advantage is given
    to the second algorithm.
  2. @antirez
  3. @antirez

    SDIFF fuzz test added.

    antirez authored
Commits on Nov 29, 2012
  1. @antirez
  2. @antirez

    Introduced the Build ID in INFO and --version output.

    antirez authored
    The idea is to be able to identify a build in a unique way, so for
    instance after a bug report we can recognize that the build is the one
    of a popular Linux distribution and perform the debugging in the same
    environment.
  3. @antirez

    On-crash memory test rewritten so that it actually works.

    antirez authored
    1) We no longer test location by location, otherwise the CPU write cache
    completely defeats the purpose of the test.
    2) We still need a memory test that operates in steps from the first to
    the last location in order to never hit the cache, but that is still
    able to retain the memory content.
    
    This was tested using a Linux box containing a bad memory module with a
    single bit error (always zero).
    
    So the final solution has an error propagation step that is:
    
    1) Invert bits at every location.
    2) Swap adjacent locations.
    3) Swap adjacent locations again.
    4) Invert bits at every location.
    5) Swap adjacent locations.
    6) Swap adjacent locations again.
    
    Before and after these steps, and after step 4, a CRC64 checksum is computed.
    If the three CRC64 checksums don't match, a memory error was detected.
Commits on Nov 28, 2012
  1. @antirez
  2. @openbaas @antirez
  3. @antirez
  4. @arsenm @antirez

    It's a watchdog, not a watchdong.

    arsenm authored antirez committed
Commits on Nov 23, 2012
  1. @antirez

    Merge pull request #787 from charsyam/remove-warning-bio

    antirez authored
    remove compile warning bioKillThreads
Commits on Nov 22, 2012
  1. @openbaas
  2. @antirez

    EVALSHA is now case insensitive.

    antirez authored
    EVALSHA used to crash if the SHA1 was not lowercase (Issue #783).
    Fixed using a case insensitive dictionary type for the sha -> script
    map used for replication of scripts.
  3. @antirez

    Fix integer overflow in zunionInterGenericCommand().

    antirez authored
    This fixes issue #761.
  4. @antirez
  5. @antirez
  6. @antirez
  7. @antirez

    Safer handling of MULTI/EXEC on errors.

    antirez authored
    After the transaction starts with MULTI, the previous behavior was to
    return an error on problems such as the maxmemory limit being reached,
    but still to execute the transaction with the subset of successfully
    queued commands on EXEC.
    
    While it is true that the client was able to check for errors by
    distinguishing QUEUED from an error reply, MULTI/EXEC in most client
    implementations uses pipelining for speed, so all the commands and EXEC
    are sent without caring about replies.
    
    With this change:
    
    1) EXEC fails if at least one command was not queued because of an
    error. The EXECABORT error is used.
    2) A generic error is always reported on EXEC.
    3) The client DISCARDs the MULTI state after a failed EXEC, otherwise
    pipelining multiple transactions would be basically impossible:
    After a failed EXEC the next transaction would be simply queued as
    the tail of the previous transaction.
  8. @antirez

    Make bio.c threads killable ASAP if needed.

    antirez authored
    We use this new bio.c feature in order to stop our I/O threads if there
    is a memory test to do on crash. In this case we don't want anything
    else than the main thread to run, otherwise the other threads may mess
    with the heap and the memory test will report a false positive.
Commits on Nov 21, 2012
  1. @antirez
  2. @antirez
Commits on Nov 19, 2012
  1. @antirez

    Children creating AOF or RDB files now report memory used by COW.

    antirez authored
    Finally Redis is able to report the amount of memory used by
    copy-on-write while saving an RDB or writing an AOF file in background.
    
    Note that this information is currently only logged (at NOTICE level)
    and not shown in INFO, because that is less trivial (but surely doable
    with some minor form of interprocess communication).
    
    The reason we can't capture this information in the parent before we
    call wait3() is that the Linux kernel releases the child's memory ASAP,
    and only retains the minimal state needed to report the child's
    termination to the parent.
    
    The COW size is obtained by summing all the Private_Dirty fields found
    in the "smaps" file for the process inside the proc filesystem.
    
    All this is Linux specific and is not available on other systems.
  2. @antirez

    zmalloc_get_private_dirty() function added (Linux only).

    antirez authored
    For non-Linux systems it just returns 0.
    
    This function is useful to estimate copy-on-write memory caused by
    children saving data to disk.
Commits on Nov 14, 2012
  1. @antirez