Commits on Feb 10, 2016
  1. cmsgpack: pass correct osize values to lua allocator, update correct buf free space in cmsgpack

    yoav-steinberg committed Feb 7, 2016
Commits on Dec 18, 2015
  1. Redis 2.8.24

    committed Dec 18, 2015
Commits on Dec 17, 2015
  1. Fix a race that may lead to the active (slave) client to be freed.

    In issue #2948 a crash was reported in processCommand(). Later Oran Agra
    (@oranagra) traced the bug (in private chat) in the following sequence
    of events:
    1. Some maxmemory is set.
    2. The slave is the currently active client and is executing PING or
       REPLCONF or whatever a slave can send to its master.
    3. freeMemoryIfNeeded() is called since maxmemory is set.
    4. flushSlavesOutputBuffers() is called by freeMemoryIfNeeded().
    5. During the flush of the slave buffers, a write error could be encountered in
       writeToClient() or sendReplyToClient() depending on the version of
       Redis. This will trigger freeClient() against the currently active
       client, so a segmentation fault will likely happen in
       processCommand() immediately after the call to freeMemoryIfNeeded().
    There are different possible fixes:
    1. Add flags to writeToClient() (recent versions code base) so that
       we can ignore the write errors, and use this flag in
       flushSlavesOutputBuffers(). However this is not simple to do in older
       versions of Redis.
    2. Use freeClientAsync() during write errors. This works but changes the
       current behavior of releasing clients ASAP when possible. Normally
       we write to clients during the normal event loop processing, in the
       writable client, where there is no active client, so no care must be
       taken.
    3. The fix of this commit: to detect that the current client is no
       longer valid. This fix is a bit "ad-hoc", but works across all the
       versions and has the advantage of not changing the remaining
       behavior. It only alters what happens during this race condition.
    committed Dec 17, 2015
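    The validity-detection approach of fix 3 can be sketched roughly as
    follows. All names and structures here are invented for illustration and
    are not Redis's actual code: a global tracks the client currently being
    served, freeClient() clears it when it frees that client, and the command
    processor checks the global right after any call that may free clients.

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdlib.h>

    /* Minimal sketch of the fix; all names are illustrative. */
    typedef struct client { int fd; } client;

    static client *current_client = NULL;

    void freeClient(client *c) {
        if (c == current_client) current_client = NULL; /* invalidate */
        free(c);
    }

    /* Returns 1 if processing may continue, 0 if the client was freed. */
    int runWithClient(client *c, void (*maybe_free_clients)(void)) {
        current_client = c;
        maybe_free_clients();                 /* may call freeClient(c) */
        if (current_client == NULL) return 0; /* client vanished: abort */
        current_client = NULL;
        return 1;
    }

    /* Helpers simulating freeMemoryIfNeeded() freeing a slave client. */
    static client *victim = NULL;
    void flushThatFrees(void) { if (victim) { freeClient(victim); victim = NULL; } }
    void flushThatDoesNot(void) { }
    ```

    The point of the sketch is only the check after the call: the caller
    never dereferences a client pointer again without first confirming it
    was not invalidated.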
Commits on Dec 15, 2015
  1. Log address causing SIGSEGV.

    committed Dec 15, 2015
Commits on Oct 15, 2015
  1. Redis 2.8.23.

    committed Oct 15, 2015
  2. Regression test for issue #2813.

    committed Oct 15, 2015
  3. Move end-comment of handshake states.

    By mistake I missed the last handshake state.
    Related to issue #2813.
    committed Oct 15, 2015
  4. Make clear that slave handshake states must be ordered.

    Make sure that people from the future will not break this rule.
    Related to issue #2813.
    committed Oct 15, 2015
  5. Minor changes to PR #2813.

    * Function to test for slave handshake renamed to slaveIsInHandshakeState.
    * Function no longer accepts arguments since it always tests the
      same global state.
    * Test for state translated to a range test since defines are guaranteed
      to stay in order in the future.
    * Use the new function in the ROLE command implementation as well.
    committed Oct 15, 2015
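    The range test mentioned above can be sketched like this. Redis 2.8 uses
    its own state names and values; the defines below are invented stand-ins,
    and the only real requirement is that the handshake states form one
    contiguous, ordered block:

    ```c
    #include <assert.h>

    /* Illustrative states; because the handshake states are contiguous
     * and ordered, membership reduces to a single range check. */
    #define REPL_STATE_CONNECT        1  /* not yet connected */
    #define REPL_STATE_CONNECTING     2  /* handshake start */
    #define REPL_STATE_RECEIVE_PONG   3
    #define REPL_STATE_SEND_AUTH      4
    #define REPL_STATE_RECEIVE_PSYNC  5  /* handshake end */
    #define REPL_STATE_TRANSFER       6  /* past the handshake */

    int repl_state = REPL_STATE_CONNECT; /* tests the global, takes no args */

    int slaveIsInHandshakeState(void) {
        return repl_state >= REPL_STATE_CONNECTING &&
               repl_state <= REPL_STATE_RECEIVE_PSYNC;
    }
    ```

    This is why the ordering rule matters: inserting a handshake state
    outside the block would silently break the range check.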
  6. Merge pull request #2813 from kevinmcgehee/2.8

    Fix master timeout during handshake
    committed Oct 15, 2015
Commits on Oct 14, 2015
  1. Fix master timeout during handshake

    This change allows a slave to properly time out a dead master during
    the extended asynchronous synchronization state machine.  Now, slaves
    will record their last interaction with the master and apply the
    replication timeout before a response to the PSYNC request is received.
    kevinmcgehee committed Oct 14, 2015
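    The timeout logic described above amounts to the following check, here
    with invented field and function names: the slave stamps the time of its
    last interaction with the master, and the replication cron treats the
    link as dead once the configured timeout elapses, even before the PSYNC
    response arrives.

    ```c
    #include <assert.h>
    #include <time.h>

    /* Sketch only; field names are illustrative, not Redis's. */
    typedef struct {
        time_t last_master_interaction; /* last byte seen from the master */
        int repl_timeout;               /* configured timeout, seconds */
    } handshake_link;

    int handshakeTimedOut(const handshake_link *l, time_t now) {
        return (now - l->last_master_interaction) > l->repl_timeout;
    }
    ```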
Commits on Sep 30, 2015
  1. redis-cli pipe mode: don't stay in the write loop forever.

    The code was broken and resulted in redis-cli --pipe, most of the
    time, writing everything received on the standard input to the Redis
    connection socket without ever reading back the replies, until all the
    content to write was written.
    This means that Redis had to accumulate all the output in the output
    buffers of the client, consuming a lot of memory.
    Fixed thanks to the original report of anomalies in the behavior
    provided by Twitter user @fsaintjacques.
    committed Sep 30, 2015
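    The key ingredient of a repaired loop of this kind can be sketched as
    below (a generic illustration, not redis-cli's actual code): before
    writing more input, poll the connection with a zero-timeout select(2)
    and drain any pending replies, so the server-side output buffers stay
    small.

    ```c
    #include <assert.h>
    #include <sys/select.h>
    #include <unistd.h>

    /* Returns nonzero if fd has data to read right now, without blocking. */
    int readableNow(int fd) {
        fd_set rfds;
        struct timeval tv = {0, 0};    /* zero timeout: do not block */
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        return select(fd + 1, &rfds, NULL, NULL, &tv) > 0;
    }
    ```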
Commits on Sep 15, 2015
  1. Test: fix false positive in HSTRLEN test.

    HINCRBY* tests later used the value "tmp" that was sometimes generated
    by the random key generation function. The result was overwriting what
    Tcl expected to be inside Redis with another value, causing the next
    HSTRLEN test to fail.
    committed Sep 15, 2015
Commits on Sep 14, 2015
  1. Test: MOVE expire test improved.

    Related to #2765.
    committed Sep 14, 2015
  2. MOVE re-add TTL check fixed.

    getExpire() returns -1 when no expire exists.
    Related to #2765.
    committed Sep 14, 2015
  3. MOVE now can move TTL metadata as well.

    MOVE was not able to move the TTL: when a key was moved into a different
    database number, it became persistent as if PERSIST had been used.
    In some incredible way (I guess almost nobody uses Redis MOVE) this bug
    remained unnoticed inside Redis internals for many years.
    Finally Andy Grunwald discovered it and opened an issue.
    This commit fixes the bug and adds a regression test.
    Close #2765.
    committed Sep 14, 2015
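    The shape of the fix across these three commits can be sketched with a
    toy model (all names invented, not Redis's data structures): MOVE reads
    the TTL with getExpire(), which returns -1 when no expire exists, and
    re-creates the expire in the destination database only when one was
    present.

    ```c
    #include <assert.h>

    /* Toy model of one key slot; expire_ms == -1 means "no expire". */
    typedef struct { int present; long long expire_ms; } keyslot;

    long long getExpire(const keyslot *k) { return k->expire_ms; }

    void moveKey(keyslot *src, keyslot *dst) {
        long long ttl = getExpire(src);
        dst->present = 1;
        dst->expire_ms = (ttl != -1) ? ttl : -1; /* keep TTL or persistence */
        src->present = 0;
        src->expire_ms = -1;
    }
    ```

    The -1 check is exactly what the "MOVE re-add TTL check fixed" commit
    restores: without it, a persistent key would get a bogus expire in the
    target database.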
Commits on Sep 8, 2015
  1. Redis 2.8.22.

    committed Sep 8, 2015
  2. pfcount support multi keys

    MOON-CLJ committed Jun 26, 2015
Commits on Sep 7, 2015
  1. Fix merge issues in 490847c.

    committed Sep 7, 2015
  2. Undo slaves state change on failed rdbSaveToSlavesSockets().

    As Oran Agra suggested, in startBgsaveForReplication() when the BGSAVE
    attempt returns an error, we scan the list of slaves in order to remove
    them since there is no way to serve them currently.
    However we check for the replication state BGSAVE_START, which was
    modified by rdbSaveToSlavesSockets() before forking. So when fork fails,
    the state of the slaves remains BGSAVE_END and no cleanup is performed.
    This commit fixes the problem by making rdbSaveToSlavesSockets() able to
    undo the state change on fork failure.
    committed Sep 7, 2015
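    The undo can be sketched with a simplified model (invented names; in
    this toy model every BGSAVE_END slave is one we just changed, which is
    a simplification of the real bookkeeping): waiting slaves are switched
    before forking, and the switch is rolled back if fork() fails so the
    caller's cleanup still finds them in WAIT_BGSAVE_START.

    ```c
    #include <assert.h>

    enum { WAIT_BGSAVE_START = 0, WAIT_BGSAVE_END = 1 };

    /* Returns 0 on success, -1 on fork failure (states rolled back). */
    int saveToSlavesSockets(int *states, int n, int fork_fails) {
        for (int i = 0; i < n; i++)
            if (states[i] == WAIT_BGSAVE_START) states[i] = WAIT_BGSAVE_END;
        if (fork_fails) {
            for (int i = 0; i < n; i++)          /* undo the state change */
                if (states[i] == WAIT_BGSAVE_END) states[i] = WAIT_BGSAVE_START;
            return -1;
        }
        return 0;
    }
    ```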
  3. Sentinel: fix bug in config rewriting during failover

    We have a check to rewrite the config properly when a failover is in
    progress, in order to add the current (already failed over) master as
    slave, and don't include in the slave list the promoted slave itself.
    However there was an issue: the variable with the right address was
    computed but never used when the code was modified, and no tests are
    available for this feature for two reasons:
    1. The Sentinel unit test currently does not test Sentinel ability to
    persist its state at all.
    2. It is a very hard state to trigger since it lasts for little time in
    the context of the testing framework.
    However this feature should be covered in the test in some way.
    The bug was found by @badboy using the clang static analyzer.
    Effects of the bug on safety of Sentinel
    This bug results in severe issues in the following case:
    1. A Sentinel is elected leader.
    2. During the failover, it persists a wrong config with a known-slave
    entry listing the master address.
    3. The Sentinel crashes and restarts, reading the invalid configuration from disk.
    4. It sees that the slave now does not obey the logical configuration
    (should replicate from the current master), so it sends a SLAVEOF
    command to the master (since the slave and master addresses are the same), creating a
    replication loop (attempt to replicate from itself) which Redis is
    currently unable to detect.
    5. This means that the master is no longer available because of the bug.
    However the lack of availability should be only transient (at least
    in my tests, but other states could be possible where the problem
    is not recovered automatically) because:
    6. Sentinels treat masters reporting to be slaves as failing.
    7. A new failover is triggered, and a slave is promoted to master.
    Bug lifetime
    The bug is there forever. Commit 16237d7 actually tried to fix the bug
    but in the wrong way (the computed variable was never used! My fault).
    So this bug is there basically since the start of Sentinel.
    Since the bug is hard to trigger, I remember little reports matching
    this condition, but I remember at least a few. Also in automated tests
    where instances were stopped and restarted multiple times automatically
    I remember hitting this issue, however I was not able to reproduce nor
    to determine, with the information I had at the time, what was causing the issue.
    committed Jun 12, 2015
  4. SCAN iter parsing changed from atoi to chartoull

    ubuntu committed Sep 7, 2015
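    Regardless of the exact helper name used in the commit, the substance of
    such a change can be sketched as below (function name illustrative):
    SCAN cursors are unsigned 64-bit values, so parsing them with the
    standard strtoull(3), with error checking, avoids the silent truncation
    and overflow of atoi(3).

    ```c
    #include <assert.h>
    #include <stdlib.h>

    /* Parse a SCAN cursor; *ok is set to 0 if s is not a clean number. */
    unsigned long long parseScanCursor(const char *s, int *ok) {
        char *end;
        unsigned long long v = strtoull(s, &end, 10);
        *ok = (end != s && *end == '\0');
        return v;
    }
    ```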
Commits on Aug 21, 2015
  1. Force slaves to resync after unsuccessful PSYNC.

    Using chained replication where C is slave of B which is in turn slave of
    A, if B reconnects the replication link with A but discovers it is no
    longer possible to PSYNC, slaves of B must be disconnected and PSYNC
    not allowed, since the new B dataset may be completely different after
    the synchronization with the master.
    Note that there are various semantic differences in the way this is
    handled now compared to the past. In the past the semantics were:
    1. When a slave lost the connection with its master, the chained slaves
    were disconnected ASAP. This is not needed, since after a successful
    PSYNC with the master, the slaves can continue and don't need to resync
    in turn.
    2. However after a failed PSYNC the replication backlog was not reset, so a
    slave was able to PSYNC successfully even if the instance did a full
    sync with its master, containing now an entirely different data set.
    Now instead chained slaves are not disconnected when the slave loses the
    connection with its master, but only when it is forced to full SYNC with
    its master. This means that if the slave having chained slaves does a
    successful PSYNC all its slaves can continue without troubles.
    See issue #2694 for more details.
    committed Jul 28, 2015
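    The new semantics can be condensed into a toy model (invented names, not
    Redis's code): chained slaves survive a master link loss and a
    successful PSYNC; only a forced full SYNC disconnects them and
    invalidates the backlog, since the dataset may now be entirely
    different.

    ```c
    #include <assert.h>

    /* Toy model of the middle node B in the A -> B -> C chain. */
    typedef struct { int chained_slaves; int backlog_valid; } midNode;

    void onReconnectOutcome(midNode *b, int psync_ok) {
        if (!psync_ok) {                 /* full SYNC with A required */
            b->chained_slaves = 0;       /* disconnect: force their resync */
            b->backlog_valid = 0;        /* stale offsets must never match */
        }
        /* on successful PSYNC, chained slaves continue undisturbed */
    }
    ```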
  2. flushSlavesOutputBuffers(): details clarified via comments.

    Talking with @oranagra we had to reason a little bit to understand if
    this function could ever flush the output buffers of the wrong slaves,
    having online state but actually not being ready to receive writes
    before the first ACK is received from them (this happens with diskless
    replication). Next time we'll just read this comment.
    committed Aug 6, 2015
  3. startBgsaveForReplication(): handle waiting slaves state change.

    Before this commit, after triggering a BGSAVE it was up to the caller of
    startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in
    order to update them accordingly. However when the replication target is
    the socket, this is not possible since the process of updating the
    slaves and sending the FULLRESYNC reply must be coupled with the process
    of starting an RDB save (the reason is, we need to send the FULLRESYNC
    reply and spawn a child that will start to send RDB data to the slaves).
    This commit moves the responsibility of handling slaves in
    WAIT_BGSAVE_START to startBgsaveForReplication() so that for both
    diskless and disk-based replication we have the same chain of
    responsibility. In order to accommodate such a change, syncCommand() also
    needs to put the client in the slave list ASAP (just after the initial
    checks) and not at the end, so that startBgsaveForReplication() can find
    the new slave already in the list.
    Another related change is what happens if the BGSAVE fails because of
    fork() or other errors: we now remove the slave from the list of slaves
    and send an error, scheduling the slave connection to be terminated.
    As a side effect of this change the following errors found by
    Oran Agra are fixed (thanks!):
    1. rdbSaveToSlavesSockets() on failed fork will get the slaves cleaned
    up, otherwise they remain in a wrong state forever since we set them up
    for full resync before actually trying to fork.
    2. updateSlavesWaitingBgsave() with replication target set as "socket"
    was broken since the function changed the slaves state via
    replicationSetupSlaveForFullResync(), so later rdbSaveToSlavesSockets()
    will not find any slave in the right state (WAIT_BGSAVE_START) to feed.
    committed Aug 20, 2015
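    The moved responsibility can be sketched with a simplified model
    (invented names and states, not the real function's signature):
    startBgsaveForReplication() itself now promotes every slave waiting in
    WAIT_BGSAVE_START, the point where the FULLRESYNC reply would be sent,
    so disk-based and diskless targets share one chain of responsibility,
    and on BGSAVE failure the caller drops the waiting slaves.

    ```c
    #include <assert.h>

    enum { SLAVE_WAIT_START = 0, SLAVE_WAIT_END = 1 };

    /* Returns the number of slaves set up for full resync, -1 on failure. */
    int startBgsaveForReplication(int *states, int n, int bgsave_ok) {
        if (!bgsave_ok) return -1;  /* caller removes the waiting slaves */
        int setup = 0;
        for (int i = 0; i < n; i++) {
            if (states[i] == SLAVE_WAIT_START) {
                states[i] = SLAVE_WAIT_END; /* FULLRESYNC sent to this slave */
                setup++;
            }
        }
        return setup;
    }
    ```

    Because syncCommand() now registers the slave before this runs, the new
    slave is already in the list the function walks.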
Commits on Aug 7, 2015
  1. slaveTryPartialResynchronization and syncWithMaster: better synergy.

    It is simpler if removing the read event handler from the FD is up to
    slaveTryPartialResynchronization, after all it is only called in the
    context of syncWithMaster.
    This commit also makes sure that on error all the event handlers are
    removed from the socket before closing it.
    committed Aug 7, 2015