Commits on Jun 21, 2012
  1. @antirez

    Fixed a timing attack on AUTH (Issue #560).

    antirez authored
    The way we compared the authentication password using strcmp() allowed
    an attacker to gain information about the password using a well known
    class of attacks called "timing attacks".
    
    The bug appears to be practically not exploitable on most modern systems
    running Redis, since even changing multiple bytes of the input at a time
    instead of one, the difference in running time is on the order of
    10 nanoseconds, making it hard to exploit even over a LAN. However,
    attacks only get better, so we are providing a fix ASAP.
    
    The new implementation uses two fixed-length buffers and a constant-time
    comparison function, with the goal of:
    
    1) Completely avoiding any leak of information about the content of the
    password, since the comparison is always performed over 512 characters
    and without conditionals.
    2) Partially avoiding leaks of information about the length of the
    password.
    
    About "2" we still have a stage in the code where the real password and
    the user provided password are copied in the static buffers, we also run
    two strlen() operations against the two inputs, so the running time
    of the comparison is a fixed amount plus a time proportional to
    LENGTH(A)+LENGTH(B). This means that the absolute time of the operation
    performed is still related to the length of the password in some way,
    but there is no way to change the input in order to get a difference in
    the execution time in the comparison that is not just proportional to
    the string provided by the user (because the password length is fixed).
    
    Thus, in practical terms, an attacker would have to discover
    LENGTH(PASSWORD) by looking at the whole execution time of the AUTH
    command and guessing a proportionality between that time and the
    password length: this appears to be mostly unfeasible in the real world.
    
    Also, protecting against this attack is not very useful in the case of
    Redis, as a brute-force attack is feasible anyway if the password is too
    short, while with a long password it is not an issue that the attacker
    knows its length.
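    As an illustration only, a minimal sketch of this kind of fixed-buffer,
    constant-time comparison could look as follows (hypothetical helper
    name, not the actual Redis source):
    
        #include <string.h>
    
        /* Sketch: compare two NUL-terminated strings in constant time with
         * respect to their content, using two fixed 512-byte buffers and no
         * data-dependent branches. */
        static int constant_time_strcmp(const char *a, const char *b) {
            char bufa[512] = {0}, bufb[512] = {0};
            size_t alen = strlen(a), blen = strlen(b);
            unsigned int diff = 0;
            size_t j;
    
            /* Inputs larger than the buffers are simply rejected. */
            if (alen >= sizeof(bufa) || blen >= sizeof(bufb)) return 1;
            memcpy(bufa, a, alen);
            memcpy(bufb, b, blen);
    
            /* Always scan the whole 512 bytes, accumulating differences. */
            for (j = 0; j < sizeof(bufa); j++)
                diff |= (unsigned char)bufa[j] ^ (unsigned char)bufb[j];
            diff |= (unsigned int)(alen ^ blen); /* lengths must match too */
            return diff != 0; /* 0 only when the two strings are equal */
        }
    
    The strlen()/memcpy() stage before the loop is exactly the part that
    still leaks a time proportional to LENGTH(A)+LENGTH(B), as discussed
    above.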
Commits on Jun 15, 2012
  1. @antirez

    Fix c->reply_bytes computation in setDeferredMultiBulkLength()

    antirez authored
    In order to implement the reply buffer limits introduced in 2.6, which
    are useful to close the connection under user-selected circumstances
    involving big output buffers (for instance slow consumers in pub/sub, a
    blocked slave, and so forth), Redis keeps a counter with the amount of
    memory used by the objects inside the output list stored in c->reply.
    
    The computation was broken in the function setDeferredMultiBulkLength()
    in the case where the object was glued with the next one. This caused
    the c->reply_bytes field to go out of sync, to be decremented more than
    needed, and to wrap around to values near ULONG_MAX.
    
    This commit fixes this bug and adds an assertion that is able to trap
    this class of problems.
    
    This problem was discovered while looking at the INFO output of an
    unrelated issue (issue #547).
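    The invariant behind the fix is simply that c->reply_bytes must stay
    equal to the sum of the sizes of the objects in the reply list. A hedged
    sketch of the kind of book-keeping involved when two nodes are glued
    together (hypothetical types and names, not the actual networking.c
    code):
    
        #include <assert.h>
        #include <stddef.h>
    
        typedef struct { size_t len; } replynode;  /* stand-in for a reply object */
    
        /* Sketch: glue `next` into `prev` while keeping the byte counter in
         * sync. Subtract both old contributions, then add the merged one
         * once; getting this wrong makes the counter wrap near ULONG_MAX. */
        static void glue_reply_nodes(size_t *reply_bytes,
                                     replynode *prev, replynode *next) {
            assert(*reply_bytes >= prev->len + next->len);
            *reply_bytes -= prev->len + next->len;
            prev->len += next->len;        /* contents concatenated elsewhere */
            next->len = 0;
            *reply_bytes += prev->len;
        }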
Commits on Jun 14, 2012
  1. @antirez

    ziplistFind(): don't assume that entries are comparable by encoding.

    antirez authored
    Because Redis 2.6 introduced new integer encodings, it is no longer true
    that two entries with different encodings cannot be equal.
    
    An old ziplist can be loaded from an RDB file generated with Redis 2.4;
    in this case, for instance, a small unsigned integer is encoded with a
    16-bit encoding, while in Redis 2.6 a more specific 8-bit encoding
    format is used.
    
    Because of this bug, hashes ended up with duplicated values, or field
    lookups failed, causing many bad behaviors.
    This in turn caused a crash while converting a ziplist-encoded hash into
    a real hash table, because an assertion was raised on duplicated
    elements.
    
    This commit fixes issue #547.
    
    Many thanks to Pinterest's Marty Weiner and colleagues for discovering
    the problem and helping us in the debugging process.
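    In other words, when both entries hold an integer the comparison must be
    performed on the decoded values, never on the raw encodings. A minimal
    sketch of the idea (hypothetical helper, not the actual ziplist.c code):
    
        #include <stdint.h>
        #include <string.h>
    
        /* Sketch: equality check between two ziplist entries that may be
         * stored as strings or as integers of different widths. */
        static int entries_equal(const unsigned char *s1, unsigned int len1,
                                 int64_t i1, int is_int1,
                                 const unsigned char *s2, unsigned int len2,
                                 int64_t i2, int is_int2) {
            if (is_int1 && is_int2)
                return i1 == i2;   /* same value, even if 8 vs 16 bit encoded */
            if (!is_int1 && !is_int2)
                return len1 == len2 && memcmp(s1, s2, len1) == 0;
            /* Mixed case: a complete implementation would first try to parse
             * the string side as an integer and then compare the values. */
            return 0;
        }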
Commits on Jun 13, 2012
  1. @antirez

    Merge pull request #552 from tnm/unstable

    antirez authored
    Standardize punctuation in redis-cli help.
  2. @tnm

    Standardize punctuation in redis-cli help.

    tnm authored
    Right now there is a mix of help entries ending with periods and
    entries without periods. This standardizes the help entries to end
    without periods, which seems to be the general custom in most Unix
    tools, at least.
Commits on Jun 12, 2012
  1. @antirez

    Added a new hash fuzzy tester.

    antirez authored
    The new fuzzy tester also removes elements from the hash instead of just
    adding random fields. This should increase the probability of finding
    bugs in the implementations of the hash type's internal representations.
Commits on Jun 11, 2012
  1. @antirez

    Dump ziplist hex value on failed assertion.

    antirez authored
    The ziplist -> hashtable conversion code is triggered every time a hash
    value must be promoted to a full hash table because the number or size
    of its elements reached the threshold.
    
    If a problem in the ziplist causes the same field to be present
    multiple times, the assertion on the successful addition of the element
    to the hash table will fail, crashing the server, while providing little
    information about the problem.
    
    This code adds a new logging function to perform the hex dump of binary
    data, and makes sure that the ziplist -> hashtable conversion code uses
    this new logging facility to dump the content of the ziplist when the
    assertion fails.
    
    This change was originally made in order to investigate issue #547.
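    A minimal sketch of such a hex-dump logging helper (hypothetical name,
    writing to stdout instead of the Redis log):
    
        #include <stdio.h>
        #include <stddef.h>
    
        /* Sketch: dump a binary blob as hex, 16 bytes per line, so that a
         * corrupted ziplist can be inspected directly from the log. */
        static void log_hex_dump(const char *descr, const void *value, size_t len) {
            const unsigned char *v = value;
            size_t i;
    
            printf("%s (hexdump of %zu bytes):\n", descr, len);
            for (i = 0; i < len; i++) {
                printf("%02x", v[i]);
                if ((i + 1) % 16 == 0 || i + 1 == len)
                    printf("\n");
                else
                    printf(" ");
            }
        }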
  2. @antirez

    New test: hash ziplist -> hashtable encoding conversion.

    antirez authored
    A new stress test was added to exercise the code that converts a ziplist
    into a hash table.
    
    In this commit the randomValue helper function was also modified to
    return negative values as well.
Commits on Jun 2, 2012
  1. @antirez

    EVAL replication test: less false positives.

    antirez authored
    wait_for_condition is now used instead of the usual "after 1000" (which
    is the way to sleep in Tcl). This should avoid finding the replica in a
    state where it is still loading the RDB into memory and returning a
    -LOADING error.
    
    This test used to fail when run under Valgrind, due to the added
    latency.
  2. @antirez

    Fixed RESTORE hash failure (Issue #532)

    Alex Mitrofanov authored antirez committed
    (additional commit notes by antirez@gmail.com):
    
    The rdbIsObjectType() macro was not updated when the new RDB object type
    for ziplist-encoded hashes was added.
    
    As a result RESTORE, which uses rdbLoadObjectType(), failed when a
    ziplist-encoded hash was loaded.
    This did not affect normal RDB loading because in that case the
    lower-level function rdbLoadType() is used.
    
    The commit also adds a regression test.
  3. @antirez

    RDB type loading functions clarified in comments.

    antirez authored
    Improved comments to make clear that rdbLoadType() just loads a general
    TYPE in the context of RDB, which can be an object type, an expire
    type, end-of-file, and so forth, while rdbLoadObjectType() enforces that
    the type is a valid object type and otherwise returns -1.
Commits on May 31, 2012
  1. @antirez

    BITOP bug when called against non existing keys fixed.

    antirez authored
    In issue #529 a user reported a bug that can be triggered with the
    following commands:
    
    flushdb
    set a
    "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
    bitop or x a b
    
    The bug was introduced with the speed optimization in commit 8bbc076
    that specializes every BITOP operation loop up to the minimum length of
    the input strings.
    
    However, the computation of the minimum length contained an error when a
    non-existing key was present in the input after a key of non-zero
    length.
    
    This commit fixes the bug and adds a regression test for it.
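    The fix boils down to treating a missing key as a zero-length string
    while computing the minimum length, regardless of the keys seen before
    it. A small sketch of the corrected computation (hypothetical
    signature, not the actual bitops.c code):
    
        #include <stddef.h>
    
        /* Sketch: length of the prefix present in every input; a NULL
         * (non-existing) key must force the minimum down to zero. */
        static size_t common_prefix_len(const unsigned char **src,
                                        const size_t *srclen, size_t numkeys) {
            size_t minlen = 0, j;
            for (j = 0; j < numkeys; j++) {
                size_t len = (src[j] == NULL) ? 0 : srclen[j];
                if (j == 0 || len < minlen) minlen = len;
            }
            return minlen;
        }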
Commits on May 25, 2012
  1. @antirez

    Tests modified to account for INFO fields renaming.

    antirez authored
    Commit 33e1db3 modified the names of a
    few INFO fields. This commit changes the Redis tests to account for
    these changes.
  2. @antirez

    Four new persistence fields in INFO. A few renamed.

    antirez authored
    The 'persistence' section of the INFO output now contains four
    additional fields related to RDB and AOF persistence:
    
     rdb_last_bgsave_time_sec       Duration of latest BGSAVE in sec.
     rdb_current_bgsave_time_sec    Duration of current BGSAVE in sec.
     aof_last_rewrite_time_sec      Duration of latest AOF rewrite in sec.
     aof_current_rewrite_time_sec   Duration of current AOF rewrite in sec.
    
    The 'current' fields are set to -1 if a BGSAVE / AOF rewrite is not in
    progress. The 'last' fields are set to -1 if no previous BGSAVE / AOF
    rewrite was performed.
    
    Additionally a few fields in the persistence section were renamed for
    consistency:
    
     changes_since_last_save -> rdb_changes_since_last_save
     bgsave_in_progress -> rdb_bgsave_in_progress
     last_save_time -> rdb_last_save_time
     last_bgsave_status -> rdb_last_bgsave_status
     bgrewriteaof_in_progress -> aof_rewrite_in_progress
     bgrewriteaof_scheduled -> aof_rewrite_scheduled
    
    After the renaming, fields in the persistence section start with an
    rdb_ or aof_ prefix depending on the persistence method they describe.
    The field 'loading' and related fields are not prefixed because they are
    shared by both persistence methods.
Commits on May 24, 2012
  1. @antirez

    BITOP command 10x speed improvement.

    antirez authored
    This commit adds a fast path to BITOP that can be used for all the
    bytes from 0 up to the minimal length among the input strings, as long
    as there are at most 16 input keys.
    
    Often the intersected bitmaps are roughly the same size, so this
    optimization can provide a 10x speed boost to most real world usages
    of the command.
    
    Bytes are processed four full words at a time, in loops specialized
    for the specific BITOP sub-command, without the need to check for
    length issues with the inputs (since we run this algorithm only as far
    as there is data from all the keys at the same time).
    
    The remaining part of the string is processed in the usual way using the
    slow but generic algorithm.
    
    It is possible to do better than this with inputs that are not roughly
    the same size: by sorting the input keys by length, by initializing the
    result string in a smarter way, and by noticing that the final part of
    the output string, composed only of data from the longest string, does
    not need any processing, since the result of AND, OR and XOR against an
    empty string is known in advance (zero in the first case, and the
    original string in the other two cases).
    
    More optimizations will likely be implemented later, but this should
    be enough to release Redis 2.6-RC4 with bitops merged in.
    
    Note: this commit also adds better testing for the BITOP NOT command,
    which is currently the fastest variant and hard to optimize further
    since it just flips the bits of a single input string.
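    The shape of the fast path, shown here for the AND case, is roughly the
    following (a simplified sketch with hypothetical names, assuming
    suitably aligned buffers; not the actual bitops.c code):
    
        #include <stddef.h>
    
        /* Sketch: AND up to 16 inputs into dst. The first minlen bytes are
         * guaranteed to exist in every input and are processed four
         * unsigned long words per iteration; the tail is done byte by byte,
         * where a missing byte behaves as zero. */
        static void bitop_and_sketch(unsigned char *dst,
                                     const unsigned char **src, const size_t *len,
                                     size_t numkeys, size_t minlen, size_t maxlen) {
            size_t i = 0, j;
    
            if (minlen >= sizeof(unsigned long) * 4 && numkeys <= 16) {
                const unsigned long *lsrc[16];
                unsigned long *ldst = (unsigned long *) dst;
                size_t nwords = minlen / (sizeof(unsigned long) * 4), w;
    
                for (j = 0; j < numkeys; j++)
                    lsrc[j] = (const unsigned long *) src[j];
                for (w = 0; w < nwords; w++) {
                    unsigned long a = lsrc[0][0], b = lsrc[0][1];
                    unsigned long c = lsrc[0][2], d = lsrc[0][3];
                    for (j = 1; j < numkeys; j++) {
                        a &= lsrc[j][0]; b &= lsrc[j][1];
                        c &= lsrc[j][2]; d &= lsrc[j][3];
                    }
                    ldst[0] = a; ldst[1] = b; ldst[2] = c; ldst[3] = d;
                    ldst += 4;
                    for (j = 0; j < numkeys; j++) lsrc[j] += 4;
                }
                i = nwords * sizeof(unsigned long) * 4;
            }
    
            /* Generic tail: bytes missing from a short input count as zero. */
            for (; i < maxlen; i++) {
                unsigned char output = (len[0] > i) ? src[0][i] : 0;
                for (j = 1; j < numkeys; j++)
                    output &= (len[j] > i) ? src[j][i] : 0;
                dst[i] = output;
            }
        }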
  2. @antirez

    BITOP: handle integer encoded objects correctly.

    antirez authored
    A bug in the implementation caused BITOP to crash the server if at least
    one of the source objects was integer encoded.
    
    The new implementation takes an additional array of Redis object
    pointers and calls getDecodedObject() to obtain a reference to a
    string-encoded object, then uses decrRefCount() to release it.
    
    Tests modified to cover the regression and improve coverage.
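    The pattern is the usual Redis one for commands that need the raw string
    form of a value: getDecodedObject() hands back a string-encoded object
    (creating one when the value is integer encoded) and decrRefCount()
    releases it. A simplified fragment illustrating the idea (hypothetical
    variable names, requires the Redis internal headers; not the actual
    bitopCommand() code):
    
        /* Sketch (fragment): take string-encoded references to every source
         * object before reading its bytes, and release them once the bit
         * operation is done. */
        robj *objects[16];
        unsigned char *srcptr[16];
        size_t srclen[16];
        long j;
    
        for (j = 0; j < numkeys; j++) {
            robj *o = lookupKeyRead(c->db, c->argv[j+3]);
            if (o == NULL) {                    /* missing key == empty string */
                objects[j] = NULL;
                srcptr[j] = NULL;
                srclen[j] = 0;
                continue;
            }
            objects[j] = getDecodedObject(o);   /* always string encoded */
            srcptr[j] = objects[j]->ptr;
            srclen[j] = sdslen(objects[j]->ptr);
        }
    
        /* ... perform the AND/OR/XOR/NOT over srcptr[]/srclen[] ... */
    
        for (j = 0; j < numkeys; j++)
            if (objects[j]) decrRefCount(objects[j]);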
  3. @antirez

    BITCOUNT performance improved.

    antirez authored
    At Redis's default optimization level the command is now much faster,
    always using a constant-time bit-manipulation technique to count bits
    instead of the GCC builtin popcount, and unrolling the loop.
    
    The current implementation reaches 1.5 GB/s on an 11" MacBook Air
    (1.8 GHz i7), compiled with both GCC and clang.
    
    The algorithm used is described here:
    
    http://graphics.stanford.edu/~seander/bithacks.html
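    The per-word step is the classic SWAR ("SIMD within a register")
    population count from that page. A self-contained sketch, without the
    loop unrolling used by the real implementation (and assuming a 32-bit
    aligned buffer):
    
        #include <stddef.h>
        #include <stdint.h>
    
        /* Sketch: count the bits set to 1 in a buffer, one 32-bit word at a
         * time, using only shifts, masks and one multiplication per word. */
        static size_t popcount_sketch(const void *s, size_t count) {
            const uint32_t *p = s;
            size_t bits = 0;
    
            while (count >= 4) {
                uint32_t v = *p++;
                v = v - ((v >> 1) & 0x55555555);                  /* 2-bit sums */
                v = (v & 0x33333333) + ((v >> 2) & 0x33333333);   /* 4-bit sums */
                bits += (((v + (v >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
                count -= 4;
            }
    
            /* Trailing bytes if the length is not a multiple of four. */
            if (count) {
                const unsigned char *c = (const unsigned char *) p;
                while (count--) {
                    unsigned char b = *c++;
                    while (b) { bits += b & 1; b >>= 1; }
                }
            }
            return bits;
        }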
  4. @antirez

    bitop.c renamed bitops.c

    antirez authored
    bitop.c contains the "bit-related string operations", so it seems more
    logical to call it bitops instead of bitop.
    This also makes it match the name of the test (unit/bitops.tcl).
  5. @antirez

    Bit operations tests improved.

    antirez authored
    Fuzzing tests of BITCOUNT / BITOP are now iterated multiple times.
    The new BITCOUNT fuzzing test uses random strings in a wider interval of
    lengths, including zero-length strings.
  6. @antirez

    popcount() optimization for speed.

    antirez authored
    We scan the array in 32-bit words instead of processing it byte by byte.
    If the code is compiled with GCC, the __builtin_popcount() builtin
    function is used instead.
  7. @antirez

    BITCOUNT refactoring.

    antirez authored
    The low-level population counting function is now separated from the
    BITCOUNT command implementation, so that it can be further optimized
    and eventually used in other contexts if needed.
  8. @antirez

    Bit-related string operations moved to bitop.c

    antirez authored
    All the general string operations are implemented in t_string.c;
    however, the bit operations, while targeting the string type, are better
    served by a specific file containing the implementations of the
    following four commands and their helper functions:
    
        GETBIT
        SETBIT
        BITOP
        BITCOUNT
    
    In the future this file will probably contain more code related to
    making the BITOP and BITCOUNT operations faster.
  9. @antirez

    BITOP and BITCOUNT tests.

    antirez authored
    The Redis implementation is tested against Tcl implementations of the
    same operations. Both fuzzing and testing of specific aspects of the
    commands' behavior are performed.
  10. @antirez

    New commands: BITOP and BITCOUNT.

    antirez authored
    The motivation for these new commands is to be found in the use of
    Redis for real-time statistics. See the article "Fast real time metrics
    using Redis".
    
    http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/
    
    In general, Redis strings used as bitmaps via the SETBIT/GETBIT
    commands provide a very space-efficient and fast way to store
    statistics. For instance, in a web application with users, every user
    can be associated with a key recording every day on which the user
    visited the web service. This information can be really valuable for
    extracting information about user behaviour.
    
    With Redis bitmaps doing this is very simple: just say that a given
    day is 0 (the date the service was put online) and the following days
    are 1, 2, 3, and so forth. With SETBIT it is then possible to set the
    bit corresponding to the current day every time the user visits the
    site.
    
    It is possible to count the set bits on the fly, and this is extremely
    easy using a Lua script. However a fast native bit-count operation can
    be useful, especially if it can operate on ranges, or when the string
    is small as in the case of days (even considering many years it is
    still an extremely small amount of data).
    
    For this reason BITCOUNT was introduced. The command counts the number
    of bits set to 1 in a string, with an optional range:
    
    BITCOUNT key [start end]
    
    The start/end parameters are similar to GETRANGE. If omitted the whole
    string is tested.
    
    Population counting is more useful when bit-level operations like AND,
    OR and XOR are available. For instance it is possible to check multiple
    users to see on how many days three given users all visited the site.
    To do this we can take the AND of all the bitmaps, and then count the
    set bits.
    
    For this reason the BITOP command was introduced:
    
    BITOP [AND|OR|XOR|NOT] dest_key src_key1 src_key2 src_key3 ... src_keyN
    
    In the special case of NOT (that inverts the bits) only one source key
    can be passed.
    
    The judicious use of BITCOUNT and BITOP combined can lead to interesting
    use cases with a very space-efficient representation of data.
    
    The implementation provided is not yet tested or optimized for speed;
    the next commits will introduce unit tests. Later the implementation
    will be profiled to see if it is possible to gain a significant amount
    of speed without making the code much more complex.
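    For example, with one bitmap per user and one bit per day (hypothetical
    key names), SETBIT records a visit, BITCOUNT returns the number of days
    a user visited, and BITOP AND followed by BITCOUNT returns the number
    of days on which both users visited:
    
        SETBIT visits:user:1 105 1
        SETBIT visits:user:2 105 1
        BITCOUNT visits:user:1
        BITOP AND visits:both visits:user:1 visits:user:2
        BITCOUNT visits:both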
  11. @antirez

    Add aof_rewrite_buffer_length INFO field.

    antirez authored
    The persistence section of the INFO output already contained a field
    describing the size of the current AOF buffer to flush to disk. However
    the other AOF buffer, used to accumulate changes during an AOF rewrite,
    was not mentioned in the INFO output.
    
    This commit introduces a new field called aof_rewrite_buffer_length with
    the length of the rewrite buffer.
  12. @antirez

    Allow an AOF rewrite buffer > 2GB (Fix for issue #504).

    antirez authored
    During the AOF rewrite process, the parent process needs to accumulate
    the new writes in an in-memory buffer: when the child terminates the
    AOF rewriting process, this buffer (that is, the difference between the
    dataset at the time the rewrite was started and the current dataset) is
    flushed to the new AOF file.
    
    We used to implement this buffer using an sds.c string, but sds.c has a
    2GB limit. Sometimes the dataset can be big enough, the amount of writes
    high enough, and the rewrite process slow enough that we overflow the
    2GB limit, causing a crash, documented on GitHub in issue #504.
    
    In order to prevent this from happening, this commit introduces a new
    system to accumulate writes, implemented by a linked list of blocks of
    10 MB each, so that we also avoid paying the reallocation cost.
    
    Note that, in theory, modern operating systems may implement realloc()
    simply as a remapping of the old pages, thus with very good performance;
    see for instance the mremap() syscall on Linux. However this is not
    always true, and jemalloc by default avoids doing this because there are
    issues with the current implementation of mremap().
    
    For this reason we are using a linked list of blocks instead of a single
    block that gets reallocated again and again.
    
    The changes in this commit lack testing, which will be performed before
    merging into the unstable branch. This fix will not enter 2.4 because it
    is too invasive. However, 2.4 will log a warning when the AOF rewrite
    buffer is close to the 2GB limit.
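    A minimal sketch of this kind of block list (hypothetical names and
    structures, not the actual aof.c code):
    
        #include <stdlib.h>
        #include <string.h>
    
        #define REWRITE_BUF_BLOCK_SIZE (1024*1024*10)   /* 10 MB per block */
    
        /* Sketch: an append-only buffer made of fixed-size blocks, so the
         * total size is not limited to 2GB and no huge realloc() is needed. */
        typedef struct rwblock {
            unsigned long used, free;
            struct rwblock *next;
            char buf[REWRITE_BUF_BLOCK_SIZE];
        } rwblock;
    
        typedef struct { rwblock *head, *tail; } rwbuffer;
    
        static void rwbuffer_append(rwbuffer *b, const char *s, unsigned long len) {
            while (len) {
                rwblock *blk = b->tail;
    
                if (blk == NULL || blk->free == 0) {      /* start a new block */
                    blk = malloc(sizeof(*blk));           /* error handling omitted */
                    blk->used = 0;
                    blk->free = REWRITE_BUF_BLOCK_SIZE;
                    blk->next = NULL;
                    if (b->tail) b->tail->next = blk; else b->head = blk;
                    b->tail = blk;
                }
    
                unsigned long thislen = (len < blk->free) ? len : blk->free;
                memcpy(blk->buf + blk->used, s, thislen);
                blk->used += thislen;
                blk->free -= thislen;
                s += thislen;
                len -= thislen;
            }
        }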
  13. @antirez

    Dead code removed from replication.c.

    antirez authored
    The user @jokea noticed that the following line of code in
    replication.c made little sense:
    
        addReplySds(slave,sdsempty());
    
    Investigating a bit, I found that this was introduced by commit 6208b3a
    three years ago, in the early stages of Redis. The code is apparently
    not useful at all, so I'm removing it.
    
    This change will not be backported into 2.4, so that in the unlikely
    case it introduces a bug, we'll have a chance to detect it in the
    development branch. However, following the code path, it seems the code
    is not useful at all, so the risk is truly small.
Commits on May 23, 2012
  1. @jokea @antirez
  2. @antirez

    Fixed issue #516 (ZINTERSTORE mixing sets and zsets).

    antirez authored
    Weeks ago, while trying to fix a harmless GCC warning, I introduced a
    bug in the ziplist-encoded implementation of sorted sets.
    
    The bug completely broke the zuiNext() iterator, which is used in the
    ZINTERSTORE and ZUNIONSTORE implementations, so those two commands are
    no longer reliable starting from Redis version 2.4.12 and the latest
    2.6.0-RC releases.
    
    This commit fixes the problem and adds a regression test.
Commits on May 22, 2012
  1. @antirez

    Redis test: include bug report on crash.

    antirez authored
    Due to a change in the format of the bug report generated on a crash or
    failed assertion, the test suite was no longer able to properly log it.
    Instead, just a protocol error was logged by the Redis Tcl client, which
    provided no clue about the actual problem.
    
    This commit resolves the issue by logging everything from the first log
    line containing the string REDIS BUG REPORT to the end of the file.
Commits on May 21, 2012
  1. @antirez

    Use comments to split aof.c into sections.

    antirez authored
    This makes the code more readable. It is still not worth splitting the
    file itself into three different files, but the logical separation
    improves readability, especially since new commits are going to
    introduce an additional section.
Commits on May 20, 2012
  1. @antirez

    TODO file removed.

    antirez authored
    The list of things to do has long been kept in two places:
    
    1) GitHub issues.
    2) A private TODO list of random ideas; what makes sense is later moved
    to GitHub issues. So GitHub is in any case the true source of things to
    do.
Commits on May 16, 2012
  1. @antirez

    Deleted jemalloc.orig from /deps.

    antirez authored
    In the commit upgrading jemalloc to version 3.0.0 I added the old
    version of jemalloc in the 'jemalloc.orig' directory by mistake.
    This commit removes that superfluous copy of jemalloc.
  2. @antirez

    Jemalloc updated to 3.0.0.

    antirez authored
    Full changelog here:
    
    http://www.canonware.com/cgi-bin/gitweb.cgi?p=jemalloc.git;a=blob_plain;f=ChangeLog;hb=master
    
    Notable improvements from the point of view of Redis:
    
    1) Bug fixes.
    2) Support for Valgrind.
    3) Support for OSX Lion and FreeBSD.