
The universe is 80 characters wide...

1 parent 3e542b3 commit 066c91d4f7383a81cf4c07d8e5c4dd75372ca45f @pietern pietern committed Jun 18, 2012
Showing with 754 additions and 618 deletions.
  1. +39 −0 Rakefile
  2. +16 −10 commands/append.md
  3. +6 −7 commands/auth.md
  4. +8 −4 commands/bgrewriteaof.md
  5. +6 −5 commands/bgsave.md
  6. +20 −21 commands/bitcount.md
  7. +14 −11 commands/bitop.md
  8. +14 −16 commands/blpop.md
  9. +8 −7 commands/config get.md
  10. +10 −10 commands/config set.md
  11. +5 −7 commands/decr.md
  12. +4 −5 commands/decrby.md
  13. +1 −1 commands/del.md
  14. +7 −3 commands/dump.md
  15. +148 −147 commands/eval.md
  16. +3 −5 commands/exec.md
  17. +45 −38 commands/expire.md
  18. +2 −1 commands/expireat.md
  19. +2 −1 commands/flushall.md
  20. +3 −3 commands/get.md
  21. +2 −2 commands/hdel.md
  22. +2 −2 commands/hgetall.md
  23. +1 −2 commands/hincrby.md
  24. +9 −3 commands/hincrbyfloat.md
  25. +2 −2 commands/hmget.md
  26. +3 −3 commands/hmset.md
  27. +2 −2 commands/hset.md
  28. +25 −24 commands/incr.md
  29. +4 −5 commands/incrby.md
  30. +12 −5 commands/incrbyfloat.md
  31. +2 −2 commands/info.md
  32. +3 −3 commands/keys.md
  33. +4 −4 commands/lastsave.md
  34. +5 −5 commands/lindex.md
  35. +2 −2 commands/linsert.md
  36. +3 −3 commands/llen.md
  37. +10 −6 commands/lpush.md
  38. +3 −3 commands/lpushx.md
  39. +6 −7 commands/lrange.md
  40. +5 −5 commands/lrem.md
  41. +4 −4 commands/ltrim.md
  42. +3 −3 commands/mget.md
  43. +25 −10 commands/migrate.md
  44. +9 −9 commands/monitor.md
  45. +2 −2 commands/move.md
  46. +1 −1 commands/mset.md
  47. +2 −2 commands/msetnx.md
  48. +2 −2 commands/multi.md
  49. +5 −2 commands/object.md
  50. +3 −1 commands/persist.md
  51. +3 −1 commands/pttl.md
  52. +5 −5 commands/punsubscribe.md
  53. +2 −2 commands/renamenx.md
  54. +4 −2 commands/restore.md
  55. +16 −11 commands/rpoplpush.md
  56. +10 −6 commands/rpush.md
  57. +3 −3 commands/rpushx.md
  58. +3 −3 commands/sadd.md
  59. +10 −3 commands/save.md
  60. +8 −3 commands/script exists.md
  61. +2 −1 commands/script flush.md
  62. +12 −5 commands/script kill.md
  63. +10 −5 commands/script load.md
  64. +2 −2 commands/sdiffstore.md
  65. +2 −2 commands/select.md
  66. +1 −1 commands/setex.md
  67. +11 −14 commands/setnx.md
  68. +8 −8 commands/setrange.md
  69. +6 −5 commands/shutdown.md
  70. +2 −2 commands/sinter.md
  71. +10 −10 commands/slaveof.md
  72. +19 −20 commands/slowlog.md
  73. +2 −3 commands/smove.md
  74. +22 −21 commands/sort.md
  75. +2 −2 commands/spop.md
  76. +1 −1 commands/srem.md
  77. +2 −2 commands/strlen.md
  78. +2 −2 commands/subscribe.md
  79. +1 −2 commands/sunion.md
  80. +4 −2 commands/time.md
  81. +3 −3 commands/ttl.md
  82. +3 −3 commands/type.md
  83. +5 −5 commands/unsubscribe.md
  84. +2 −1 commands/watch.md
  85. +7 −4 commands/zadd.md
  86. +2 −2 commands/zcard.md
  87. +4 −4 commands/zcount.md
  88. +5 −5 commands/zinterstore.md
  89. +3 −3 commands/zrange.md
  90. +8 −8 commands/zrangebyscore.md
  91. +2 −1 commands/zrem.md
  92. +2 −1 commands/zrevrangebyscore.md
  93. +1 −1 commands/zunionstore.md
39 Rakefile
@@ -39,3 +39,42 @@ task :spellcheck do
puts "#{file}: #{words.uniq.sort.join(" ")}" if words.any?
end
end
+
+namespace :format do
+
+ def format(file)
+ return unless File.exist?(file)
+
+ STDOUT.print "formatting #{file}..."
+ STDOUT.flush
+
+ matcher = /^(?:\A|\r?\n)((?:[a-zA-Z].+?\r?\n)+)/m
+ body = File.read(file).gsub(matcher) do |match|
+ formatted = nil
+
+ IO.popen("par p0s0w80", "r+") do |io|
+ io.puts match
+ io.close_write
+ formatted = io.read
+ end
+
+ formatted
+ end
+
+ File.open(file, "w") do |f|
+ f.print body
+ end
+
+ STDOUT.puts
+ end
+
+ task :file, :path do |t, args|
+ format(args[:path])
+ end
+
+ task :all do
+ Dir["commands/*.md"].each do |path|
+ format(path)
+ end
+ end
+end
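The `matcher` regexp in the new task only grabs runs of lines that begin with a letter, so indented code blocks, directives like `@return`, and blank lines are never piped through `par`. A small sketch of what it captures, using a made-up sample string:

```ruby
# Same matcher as the Rakefile: a paragraph is a run of lines starting with
# a letter, preceded by the start of file or a blank line.
matcher = /^(?:\A|\r?\n)((?:[a-zA-Z].+?\r?\n)+)/m

sample = "First paragraph line one\nline two\n\n" \
         "    indented code block\n\n" \
         "Second paragraph\n"

# Only the prose paragraphs are captured; the indented block is left alone.
paragraphs = sample.scan(matcher).map(&:first)
paragraphs.each { |par| puts par.inspect }
```

This is why the reflow is safe to run over every file in `commands/`: only prose is rewrapped, while `@cli` blocks and indented examples pass through untouched.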
26 commands/append.md
@@ -1,6 +1,6 @@
-If `key` already exists and is a string, this command appends the `value` at
-the end of the string. If `key` does not exist it is created and set as an
-empty string, so `APPEND` will be similar to `SET` in this special case.
+If `key` already exists and is a string, this command appends the `value` at the
+end of the string. If `key` does not exist it is created and set as an empty
+string, so `APPEND` will be similar to `SET` in this special case.
@return
@@ -16,9 +16,9 @@ empty string, so `APPEND` will be similar to `SET` in this special case.
## Pattern: Time series
-the `APPEND` command can be used to create a very compact representation of
-a list of fixed-size samples, usually referred as *time series*.
-Every time a new sample arrives we can store it using the command
+The `APPEND` command can be used to create a very compact representation of a
+list of fixed-size samples, usually referred to as *time series*. Every time a
+new sample arrives we can store it using the command
APPEND timeseries "fixed-size sample"
@@ -28,12 +28,18 @@ Accessing individual elements in the time series is not hard:
* `GETRANGE` allows for random access of elements. If our time series have an associated time information we can easily implement a binary search to get range combining `GETRANGE` with the Lua scripting engine available in Redis 2.6.
 * `SETRANGE` can be used to overwrite an existing time series.
-The limitations of this pattern is that we are forced into an append-only mode of operation, there is no way to cut the time series to a given size easily because Redis currently lacks a command able to trim string objects. However the space efficiency of time series stored in this way is remarkable.
+The limitation of this pattern is that we are forced into an append-only mode
+of operation, there is no way to cut the time series to a given size easily
+because Redis currently lacks a command able to trim string objects. However the
+space efficiency of time series stored in this way is remarkable.
-Hint: it is possible to switch to a different key based on the current Unix time, in this way it is possible to have just a relatively small amount of samples per key, to avoid dealing with very big keys, and to make this pattern more
-friendly to be distributed across many Redis instances.
+Hint: it is possible to switch to a different key based on the current Unix
+time, in this way it is possible to have just a relatively small amount of
+samples per key, to avoid dealing with very big keys, and to make this pattern
+more friendly to be distributed across many Redis instances.
-An example sampling the temperature of a sensor using fixed-size strings (using a binary format is better in real implementations).
+An example sampling the temperature of a sensor using fixed-size strings (using
+a binary format is better in real implementations).
@cli
APPEND ts "0043"
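Because every sample in the time series pattern has a fixed size, the byte range of the *i*-th sample follows directly from the index; a sketch of that offset arithmetic (a hypothetical helper, not part of this commit), which yields exactly the arguments `GETRANGE` expects:

```ruby
# With fixed-size samples appended to one string, sample i occupies the byte
# range [i * SAMPLE_SIZE, (i + 1) * SAMPLE_SIZE - 1]. SAMPLE_SIZE matches the
# 4-byte samples ("0043", ...) used in the example above.
SAMPLE_SIZE = 4

def sample_range(i)
  start = i * SAMPLE_SIZE
  [start, start + SAMPLE_SIZE - 1]
end

# The third sample of "004300440045" sits at bytes 8..11.
p sample_range(2) # => [8, 11]
```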
13 commands/auth.md
@@ -1,11 +1,10 @@
-Request for authentication in a password protected Redis server.
-Redis can be instructed to require a password before allowing clients
-to execute commands. This is done using the `requirepass` directive in the
-configuration file.
+Request for authentication in a password protected Redis server. Redis can be
+instructed to require a password before allowing clients to execute commands.
+This is done using the `requirepass` directive in the configuration file.
-If `password` matches the password in the configuration file, the server replies with
-the `OK` status code and starts accepting commands.
-Otherwise, an error is returned and the clients needs to try a new password.
+If `password` matches the password in the configuration file, the server replies
+with the `OK` status code and starts accepting commands. Otherwise, an error is
+returned and the client needs to try a new password.
**Note**: because of the high performance nature of Redis, it is possible to try
a lot of passwords in parallel in very short time, so make sure to generate
12 commands/bgrewriteaof.md
@@ -1,17 +1,21 @@
-Instruct Redis to start an [Append Only File][aof] rewrite process. The rewrite will create a small optimized version of the current Append Only File.
+Instruct Redis to start an [Append Only File][aof] rewrite process. The rewrite
+will create a small optimized version of the current Append Only File.
[aof]: /topics/persistence#append-only-file
If `BGREWRITEAOF` fails, no data gets lost as the old AOF will be untouched.
-The rewrite will be only triggered by Redis if there is not already a background process doing persistence. Specifically:
+The rewrite will be only triggered by Redis if there is not already a background
+process doing persistence. Specifically:
* If a Redis child is creating a snapshot on disk, the AOF rewrite is *scheduled* but not started until the saving child producing the RDB file terminates. In this case the `BGREWRITEAOF` will still return an OK code, but with an appropriate message. You can check if an AOF rewrite is scheduled looking at the `INFO` command starting from Redis 2.6.
* If an AOF rewrite is already in progress the command returns an error and no AOF rewrite will be scheduled for a later time.
-Since Redis 2.4 the AOF rewrite is automatically triggered by Redis, however the `BGREWRITEAOF` command can be used to trigger a rewrite at any time.
+Since Redis 2.4 the AOF rewrite is automatically triggered by Redis, however the
+`BGREWRITEAOF` command can be used to trigger a rewrite at any time.
-Please refer to the [persistence documentation][persistence] for detailed information.
+Please refer to the [persistence documentation][persistence] for detailed
+information.
[persistence]: /topics/persistence
11 commands/bgsave.md
@@ -1,11 +1,12 @@
-Save the DB in background. The OK code is immediately returned.
-Redis forks, the parent continues to server the clients, the child
-saves the DB on disk then exit. A client my be able to check if the
-operation succeeded using the `LASTSAVE` command.
+Save the DB in background. The OK code is immediately returned. Redis forks,
+the parent continues to serve the clients, the child saves the DB on disk
+then exits. A client may be able to check if the operation succeeded using the
+`LASTSAVE` command.
-Please refer to the [persistence documentation][persistence] for detailed information.
+Please refer to the [persistence documentation][persistence] for detailed
+information.
[persistence]: /topics/persistence
41 commands/bitcount.md
@@ -4,12 +4,11 @@ By default all the bytes contained in the string are examined. It is possible
to specify the counting operation only in an interval passing the additional
arguments *start* and *end*.
-Like for the `GETRANGE` command start and end can contain negative values
-in order to index bytes starting from the end of the string, where -1 is the
-last byte, -2 is the penultimate, and so forth.
+Like for the `GETRANGE` command start and end can contain negative values in
+order to index bytes starting from the end of the string, where -1 is the last
+byte, -2 is the penultimate, and so forth.
-Non existing keys are treated as empty strings, so the command will return
-zero.
+Non existing keys are treated as empty strings, so the command will return zero.
@return
@@ -28,32 +27,32 @@ The number of bits set to 1.
## Pattern: real time metrics using bitmaps
Bitmaps are a very space efficient representation of certain kinds of
-information. One example is a web application that needs the history
-of user visits, so that for instance it is possible to determine what
-users are good targets of beta features, or for any other purpose.
+information. One example is a web application that needs the history of user
+visits, so that for instance it is possible to determine what users are good
+targets of beta features, or for any other purpose.
Using the `SETBIT` command this is trivial to accomplish, identifying every
-day with a small progressive integer. For instance day 0 is the first day
-the application was put online, day 1 the next day, and so forth.
+day with a small progressive integer. For instance day 0 is the first day the
+application was put online, day 1 the next day, and so forth.
-Every time an user performs a page view, the application can register that
-in the current day the user visited the web site using the `SETBIT` command
-setting the bit corresponding to the current day.
+Every time a user performs a page view, the application can register that in
+the current day the user visited the web site using the `SETBIT` command setting
+the bit corresponding to the current day.
-Later it will be trivial to know the number of single days the user visited
-the web site simply calling the `BITCOUNT` command against the bitmap.
+Later it will be trivial to know the number of single days the user visited the
+web site simply calling the `BITCOUNT` command against the bitmap.
-A similar pattern where user IDs are used instead of days is described
-in the article called "[Fast easy realtime metrics using Redis bitmaps][bitmaps]".
+A similar pattern where user IDs are used instead of days is described in the
+article called "[Fast easy realtime metrics using Redis bitmaps][bitmaps]".
[bitmaps]: http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps
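The day-indexing scheme described above is just date arithmetic; a hedged sketch (launch date, key names, and the `redis` client calls in the comments are illustrative assumptions, not from this commit):

```ruby
# Hypothetical day-index arithmetic for the visit bitmap pattern: day 0 is
# the day the application went online, each later day increments the index.
LAUNCH = Time.utc(2012, 1, 1)

def day_index(t)
  ((t - LAUNCH) / 86_400).to_i
end

# With a real client, registering a visit would look something like:
#   redis.setbit("visits:#{user_id}", day_index(Time.now), 1)
# and the number of distinct days visited:
#   redis.bitcount("visits:#{user_id}")
p day_index(Time.utc(2012, 1, 11)) # => 10
```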
## Performance considerations
-In the above example of counting days, even after 10 years the application
-is online we still have just `365*10` bits of data per user, that is
-just 456 bytes per user. With this amount of data `BITCOUNT` is still as fast
-as any other O(1) Redis command like `GET` or `INCR`.
+In the above example of counting days, even after 10 years the application is
+online we still have just `365*10` bits of data per user, that is just 456 bytes
+per user. With this amount of data `BITCOUNT` is still as fast as any other O(1)
+Redis command like `GET` or `INCR`.
When the bitmap is big, there are two alternatives:
25 commands/bitop.md
@@ -1,7 +1,8 @@
-Perform a bitwise operation between multiple keys (containing string
-values) and store the result in the destination key.
+Perform a bitwise operation between multiple keys (containing string values) and
+store the result in the destination key.
-The `BITOP` command supports four bitwise operations: **AND**, **OR**, **XOR** and **NOT**, thus the valid forms to call the command are:
+The `BITOP` command supports four bitwise operations: **AND**, **OR**, **XOR**
+and **NOT**, thus the valid forms to call the command are:
+ BITOP AND *destkey srckey1 srckey2 srckey3 ... srckeyN*
+ BITOP OR *destkey srckey1 srckey2 srckey3 ... srckeyN*
@@ -15,9 +16,9 @@ The result of the operation is always stored at *destkey*.
## Handling of strings with different lengths
-When an operation is performed between strings having different lengths, all
-the strings shorter than the longest string in the set are treated as if
-they were zero-padded up to the length of the longest string.
+When an operation is performed between strings having different lengths, all the
+strings shorter than the longest string in the set are treated as if they were
+zero-padded up to the length of the longest string.
The same holds true for non-existing keys, that are considered as a stream of
zero bytes up to the length of the longest string.
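The zero-padding rule can be sketched in a few lines; this is a hypothetical Ruby illustration of the semantics for the AND case, not the server implementation:

```ruby
# Operands shorter than the longest input are treated as if padded with zero
# bytes before the bitwise operation is applied, matching BITOP's rule.
def bitop_and(*strings)
  len = strings.map(&:bytesize).max
  strings
    .map { |s| s.b.ljust(len, "\x00") }  # zero-pad to the longest length
    .reduce { |a, b| a.bytes.zip(b.bytes).map { |x, y| (x & y).chr }.join }
end

# ANDing "abc" with "abcdef" zero-pads "abc", so bytes 4..6 come out as 0x00.
p bitop_and("abc", "abcdef") # => "abc\x00\x00\x00"
```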
@@ -26,7 +27,8 @@ zero bytes up to the length of the longest string.
@integer-reply
-The size of the string stored into the destination key, that is equal to the size of the longest input string.
+The size of the string stored into the destination key, that is equal to the
+size of the longest input string.
@examples
@@ -41,7 +43,8 @@ The size of the string stored into the destination key, that is equal to the siz
`BITOP` is a good complement to the pattern documented in the `BITCOUNT` command documentation. Different bitmaps can be combined in order to obtain a target
bitmap where to perform the population counting operation.
-See the article called "[Fast easy realtime metrics using Redis bitmaps][bitmaps]" for an interesting use cases.
+See the article called "[Fast easy realtime metrics using Redis
+bitmaps][bitmaps]" for an interesting use case.
[bitmaps]: http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps
@@ -50,6 +53,6 @@ See the article called "[Fast easy realtime metrics using Redis bitmaps][bitmaps
`BITOP` is a potentially slow command as it runs in O(N) time.
Care should be taken when running it against long input strings.
-For real time metrics and statistics involving large inputs a good approach
-is to use a slave (with read-only option disabled) where to perform the
-bit-wise operations without blocking the master instance.
+For real time metrics and statistics involving large inputs a good approach is
+to use a slave (with read-only option disabled) where to perform the bit-wise
+operations without blocking the master instance.
30 commands/blpop.md
@@ -21,15 +21,14 @@ that order).
## Blocking behavior
-If none of the specified keys exist, `BLPOP` blocks
-the connection until another client performs an `LPUSH` or `RPUSH` operation
-against one of the keys.
+If none of the specified keys exist, `BLPOP` blocks the connection until another
+client performs an `LPUSH` or `RPUSH` operation against one of the keys.
Once new data is present on one of the lists, the client returns with the name
of the key unblocking it and the popped value.
-When `BLPOP` causes a client to block and a non-zero timeout is specified, the
-client will unblock returning a `nil` multi-bulk value when the specified
+When `BLPOP` causes a client to block and a non-zero timeout is specified,
+the client will unblock returning a `nil` multi-bulk value when the specified
timeout has expired without a push operation against at least one of the
specified keys.
@@ -38,9 +37,9 @@ be used to block indefinitely.
## Multiple clients blocking for the same keys
-Multiple clients can block for the same key. They are put into
-a queue, so the first to be served will be the one that started to wait
-earlier, in a first-`!BLPOP` first-served fashion.
+Multiple clients can block for the same key. They are put into a queue, so
+the first to be served will be the one that started to wait earlier, in a
+first-`!BLPOP` first-served fashion.
## `!BLPOP` inside a `!MULTI`/`!EXEC` transaction
@@ -51,8 +50,8 @@ execute the block atomically, which in turn does not allow other clients to
perform a push operation.
The behavior of `BLPOP` inside `MULTI`/`EXEC` when the list is empty is to
-return a `nil` multi-bulk reply, which is the same thing that happens when the
-timeout is reached. If you like science fiction, think of time flowing at
+return a `nil` multi-bulk reply, which is the same thing that happens when
+the timeout is reached. If you like science fiction, think of time flowing at
infinite speed inside a `MULTI`/`EXEC` block.
@return
@@ -76,12 +75,11 @@ infinite speed inside a `MULTI`/`EXEC` block.
## Pattern: Event notification
Using blocking list operations it is possible to mount different blocking
-primitives. For instance for some application you may need to block
-waiting for elements into a Redis Set, so that as far as a new element is
-added to the Set, it is possible to retrieve it without resort to polling.
-This would require a blocking version of `SPOP` that is
-not available, but using blocking list operations we can easily accomplish
-this task.
+primitives. For instance for some applications you may need to block waiting
+for elements in a Redis Set, so that as soon as a new element is added to the
+Set, it is possible to retrieve it without resorting to polling. This would
+require a blocking version of `SPOP` that is not available, but using blocking
+list operations we can easily accomplish this task.
The consumer will do:
15 commands/config get.md
@@ -1,7 +1,7 @@
The `CONFIG GET` command is used to read the configuration parameters of a
-running Redis server. Not all the configuration parameters are
-supported in Redis 2.4, while Redis 2.6 can read the whole configuration of
-a server using this command.
+running Redis server. Not all the configuration parameters are supported in
+Redis 2.4, while Redis 2.6 can read the whole configuration of a server using
+this command.
The symmetric command used to alter the configuration at run time is
`CONFIG SET`.
@@ -22,7 +22,8 @@ You can obtain a list of all the supported configuration parameters typing
`CONFIG GET *` in an open `redis-cli` prompt.
All the supported parameters have the same meaning of the equivalent
-configuration parameter used in the [redis.conf][conf] file, with the following important differences:
+configuration parameter used in the [redis.conf][conf] file, with the following
+important differences:
[conf]: http://github.com/antirez/redis/raw/2.2/redis.conf
@@ -34,9 +35,9 @@ For instance what in `redis.conf` looks like:
save 900 1
save 300 10
-that means, save after 900 seconds if there is at least 1 change to the
-dataset, and after 300 seconds if there are at least 10 changes to the
-datasets, will be reported by `CONFIG GET` as "900 1 300 10".
+that means, save after 900 seconds if there is at least 1 change to the dataset,
+and after 300 seconds if there are at least 10 changes to the datasets, will be
+reported by `CONFIG GET` as "900 1 300 10".
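The flat "900 1 300 10" string can be split back into (seconds, changes) pairs on the client side; a hypothetical helper:

```ruby
# Parse the flat "save" value reported by CONFIG GET: consecutive pairs mean
# "save after <seconds> if there are at least <changes> changes".
def parse_save(value)
  value.split.map(&:to_i).each_slice(2).to_a
end

p parse_save("900 1 300 10") # => [[900, 1], [300, 10]]
```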
@return
20 commands/config set.md
@@ -2,17 +2,17 @@ The `CONFIG SET` command is used in order to reconfigure the server at run time
without the need to restart Redis. You can change both trivial parameters or
switch from one to another persistence option using this command.
-The list of configuration parameters supported by `CONFIG SET` can be
-obtained issuing a `CONFIG GET *` command, that is the symmetrical command
-used to obtain information about the configuration of a running
-Redis instance.
+The list of configuration parameters supported by `CONFIG SET` can be obtained
+issuing a `CONFIG GET *` command, that is the symmetrical command used to obtain
+information about the configuration of a running Redis instance.
All the configuration parameters set using `CONFIG SET` are immediately loaded
by Redis that will start acting as specified starting from the next command
executed.
All the supported parameters have the same meaning of the equivalent
-configuration parameter used in the [redis.conf][conf] file, with the following important differences:
+configuration parameter used in the [redis.conf][conf] file, with the following
+important differences:
[conf]: http://github.com/antirez/redis/raw/2.2/redis.conf
@@ -24,9 +24,9 @@ For instance what in `redis.conf` looks like:
save 900 1
save 300 10
-that means, save after 900 seconds if there is at least 1 change to the
-dataset, and after 300 seconds if there are at least 10 changes to the
-datasets, should be set using `CONFIG SET` as "900 1 300 10".
+that means, save after 900 seconds if there is at least 1 change to the dataset,
+and after 300 seconds if there are at least 10 changes to the datasets, should
+be set using `CONFIG SET` as "900 1 300 10".
It is possible to switch persistence from RDB snapshotting to append only file
(and the other way around) using the `CONFIG SET` command. For more information
@@ -40,8 +40,8 @@ In general what you should know is that setting the `appendonly` parameter to
commands on the append only file, thus obtaining exactly the same effect of
a Redis server that started with AOF turned on since the start.
-You can have both the AOF enabled with RDB snapshotting if you want, the
-two options are not mutually exclusive.
+You can have both the AOF enabled with RDB snapshotting if you want, the two
+options are not mutually exclusive.
@return
12 commands/decr.md
@@ -1,11 +1,9 @@
-Decrements the number stored at `key` by one.
-If the key does not exist, it is set to `0` before performing the operation. An
-error is returned if the key contains a value of the wrong type or contains a
-string that can not be represented as integer. This operation is limited to **64
-bit signed integers**.
+Decrements the number stored at `key` by one. If the key does not exist, it
+is set to `0` before performing the operation. An error is returned if the
+key contains a value of the wrong type or contains a string that can not be
+represented as integer. This operation is limited to **64 bit signed integers**.
-See `INCR` for extra information on increment/decrement
-operations.
+See `INCR` for extra information on increment/decrement operations.
@return
9 commands/decrby.md
@@ -1,8 +1,7 @@
-Decrements the number stored at `key` by `decrement`.
-If the key does not exist, it is set to `0` before performing the operation. An
-error is returned if the key contains a value of the wrong type or contains a
-string that can not be represented as integer. This operation is limited to 64
-bit signed integers.
+Decrements the number stored at `key` by `decrement`. If the key does not exist,
+it is set to `0` before performing the operation. An error is returned if the
+key contains a value of the wrong type or contains a string that can not be
+represented as integer. This operation is limited to 64 bit signed integers.
See `INCR` for extra information on increment/decrement operations.
2 commands/del.md
@@ -1,4 +1,4 @@
-Removes the specified keys. A key is ignored if it does not exist.
+Removes the specified keys. A key is ignored if it does not exist.
@return
10 commands/dump.md
@@ -1,12 +1,16 @@
-Serialize the value stored at key in a Redis-specific format and return it to the user. The returned value can be synthesized back into a Redis key using the `RESTORE` command.
+Serialize the value stored at key in a Redis-specific format and return it to
+the user. The returned value can be synthesized back into a Redis key using the
+`RESTORE` command.
-The serialization format is opaque and non-standard, however it has a few semantical characteristics:
+The serialization format is opaque and non-standard, however it has a few
+semantic characteristics:
* It contains a 64bit checksum that is used to make sure errors will be detected. The `RESTORE` command makes sure to check the checksum before synthesizing a key using the serialized value.
* Values are encoded in the same format used by RDB.
* An RDB version is encoded inside the serialized value, so that different Redis versions with incompatible RDB formats will refuse to process the serialized value.
-The serialized value does NOT contain expire information. In order to capture the time to live of the current value the `PTTL` command should be used.
+The serialized value does NOT contain expire information. In order to capture
+the time to live of the current value the `PTTL` command should be used.
If `key` does not exist a nil bulk reply is returned.
295 commands/eval.md
@@ -3,17 +3,18 @@
`EVAL` and `EVALSHA` are used to evaluate scripts using the Lua interpreter
built into Redis starting from version 2.6.0.
-The first argument of `EVAL` is a Lua 5.1 script. The script does not need
-to define a Lua function (and should not). It is just a Lua program that will run in the context of the Redis server.
+The first argument of `EVAL` is a Lua 5.1 script. The script does not need to
+define a Lua function (and should not). It is just a Lua program that will run
+in the context of the Redis server.
-The second argument of `EVAL` is the number of arguments that follows
-the script (starting from the third argument) that represent Redis key names.
-This arguments can be accessed by Lua using the `KEYS` global variable in
-the form of a one-based array (so `KEYS[1]`, `KEYS[2]`, ...).
+The second argument of `EVAL` is the number of arguments that follow the
+script (starting from the third argument) that represent Redis key names. These
+arguments can be accessed by Lua using the `KEYS` global variable in the form of
+a one-based array (so `KEYS[1]`, `KEYS[2]`, ...).
-All the additional arguments should not represent key names and can
-be accessed by Lua using the `ARGV` global variable, very similarly to
-what happens with keys (so `ARGV[1]`, `ARGV[2]`, ...).
+All the additional arguments should not represent key names and can be accessed
+by Lua using the `ARGV` global variable, very similarly to what happens with
+keys (so `ARGV[1]`, `ARGV[2]`, ...).
The following example should clarify what stated above:
@@ -23,12 +24,12 @@ The following example should clarify what stated above:
3) "first"
4) "second"
-Note: as you can see Lua arrays are returned as Redis multi bulk
-replies, that is a Redis return type that your client library will
-likely convert into an Array type in your programming language.
+Note: as you can see Lua arrays are returned as Redis multi bulk replies, that
+is a Redis return type that your client library will likely convert into an
+Array type in your programming language.
-It is possible to call Redis commands from a Lua script using two different
-Lua functions:
+It is possible to call Redis commands from a Lua script using two different Lua
+functions:
* `redis.call()`
* `redis.pcall()`
@@ -39,44 +40,47 @@ error that in turn will force `EVAL` to return an error to the command caller,
while `redis.pcall` will trap the error returning a Lua table representing the
error.
-The arguments of the `redis.call()` and `redis.pcall()` functions are simply
-all the arguments of a well formed Redis command:
+The arguments of the `redis.call()` and `redis.pcall()` functions are simply all
+the arguments of a well formed Redis command:
> eval "return redis.call('set','foo','bar')" 0
OK
-The above script actually sets the key `foo` to the string `bar`.
-However it violates the `EVAL` command semantics as all the keys that the
-script uses should be passed using the KEYS array, in the following way:
+The above script actually sets the key `foo` to the string `bar`. However it
+violates the `EVAL` command semantics as all the keys that the script uses
+should be passed using the KEYS array, in the following way:
> eval "return redis.call('set',KEYS[1],'bar')" 1 foo
OK
-The reason for passing keys in the proper way is that, before of `EVAL` all
-the Redis commands could be analyzed before execution in order to
-establish what are the keys the command will operate on.
+The reason for passing keys in the proper way is that, before `EVAL`, all the
+Redis commands could be analyzed before execution in order to establish what are
+the keys the command will operate on.
-In order for this to be true for `EVAL` also keys must be explicit.
-This is useful in many ways, but especially in order to make sure Redis Cluster
-is able to forward your request to the appropriate cluster node (Redis
-Cluster is a work in progress, but the scripting feature was designed
-in order to play well with it). However this rule is not enforced in order to provide the user with opportunities to abuse the Redis single instance configuration, at the cost of writing scripts not compatible with Redis Cluster.
+In order for this to be true for `EVAL` also keys must be explicit. This is
+useful in many ways, but especially in order to make sure Redis Cluster is able
+to forward your request to the appropriate cluster node (Redis Cluster is a
+work in progress, but the scripting feature was designed in order to play well
+with it). However this rule is not enforced in order to provide the user with
+opportunities to abuse the Redis single instance configuration, at the cost of
+writing scripts not compatible with Redis Cluster.
-Lua scripts can return a value, that is converted from the Lua type to the Redis protocol using a set of conversion rules.
+Lua scripts can return a value, that is converted from the Lua type to the Redis
+protocol using a set of conversion rules.
## Conversion between Lua and Redis data types
-Redis return values are converted into Lua data types when Lua calls a
-Redis command using call() or pcall(). Similarly Lua data types are
-converted into Redis protocol when a Lua script returns some value, so that
-scripts can control what `EVAL` will reply to the client.
+Redis return values are converted into Lua data types when Lua calls a Redis
+command using `redis.call()` or `redis.pcall()`. Similarly, Lua data types are
+converted into the Redis protocol when a Lua script returns some value, so that
+scripts can control what `EVAL` will reply to the client.
-This conversion between data types is designed in a way that if
-a Redis type is converted into a Lua type, and then the result is converted
-back into a Redis type, the result is the same as of the initial value.
+This conversion between data types is designed in a way that if a Redis type is
+converted into a Lua type, and then the result is converted back into a Redis
+type, the result is the same as the initial value.
-In other words there is a one to one conversion between Lua and Redis types.
-The following table shows you all the conversions rules:
+In other words there is a one-to-one conversion between Lua and Redis types. The
+following table shows you all the conversion rules:
**Redis to Lua** conversion table.
@@ -115,30 +119,29 @@ The followings are a few conversion examples:
> eval "return redis.call('get','foo')" 0
"bar"
-The last example shows how it is possible to directly return from Lua
-the return value of `redis.call()` and `redis.pcall()` with the result of
-returning exactly what the called command would return if called directly.
+The last example shows how it is possible to directly return from Lua the return
+value of `redis.call()` and `redis.pcall()` with the result of returning exactly
+what the called command would return if called directly.
## Atomicity of scripts
Redis uses the same Lua interpreter to run all the commands. Also Redis
-guarantees that a script is executed in an atomic way: no other script
-or Redis command will be executed while a script is being executed.
-This semantics is very similar to the one of `MULTI` / `EXEC`.
-From the point of view of all the other clients the effects of a script
-are either still not visible or already completed.
-
-However this also means that executing slow scripts is not a good idea.
-It is not hard to create fast scripts, as the script overhead is very low,
-but if you are going to use slow scripts you should be aware that while the
-script is running no other client can execute commands since the server
-is busy.
+guarantees that a script is executed in an atomic way: no other script or Redis
+command will be executed while a script is being executed. These semantics are
+very similar to those of `MULTI` / `EXEC`. From the point of view of all the
+other clients the effects of a script are either still not visible or already
+completed.
+
+However this also means that executing slow scripts is not a good idea. It is
+not hard to create fast scripts, as the script overhead is very low, but if
+you are going to use slow scripts you should be aware that while the script is
+running no other client can execute commands since the server is busy.
## Error handling
As already stated calls to `redis.call()` resulting into a Redis command error
-will stop the execution of the script and will return that error back, in a
-way that makes it obvious that the error was generated by a script:
+will stop the execution of the script and will return that error back, in a way
+that makes it obvious that the error was generated by a script:
> del foo
(integer) 1
@@ -147,17 +150,17 @@ way that makes it obvious that the error was generated by a script:
> eval "return redis.call('get','foo')" 0
(error) ERR Error running script (call to f_6b1bf486c81ceb7edf3c093f4c48582e38c0e791): ERR Operation against a key holding the wrong kind of value
-Using the `redis.pcall()` command no error is raised, but an error object
-is returned in the format specified above (as a Lua table with an `err`
-field). The user can later return this exact error to the user just returning
-the error object returned by `redis.pcall()`.
+When using `redis.pcall()` no error is raised, but an error object is returned
+in the format specified above (as a Lua table with an `err` field). The script
+can later return this exact error to the user by simply returning the error
+object returned by `redis.pcall()`.
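The difference between the two error modes can be sketched outside Lua. The following Python stand-ins (`redis_call`, `redis_pcall`, and the failing `do_command` are hypothetical names for illustration, not a real client API) mirror the semantics described above: one propagates the error, the other captures it as an object with an `err` field:

```python
# Hypothetical stand-ins mirroring redis.call() vs redis.pcall() error
# semantics; this is not the real Lua API, only an illustration.
class RedisError(Exception):
    pass

def do_command(cmd):
    # Pretend the command fails, as GET against a list key does above.
    raise RedisError("ERR Operation against a key holding the wrong kind of value")

def redis_call(cmd):
    # redis.call(): a command error aborts the whole script and is
    # reported back to the client as a script error.
    return do_command(cmd)

def redis_pcall(cmd):
    # redis.pcall(): the error is trapped and handed back to the script
    # as an object with an `err` field, which the script may itself return.
    try:
        return do_command(cmd)
    except RedisError as e:
        return {"err": str(e)}
```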
## Bandwidth and EVALSHA
-The `EVAL` command forces you to send the script body again and again.
-Redis does not need to recompile the script every time as it uses an internal
-caching mechanism, however paying the cost of the additional bandwidth may
-not be optimal in many contexts.
+The `EVAL` command forces you to send the script body again and again. Redis
+does not need to recompile the script every time, as it uses an internal caching
+mechanism; however, paying the cost of the additional bandwidth may not be
+optimal in many contexts.
On the other hand defining commands using a special command or via `redis.conf`
would be a problem for a few reasons:
@@ -168,8 +171,8 @@ would be a problem for a few reasons:
* Reading an application code the full semantic could not be clear since the application would call commands defined server side.
-In order to avoid the above three problems and at the same time don't incur
-in the bandwidth penalty, Redis implements the `EVALSHA` command.
+In order to avoid the above three problems without incurring the bandwidth
+penalty, Redis implements the `EVALSHA` command.
`EVALSHA` works exactly as `EVAL`, but instead of having a script as first argument it has the SHA1 sum of a script. The behavior is the following:
@@ -192,43 +195,42 @@ Example:
The client library implementation can always optimistically send `EVALSHA` under
the hoods even when the client actually called `EVAL`, in the hope the script
-was already seen by the server. If the `NOSCRIPT` error is returned `EVAL` will be used instead.
+was already seen by the server. If the `NOSCRIPT` error is returned, `EVAL` will
+be used instead.
-Passing keys and arguments as `EVAL` additional arguments is also
-very useful in this context as the script string remains constant and can be
-efficiently cached by Redis.
+Passing keys and arguments as `EVAL` additional arguments is also very useful in
+this context as the script string remains constant and can be efficiently cached
+by Redis.
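A client library might implement this optimistic pattern roughly as follows. This is a minimal Python sketch against an in-memory stand-in for the server-side script cache (`server_eval`, `server_evalsha`, and `NoScriptError` are invented names for illustration; a real implementation would issue the actual commands to Redis):

```python
import hashlib

# In-memory stand-in for the server-side script cache (illustration only).
script_cache = {}

class NoScriptError(Exception):
    pass

def server_evalsha(sha):
    # EVALSHA only runs scripts the server has already seen.
    if sha not in script_cache:
        raise NoScriptError("NOSCRIPT No matching script")
    return "ran: " + script_cache[sha]

def server_eval(script):
    # EVAL always stores the body under its SHA1 sum as a side effect.
    sha = hashlib.sha1(script.encode()).hexdigest()
    script_cache[sha] = script
    return "ran: " + script

def optimistic_eval(script):
    # Try the cheap 40-character EVALSHA first; on NOSCRIPT fall back
    # to a full EVAL, which also populates the cache for next time.
    sha = hashlib.sha1(script.encode()).hexdigest()
    try:
        return server_evalsha(sha)
    except NoScriptError:
        return server_eval(script)
```

After the first full upload every subsequent invocation of the same script body only needs to send its SHA1 sum.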
## Script cache semantics
-Executed scripts are guaranteed to be in the script cache **forever**.
-This means that if an `EVAL` is performed against a Redis instance all the
-subsequent `EVALSHA` calls will succeed.
-
-The only way to flush the script cache is by explicitly calling the
-SCRIPT FLUSH command, that will *completely flush* the scripts cache removing
-all the scripts executed so far. This is usually
-needed only when the instance is going to be instantiated for another
-customer or application in a cloud environment.
-
-The reason why scripts can be cached for long time is that it is unlikely
-for a well written application to have so many different scripts to create
-memory problems. Every script is conceptually like the implementation of
-a new command, and even a large application will likely have just a few
-hundreds of that. Even if the application is modified many times and
-scripts will change, still the memory used is negligible.
-
-The fact that the user can count on Redis not removing scripts
-is semantically a very good thing. For instance an application taking
-a persistent connection to Redis can stay sure that if a script was
-sent once it is still in memory, thus for instance can use EVALSHA
-against those scripts in a pipeline without the chance that an error
-will be generated since the script is not known (we'll see this problem
-in its details later).
+Executed scripts are guaranteed to be in the script cache **forever**. This
+means that if an `EVAL` is performed against a Redis instance all the subsequent
+`EVALSHA` calls will succeed.
+
+The only way to flush the script cache is by explicitly calling the SCRIPT FLUSH
+command, which will *completely flush* the scripts cache, removing all the
+scripts executed so far. This is usually needed only when the instance is going
+to be reassigned to another customer or application in a cloud environment.
+
+The reason why scripts can be cached for a long time is that it is unlikely for
+a well written application to have so many different scripts as to create memory
+problems. Every script is conceptually like the implementation of a new command,
+and even a large application will likely have just a few hundred of them. Even
+if the application is modified many times and scripts change, the memory used
+is still negligible.
+
+The fact that the user can count on Redis not removing scripts is semantically a
+very good thing. For instance an application holding a persistent connection to
+Redis can be sure that if a script was sent once it is still in memory, and can
+thus use `EVALSHA` against those scripts in a pipeline without the risk of an
+error being generated because the script is not known (we'll look at this
+problem in detail later).
## The SCRIPT command
-Redis offers a SCRIPT command that can be used in order to control
-the scripting subsystem. SCRIPT currently accepts three different commands:
+Redis offers a SCRIPT command that can be used in order to control the scripting
+subsystem. SCRIPT currently accepts three different commands:
* SCRIPT FLUSH. This command is the only way to force Redis to flush the
scripts cache. It is mostly useful in a cloud environment where the same
@@ -257,18 +259,18 @@ See the next sections for more information about long running scripts.
## Scripts as pure functions
A very important part of scripting is writing scripts that are pure functions.
-Scripts executed in a Redis instance are replicated on slaves sending the
-same script, instead of the resulting commands. The same happens for the
-Append Only File. The reason is that scripts are much faster than sending
-commands one after the other to a Redis instance, so if the client is
-taking the master very busy sending scripts, turning this scripts into single
-commands for the slave / AOF would result in too much bandwidth for the
-replication link or the Append Only File (and also too much CPU since
-dispatching a command received via network is a lot more work for Redis
-compared to dispatching a command invoked by Lua scripts).
-
-The only drawback with this approach is that scripts are required to
-have the following property:
+Scripts executed in a Redis instance are replicated to slaves by sending the
+same script, instead of the resulting commands. The same happens for the Append
+Only File. The reason is that scripts are much faster than sending commands one
+after the other to a Redis instance, so if the client is keeping the master very
+busy sending scripts, turning these scripts into single commands for the slave /
+AOF would result in too much bandwidth for the replication link or the Append
+Only File (and also too much CPU, since dispatching a command received via the
+network is a lot more work for Redis than dispatching a command invoked by Lua
+scripts).
+
+The only drawback with this approach is that scripts are required to have the
+following property:
* The script always evaluates the same Redis *write* commands with the
same arguments given the same input data set. Operations performed by
@@ -301,9 +303,9 @@ time a new script is executed. This means that calling `math.random` will
always generate the same sequence of numbers every time a script is
executed if `math.randomseed` is not used.
-However the user is still able to write commands with random behaviors
-using the following simple trick. Imagine I want to write a Redis
-script that will populate a list with N random integers.
+However the user is still able to write commands with random behaviors using
+the following simple trick. Imagine I want to write a Redis script that will
+populate a list with N random integers.
I can start writing the following script, using a small Ruby program:
@@ -340,11 +342,10 @@ following elements:
9) "0.74990198051087"
10) "0.17082803611217"
-In order to make it a pure function, but still making sure that every
-invocation of the script will result in different random elements, we can
-simply add an additional argument to the script, that will be used in order to
-seed the Lua pseudo random number generator. The new script will be like the
-following:
+In order to make it a pure function, while still making sure that every
+invocation of the script will result in different random elements, we can simply
+add an additional argument to the script, which will be used to seed the Lua
+pseudo random number generator. The new script will look like the following:
RandomPushScript = <<EOF
local i = tonumber(ARGV[1])
@@ -360,13 +361,13 @@ following:
r.del(:mylist)
puts r.eval(RandomPushScript,1,:mylist,10,rand(2**32))
-What we are doing here is sending the seed of the PRNG as one of the
-arguments. This way the script output will be the same given the same
-arguments, but we are changing one of the argument at every invocation,
-generating the random seed client side. The seed will be propagated as
-one of the arguments both in the replication link and in the Append Only
-File, guaranteeing that the same changes will be generated when the AOF
-is reloaded or when the slave will process the script.
+What we are doing here is sending the seed of the PRNG as one of the arguments.
+This way the script output will be the same given the same arguments, but we are
+changing one of the arguments at every invocation, generating the random seed
+client side. The seed will be propagated as one of the arguments both in the
+replication link and in the Append Only File, guaranteeing that the same changes
+will be generated when the AOF is reloaded or when the slave processes the
+script.
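The same trick can be sketched in plain Python (a stand-in for the Lua script, since Python's PRNG is not the one Redis embeds; `random_push` is an illustrative name): the seed travels as an ordinary argument, so identical arguments always reproduce identical "random" output:

```python
import random

def random_push(argv):
    # Stand-in for the Lua script body: ARGV[1] is the element count,
    # ARGV[2] is the client-provided seed.
    count, seed = int(argv[0]), int(argv[1])
    rng = random.Random(seed)   # plays the role of math.randomseed(ARGV[2])
    return [rng.randrange(2**32) for _ in range(count)]
```

Given the same arguments the output is identical, which is exactly the property replication and the AOF need; the client varies the seed argument on each invocation to obtain different elements.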
Note: an important part of this behavior is that the PRNG that Redis implements
as `math.random` and `math.randomseed` is guaranteed to have the same output
@@ -376,10 +377,11 @@ like big or little endian systems will still produce the same output.
## Global variables protection
Redis scripts are not allowed to create global variables, in order to avoid
-leaking data into the Lua state. If a script requires to take state across
-calls (a pretty uncommon need) it should use Redis keys instead.
+leaking data into the Lua state. If a script needs to keep state across calls
+(a pretty uncommon need) it should use Redis keys instead.
-When a global variable access is attempted the script is terminated and EVAL returns with an error:
+When a global variable access is attempted the script is terminated and EVAL
+returns with an error:
redis 127.0.0.1:6379> eval 'a=10' 0
(error) ERR Error running script (call to f_933044db579a2f8fd45d8065f04a8d0249383e57): user_script:1: Script attempted to create global variable 'a'
@@ -392,7 +394,8 @@ protection, is not hard. However it is hardly possible to do it accidentally.
If the user messes with the Lua global state, the consistency of AOF and
replication is not guaranteed: don't do it.
-Note for Lua newbies: in order to avoid using global variables in your scripts simply declare every variable you are going to use using the *local* keyword.
+Note for Lua newbies: in order to avoid using global variables in your scripts
+simply declare every variable you are going to use with the *local* keyword.
## Available libraries
@@ -406,8 +409,8 @@ The Redis Lua interpreter loads the following Lua libraries:
* cjson lib.
* cmsgpack lib.
-Every Redis instance is *guaranteed* to have all the above libraries so you
-can be sure that the environment for your Redis scripts is always the same.
+Every Redis instance is *guaranteed* to have all the above libraries so you can
+be sure that the environment for your Redis scripts is always the same.
The CJSON library allows to manipulate JSON data in a very fast way from Lua.
All the other libraries are standard Lua libraries.
@@ -426,8 +429,9 @@ It is possible to write to the Redis log file from Lua scripts using the
* `redis.LOG_NOTICE`
* `redis.LOG_WARNING`
-They exactly correspond to the normal Redis log levels. Only logs emitted by scripting using a log level that is equal or greater than the currently configured
-Redis instance log level will be emitted.
+They exactly correspond to the normal Redis log levels. Only logs emitted by
+scripts using a log level equal to or greater than the currently configured
+Redis instance log level will be emitted.
The `message` argument is simply a string. Example:
@@ -440,26 +444,24 @@ Will generate the following:
## Sandbox and maximum execution time
Scripts should never try to access the external system, like the file system,
-nor calling any other system call. A script should just do its work operating
-on Redis data and passed arguments.
+nor call any other system call. A script should just do its work operating on
+Redis data and passed arguments.
Scripts are also subject to a maximum execution time (five seconds by default).
This default timeout is huge since a script should run usually in a sub
millisecond amount of time. The limit is mostly needed in order to avoid
-problems when developing scripts that may loop forever for a programming
-error.
+problems when developing scripts that may loop forever because of a programming
+error.
-It is possible to modify the maximum time a script can be executed
-with milliseconds precision, either via `redis.conf` or using the
-CONFIG GET / CONFIG SET command. The configuration parameter
-affecting max execution time is called `lua-time-limit`.
+It is possible to modify the maximum time a script can run, with millisecond
+precision, either via `redis.conf` or using the CONFIG GET / CONFIG SET command.
+The configuration parameter affecting the max execution time is called
+`lua-time-limit`.
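For example, the default of five seconds corresponds to the following `redis.conf` line (the value is expressed in milliseconds):

```
# Maximum time in milliseconds a Lua script may run before Redis starts
# replying with BUSY errors.
lua-time-limit 5000
```

The same parameter can be changed at runtime with `CONFIG SET lua-time-limit <milliseconds>`.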
-When a script reaches the timeout it is not automatically terminated by
-Redis since this violates the contract Redis has with the scripting engine
-to ensure that scripts are atomic in nature. Stopping a script half-way means
-to possibly leave the dataset with half-written data inside.
-For this reasons when a script executes for more than the specified time
-the following happens:
+When a script reaches the timeout it is not automatically terminated by Redis,
+since this would violate the contract Redis has with the scripting engine to
+ensure that scripts are atomic in nature. Stopping a script half-way means
+possibly leaving the dataset with half-written data. For this reason, when a
+script executes for more than the specified time the following happens:
* Redis logs that a script that is running for too much time is still in execution.
* It starts accepting commands again from other clients, but will reply with a BUSY error to all the clients sending normal commands. The only allowed commands in this status are `SCRIPT KILL` and `SHUTDOWN NOSAVE`.
@@ -469,12 +471,11 @@ the following happens:
## EVALSHA in the context of pipelining
Care should be taken when executing `EVALSHA` in the context of a pipelined
-request, since even in a pipeline the order of execution of commands must
-be guaranteed. If `EVALSHA` will return a `NOSCRIPT` error the command can not
-be reissued later otherwise the order of execution is violated.
+request, since even in a pipeline the order of execution of commands must be
+guaranteed. If `EVALSHA` returns a `NOSCRIPT` error, the command cannot be
+reissued later, otherwise the order of execution is violated.
-The client library implementation should take one of the following
-approaches:
+The client library implementation should take one of the following approaches:
* Always use plain `EVAL` when in the context of a pipeline.
8 commands/exec.md
@@ -4,9 +4,8 @@ normal.
[transactions]: /topics/transactions
-When using `WATCH`, `EXEC` will execute commands only if the
-watched keys were not modified, allowing for a [check-and-set
-mechanism][cas].
+When using `WATCH`, `EXEC` will execute commands only if the watched keys were
+not modified, allowing for a [check-and-set mechanism][cas].
[cas]: /topics/transactions#cas
@@ -15,5 +14,4 @@ mechanism][cas].
@multi-bulk-reply: each element being the reply to each of the commands
in the atomic transaction.
-When using `WATCH`, `EXEC` can return a @nil-reply if the execution was
-aborted.
+When using `WATCH`, `EXEC` can return a @nil-reply if the execution was aborted.
83 commands/expire.md
@@ -2,8 +2,8 @@ Set a timeout on `key`. After the timeout has expired, the key will
automatically be deleted. A key with an associated timeout is often said to be
_volatile_ in Redis terminology.
-The timeout is cleared only when the key is removed using the `DEL` command or
-overwritten using the `SET` or `GETSET` commands. This means that all the
+The timeout is cleared only when the key is removed using the `DEL` command
+or overwritten using the `SET` or `GETSET` commands. This means that all the
operations that conceptually *alter* the value stored at the key without
replacing it with a new one will leave the timeout untouched. For instance,
incrementing the value of a key with `INCR`, pushing a new value into a list
@@ -25,15 +25,15 @@ matter if the original `Key_A` had a timeout associated or not, the new key
It is possible to call `EXPIRE` using as argument a key that already has an
existing expire set. In this case the time to live of a key is *updated* to the
-new value. There are many useful applications for this, an example is
-documented in the *Navigation session* pattern section below.
+new value. There are many useful applications for this, an example is documented
+in the *Navigation session* pattern section below.
## Differences in Redis prior 2.1.3
-In Redis versions prior **2.1.3** altering a key with an expire set using
-a command altering its value had the effect of removing the key entirely.
-This semantics was needed because of limitations in the replication layer that
-are now fixed.
+In Redis versions prior to **2.1.3**, altering a key with an expire set using a
+command altering its value had the effect of removing the key entirely. These
+semantics were needed because of limitations in the replication layer that are
+now fixed.
@return
@@ -60,8 +60,8 @@ at this set of page views as a *Navigation session* if your user, that may
contain interesting information about what kind of products he or she is
looking for currently, so that you can recommend related products.
-You can easily model this pattern in Redis using the following strategy:
-every time the user does a page view you call the following commands:
+You can easily model this pattern in Redis using the following strategy: every
+time the user does a page view you call the following commands:
MULTI
RPUSH pagewviews.user:<userid> http://.....
@@ -79,68 +79,75 @@ using `RPUSH`.
## Keys with an expire
-Normally Redis keys are created without an associated time to live. The key
-will simply live forever, unless it is removed by the user in an explicit
-way, for instance using the `DEL` command.
+Normally Redis keys are created without an associated time to live. The key will
+simply live forever, unless it is removed by the user in an explicit way, for
+instance using the `DEL` command.
The `EXPIRE` family of commands is able to associate an expire to a given key,
at the cost of some additional memory used by the key. When a key has an expire
set, Redis will make sure to remove the key when the specified amount of time
elapsed.
-The key time to live can be updated or entirely removed using the `EXPIRE` and `PERSIST` command (or other strictly related commands).
+The key time to live can be updated or entirely removed using the `EXPIRE` and
+`PERSIST` commands (or other strictly related commands).
## Expire accuracy
-In Redis 2.4 the expire might not be pin-point accurate, and it could be
-between zero to one seconds out.
+In Redis 2.4 the expire might not be pin-point accurate, and it could be between
+zero to one seconds out.
Since Redis 2.6 the expire error is from 0 to 1 milliseconds.
## Expires and persistence
-Keys expiring information is stored as absolute Unix timestamps (in milliseconds in case of Redis version 2.6 or greater). This means that the time is flowing even when the Redis instance is not active.
+Keys expiring information is stored as absolute Unix timestamps (in milliseconds
+in the case of Redis version 2.6 or greater). This means that the time keeps
+flowing even when the Redis instance is not active.
-For expires to work well, the computer time must be taken stable. If you move an RDB file from two computers with a big desync in their clocks, funny things may happen (like all the keys loaded to be expired at loading time).
+For expires to work well, the computer time must be stable. If you move an RDB
+file between two computers with a big desync in their clocks, funny things may
+happen (like all the keys being expired at loading time).
-Even running instances will always check the computer clock, so for instance if you set a key with a time to live of 1000 seconds, and then set your computer time 2000 seconds in the future, the key will be expired immediately, instead of lasting for 1000 seconds.
+Even running instances will always check the computer clock, so for instance if
+you set a key with a time to live of 1000 seconds, and then set your computer
+time 2000 seconds in the future, the key will be expired immediately, instead of
+lasting for 1000 seconds.
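Because the expire is an absolute timestamp, checking it is just a comparison with the current clock, which is why the clock jump above expires the key immediately. A minimal Python sketch of this bookkeeping (`set_expire` and `is_expired` are illustrative names, not Redis internals):

```python
import time

def set_expire(ttl_seconds, now=None):
    # The expire is stored as an absolute Unix timestamp, not a countdown.
    now = time.time() if now is None else now
    return now + ttl_seconds

def is_expired(expire_at, now=None):
    # Expiry is just a comparison against the current clock.
    now = time.time() if now is None else now
    return now >= expire_at

t0 = 1_000_000_000                    # hypothetical "current" clock reading
expire_at = set_expire(1000, now=t0)  # TTL of 1000 seconds
# Jumping the clock 2000 seconds ahead makes the key expired immediately.
```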
## How Redis expires keys
Redis keys are expired in two ways: a passive way, and an active way.
-A key is actively expired simply when some client tries to access it, and
-the key is found to be timed out.
+A key is passively expired simply when some client tries to access it, and the
+key is found to be timed out.
-Of course this is not enough as there are expired keys that will never
-be accessed again. This keys should be expired anyway, so periodically
-Redis test a few keys at random among keys with an expire set.
-All the keys that are already expired are deleted from the keyspace.
+Of course this is not enough, as there are expired keys that will never be
+accessed again. These keys should be expired anyway, so periodically Redis tests
+a few keys at random among keys with an expire set. All the keys that are
+already expired are deleted from the keyspace.
Specifically this is what Redis does 10 times per second:
1. Test 100 random keys from the set of keys with an associated expire.
2. Delete all the keys found expired.
3. If more than 25 keys were expired, start again from step 1.
-This is a trivial probabilistic algorithm, basically the assumption is
-that our sample is representative of the whole key space,
-and we continue to expire until the percentage of keys that are likely
-to be expired is under 25%
+This is a trivial probabilistic algorithm: basically the assumption is that our
+sample is representative of the whole key space, and we continue to expire until
+the percentage of keys that are likely to be expired is under 25%.
-This means that at any given moment the maximum amount of keys already
-expired that are using memory is at max equal to max amount of write
-operations per second divided by 4.
+This means that at any given moment the maximum amount of already expired keys
+that are still using memory is at most equal to the maximum amount of write
+operations per second divided by 4.
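The three steps above can be simulated directly. The following Python sketch (an illustration of the described algorithm, not Redis's actual C implementation; `active_expire_cycle` is an invented name) stops sampling once at most 25 of the 100 sampled keys turn out to be expired:

```python
import random

def active_expire_cycle(expires, now, sample_size=100, threshold=25):
    # `expires` maps key -> absolute expire time and is mutated in place.
    # Keep sampling until at most `threshold` of the sampled keys are
    # found to be already expired.
    while expires:
        sample = random.sample(list(expires), min(sample_size, len(expires)))
        dead = [k for k in sample if expires[k] <= now]
        for k in dead:
            del expires[k]
        if len(dead) <= threshold:
            break
```

Only keys found expired in a sample are deleted; keys whose timeout has not fired are never touched, so the loop drives the share of already-expired keys below roughly 25% without scanning the whole keyspace.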
## How expires are handled in the replication link and AOF file
-In order to obtain a correct behavior without sacrificing consistency, when
-a key expires, a `DEL` operation is synthesized in both the AOF file and gains
-all the attached slaves. This way the expiration process is centralized in
-the master instance, and there is no chance of consistency errors.
+In order to obtain correct behavior without sacrificing consistency, when a key
+expires, a `DEL` operation is synthesized in the AOF file and transmitted to
+all the attached slaves. This way the expiration process is centralized in the
+master instance, and there is no chance of consistency errors.
However while the slaves connected to a master will not expire keys
independently (but will wait for the `DEL` coming from the master), they'll
still take the full state of the expires existing in the dataset, so when a
-slave is elected to a master it will be able to expire the keys
-independently, fully acting as a master.
+slave is elected to a master it will be able to expire the keys independently,
+fully acting as a master.
3 commands/expireat.md
@@ -1,7 +1,8 @@
`EXPIREAT` has the same effect and semantic as `EXPIRE`, but
instead of specifying the number of seconds representing the TTL (time to live), it takes an absolute [Unix timestamp][2] (seconds since January 1, 1970).
-Please for the specific semantics of the command refer to the documentation of `EXPIRE`.
+Please refer to the documentation of `EXPIRE` for the specific semantics of the
+command.
[2]: http://en.wikipedia.org/wiki/Unix_time
3 commands/flushall.md
@@ -1,4 +1,5 @@
-Delete all the keys of all the existing databases, not just the currently selected one. This command never fails.
+Delete all the keys of all the existing databases, not just the currently
+selected one. This command never fails.
@return
6 commands/get.md
@@ -1,6 +1,6 @@
-Get the value of `key`. If the key does not exist the special value `nil` is returned.
-An error is returned if the value stored at `key` is not a string, because `GET`
-only handles string values.
+Get the value of `key`. If the key does not exist the special value `nil` is
+returned. An error is returned if the value stored at `key` is not a string,
+because `GET` only handles string values.
@return
4 commands/hdel.md
@@ -1,6 +1,6 @@
Removes the specified fields from the hash stored at `key`. Specified fields
-that do not exist within this hash are ignored.
-If `key` does not exist, it is treated as an empty hash and this command returns
+that do not exist within this hash are ignored. If `key` does not exist, it is
+treated as an empty hash and this command returns
`0`.
@return
4 commands/hgetall.md
@@ -1,6 +1,6 @@
Returns all fields and values of the hash stored at `key`. In the returned
-value, every field name is followed by its value, so the length
-of the reply is twice the size of the hash.
+value, every field name is followed by its value, so the length of the reply is
+twice the size of the hash.
@return
3 commands/hincrby.md
@@ -3,8 +3,7 @@ Increments the number stored at `field` in the hash stored at `key` by
`field` does not exist the value is set to `0` before the operation is
performed.
-The range of values supported by `HINCRBY` is limited to 64 bit signed
-integers.
+The range of values supported by `HINCRBY` is limited to 64 bit signed integers.
@return
12 commands/hincrbyfloat.md
@@ -1,9 +1,14 @@
-Increment the specified `field` of an hash stored at `key`, and representing a floating point number, by the specified `increment`. If the field does not exist, it is set to `0` before performing the operation. An error is returned if one of the following conditions occur:
+Increment the specified `field` of a hash stored at `key`, representing a
+floating point number, by the specified `increment`. If the field does not
+exist, it is set to `0` before performing the operation. An error is returned if
+one of the following conditions occurs:
* The field contains a value of the wrong type (not a string).
* The current field content or the specified increment are not parsable as a double precision floating point number.
-The exact behavior of this command is identical to the one of the `INCRBYFLOAT` command, please refer to the documentation of `INCRBYFLOAT` for further information.
+The exact behavior of this command is identical to the one of the `INCRBYFLOAT`
+command; please refer to the documentation of `INCRBYFLOAT` for further
+information.
@return
@@ -19,5 +24,6 @@ The exact behavior of this command is identical to the one of the `INCRBYFLOAT`
## Implementation details
-The command is always propagated in the replication link and the Append Only File as a `HSET` operation, so that differences in the underlying floating point
+The command is always propagated in the replication link and the Append Only
+File as a `HSET` operation, so that differences in the underlying floating point
math implementation will not be sources of inconsistency.
View
4 commands/hmget.md
@@ -2,8 +2,8 @@ Returns the values associated with the specified `fields` in the hash stored at
`key`.
For every `field` that does not exist in the hash, a `nil` value is returned.
-Because a non-existing keys are treated as empty hashes, running `HMGET`
-against a non-existing `key` will return a list of `nil` values.
+Because non-existing keys are treated as empty hashes, running `HMGET` against
+a non-existing `key` will return a list of `nil` values.
@return
View
6 commands/hmset.md
@@ -1,6 +1,6 @@
-Sets the specified fields to their respective values in the hash
-stored at `key`. This command overwrites any existing fields in the hash.
-If `key` does not exist, a new key holding a hash is created.
+Sets the specified fields to their respective values in the hash stored at
+`key`. This command overwrites any existing fields in the hash. If `key` does
+not exist, a new key holding a hash is created.
@return
View
4 commands/hset.md
@@ -1,6 +1,6 @@
Sets `field` in the hash stored at `key` to `value`. If `key` does not exist, a
-new key holding a hash is created. If `field` already exists in the hash, it
-is overwritten.
+new key holding a hash is created. If `field` already exists in the hash, it is
+overwritten.
@return
View
49 commands/incr.md
@@ -1,16 +1,15 @@
-Increments the number stored at `key` by one.
-If the key does not exist, it is set to `0` before performing the operation. An
-error is returned if the key contains a value of the wrong type or contains a
-string that can not be represented as integer. This operation is limited to 64
-bit signed integers.
+Increments the number stored at `key` by one. If the key does not exist, it
+is set to `0` before performing the operation. An error is returned if the
+key contains a value of the wrong type or contains a string that cannot be
+represented as an integer. This operation is limited to 64 bit signed integers.
**Note**: this is a string operation because Redis does not have a dedicated
integer type. The string stored at the key is interpreted as a base-10 **64
bit signed integer** to execute the operation.
Redis stores integers in their integer representation, so for string values
-that actually hold an integer, there is no overhead for storing the
-string representation of the integer.
+that actually hold an integer, there is no overhead for storing the string
+representation of the integer.
@return
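The base-10 string semantics described above can be modeled in plain Python
(a sketch of the behavior, not the Redis implementation; `incr` and the
dict-backed `store` are hypothetical):

```python
def incr(store, key):
    """Model INCR: interpret the stored string as a base-10 64-bit
    signed integer, add one, and store the result back as a string."""
    raw = store.get(key, "0")  # a missing key is treated as 0
    try:
        value = int(raw)
    except ValueError:
        raise TypeError("value is not an integer or out of range")
    value += 1
    if not (-2**63 <= value <= 2**63 - 1):
        raise OverflowError("increment would exceed 64 bit signed range")
    store[key] = str(value)
    return value

db = {}
incr(db, "counter")        # missing key starts from 0
incr(db, "counter")
print(db["counter"])       # prints 2 -- stored back as a string
```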
@@ -44,12 +43,12 @@ This simple pattern can be extended in many ways:
The rate limiter pattern is a special counter that is used to limit the rate
at which an operation can be performed. The classical materialization of this
-pattern involves limiting the number of requests that can be performed against
-a public API.
+pattern involves limiting the number of requests that can be performed against a
+public API.
We provide two implementations of this pattern using `INCR`, where we assume
-that the problem to solve is limiting the number of API calls to a maximum
-of *ten requests per second per IP address*.
+that the problem to solve is limiting the number of API calls to a maximum of
+*ten requests per second per IP address*.
## Pattern: Rate limiter 1
@@ -69,19 +68,17 @@ The more simple and direct implementation of this pattern is the following:
PERFORM_API_CALL()
END
-Basically we have a counter for every IP, for every different second.
-But this counters are always incremented setting an expire of 10 seconds so
-that they'll be removed by Redis automatically when the current second is
-a different one.
+Basically we have a counter for every IP, for every different second. But these
+counters are always incremented setting an expire of 10 seconds so that they'll
+be removed by Redis automatically when the current second is a different one.
Note the use of `MULTI` and `EXEC` in order to make sure that we'll both
increment and set the expire at every API call.
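The pseudocode above can be sketched as a self-contained Python model (no real
Redis involved; `RateLimiter` and its in-memory counter dict are illustrative
stand-ins for the per-second keys):

```python
import time

class RateLimiter:
    """Model of the per-second counter pattern: one counter per
    (ip, second) pair, allowing at most `limit` calls per second."""
    def __init__(self, limit=10):
        self.limit = limit
        self.counters = {}  # (ip, second) -> count; Redis would EXPIRE these

    def allow(self, ip, now=None):
        second = int(now if now is not None else time.time())
        key = (ip, second)
        # against Redis, INCR + EXPIRE would run inside MULTI/EXEC
        self.counters[key] = self.counters.get(key, 0) + 1
        return self.counters[key] <= self.limit

rl = RateLimiter(limit=10)
results = [rl.allow("1.2.3.4", now=1000) for _ in range(12)]
print(results.count(True))   # 10 -- calls beyond the limit are rejected
```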
## Pattern: Rate limiter 2
-An alternative implementation uses a single counter, but is a bit more
-complex to get it right without race conditions. We'll examine different
-variants.
+An alternative implementation uses a single counter, but it is a bit more
+complex to get right without race conditions. We'll examine different variants.
FUNCTION LIMIT_API_CALL(ip):
current = GET(ip)
@@ -104,9 +101,9 @@ from the first request performed in the current second. If there are more than
client performs the `INCR` command but does not perform the `EXPIRE`, the
key will be leaked until we see the same IP address again.
-This can be fixed easily turning the `INCR` with optional `EXPIRE` into a
-Lua script that is send using the `EVAL` command (only available since Redis
-version 2.6).
+This can be fixed easily by turning the `INCR` with optional `EXPIRE` into a
+Lua script that is sent using the `EVAL` command (only available since Redis
+version 2.6).
local current
current = redis.call("incr",KEYS[1])
@@ -115,8 +112,10 @@ version 2.6).
end
There is a different way to fix this issue without using scripting, but using
-Redis lists instead of counters.
-The implementation is more complex and uses more advanced features but has the advantage of remembering the IP addresses of the clients currently performing an API call, that may be useful or not depending on the application.
+Redis lists instead of counters. The implementation is more complex and uses
+more advanced features, but has the advantage of remembering the IP addresses
+of the clients currently performing an API call, which may or may not be
+useful depending on the application.
FUNCTION LIMIT_API_CALL(ip)
current = LLEN(ip)
@@ -136,6 +135,8 @@ The implementation is more complex and uses more advanced features but has the a
The `RPUSHX` command only pushes the element if the key already exists.
-Note that we have a race here, but it is not a problem: `EXISTS` may return false but the key may be created by another client before we create it inside the
+Note that we have a race here, but it is not a problem: `EXISTS` may return
+false but the key may be created by another client before we create it
+inside the
`MULTI`/`EXEC` block. However this race will just miss an API call under rare
conditions, so the rate limiting will still work correctly.
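Under the same assumptions, the list-based variant sketched above might look
like this in an in-process Python model (the `ListRateLimiter` class and its
TTL bookkeeping are hypothetical stand-ins for the Redis key expiry):

```python
import time

class ListRateLimiter:
    """Model of the list-based variant: the list length is the call
    count for the current window, and the list also remembers which
    clients called (here we just push the ip itself)."""
    def __init__(self, limit=10, ttl=1.0):
        self.limit = limit
        self.ttl = ttl
        self.lists = {}    # ip -> (expiry_time, [entries])

    def allow(self, ip, now=None):
        now = now if now is not None else time.time()
        entry = self.lists.get(ip)
        if entry is not None and entry[0] <= now:
            del self.lists[ip]          # Redis would expire the key itself
            entry = None
        if entry is not None and len(entry[1]) >= self.limit:
            return False
        if entry is None:
            # RPUSH + EXPIRE inside MULTI/EXEC in the Redis version
            self.lists[ip] = (now + self.ttl, [ip])
        else:
            entry[1].append(ip)         # RPUSHX in the Redis version
        return True

rl = ListRateLimiter(limit=10, ttl=1.0)
allowed = [rl.allow("1.2.3.4", now=0.5) for _ in range(12)]
print(allowed.count(True))   # 10 -- only the first ten calls pass
```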
View
9 commands/incrby.md
@@ -1,8 +1,7 @@
-Increments the number stored at `key` by `increment`.
-If the key does not exist, it is set to `0` before performing the operation. An
-error is returned if the key contains a value of the wrong type or contains a
-string that can not be represented as integer. This operation is limited to 64
-bit signed integers.
+Increments the number stored at `key` by `increment`. If the key does not exist,
+it is set to `0` before performing the operation. An error is returned if the
+key contains a value of the wrong type or contains a string that cannot be
+represented as an integer. This operation is limited to 64 bit signed integers.
See `INCR` for extra information on increment/decrement operations.
View
17 commands/incrbyfloat.md
@@ -1,14 +1,20 @@
-Increment the string representing a floating point number stored at `key` by
-the specified `increment`. If the key does not exist, it is set to `0` before performing the operation. An error is returned if one of the following conditions occur:
+Increment the string representing a floating point number stored at `key`
+by the specified `increment`. If the key does not exist, it is set to `0`
+before performing the operation. An error is returned if one of the following
+conditions occurs:
* The key contains a value of the wrong type (not a string).
* The current key content or the specified increment are not parsable as a double precision floating point number.
-If the command is successful the new incremented value is stored as the new value of the key (replacing the old one), and returned to the caller as a string.
+If the command is successful the new incremented value is stored as the new
+value of the key (replacing the old one), and returned to the caller as a
+string.
Both the value already contained in the string key and the increment argument
can be optionally provided in exponential notation; however, the value computed
-after the increment is stored consistently in the same format, that is, an integer number followed (if needed) by a dot, and a variable number of digits representing the decimal part of the number. Trailing zeroes are always removed.
+after the increment is stored consistently in the same format, that is, an
+integer number followed (if needed) by a dot, and a variable number of digits
+representing the decimal part of the number. Trailing zeroes are always removed.
The precision of the output is fixed at 17 digits after the decimal point
regardless of the actual internal precision of the computation.
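The output format described above (fixed-point, trailing zeroes stripped) can
be approximated in Python. Note this sketch uses double-precision floats, so
only binary-exact values reproduce the described output faithfully; the
`incrbyfloat` helper is illustrative, not the Redis implementation:

```python
def incrbyfloat(store, key, increment):
    """Model INCRBYFLOAT's output format: up to 17 digits after the
    decimal point, trailing zeroes (and a trailing dot) stripped."""
    current = float(store.get(key, "0"))
    result = current + float(increment)
    text = f"{result:.17f}".rstrip("0").rstrip(".")
    store[key] = text
    return text

db = {"x": "10.5"}
print(incrbyfloat(db, "x", "0.25"))   # 10.75
print(incrbyfloat(db, "y", "5.0e3"))  # exponential input, plain output: 5000
```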
@@ -27,5 +33,6 @@ regardless of the actual internal precision of the computation.
## Implementation details
-The command is always propagated in the replication link and the Append Only File as a `SET` operation, so that differences in the underlying floating point
+The command is always propagated in the replication link and the Append Only
+File as a `SET` operation, so that differences in the underlying floating point
math implementation will not be sources of inconsistency.
View
4 commands/info.md
@@ -1,5 +1,5 @@
-The `INFO` command returns information and statistics about the server
-in a format that is simple to parse by computers and easy to read by humans.
+The `INFO` command returns information and statistics about the server in a
+format that is simple to parse by computers and easy to read by humans.
@return
View
6 commands/keys.md
@@ -1,8 +1,8 @@
Returns all keys matching `pattern`.
-While the time complexity for this operation is O(N), the constant
-times are fairly low. For example, Redis running on an entry level laptop can
-scan a 1 million key database in 40 milliseconds.
+While the time complexity for this operation is O(N), the constant times are
+fairly low. For example, Redis running on an entry level laptop can scan a 1
+million key database in 40 milliseconds.
**Warning**: consider `KEYS` as a command that should only be used in
production environments with extreme care. It may ruin performance when it is
View
8 commands/lastsave.md
@@ -1,7 +1,7 @@
-Return the UNIX TIME of the last DB save executed with success.
-A client may check if a `BGSAVE` command succeeded reading the `LASTSAVE`
-value, then issuing a `BGSAVE` command and checking at regular intervals
-every N seconds if `LASTSAVE` changed.
+Return the UNIX TIME of the last DB save executed with success. A client may
+check if a `BGSAVE` command succeeded by reading the `LASTSAVE` value, then
+issuing a `BGSAVE` command and checking at regular intervals every N seconds if
+`LASTSAVE` changed.
@return
View
10 commands/lindex.md
@@ -1,8 +1,8 @@
-Returns the element at index `index` in the list stored at `key`.
-The index is zero-based, so `0` means the first element, `1` the second
-element and so on. Negative indices can be used to designate elements
-starting at the tail of the list. Here, `-1` means the last element, `-2` means
-the penultimate and so forth.
+Returns the element at index `index` in the list stored at `key`. The index
+is zero-based, so `0` means the first element, `1` the second element and so
+on. Negative indices can be used to designate elements starting at the tail of
+the list. Here, `-1` means the last element, `-2` means the penultimate and so
+forth.
When the value at `key` is not a list, an error is returned.
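Python lists happen to follow the same indexing convention, which makes for a
quick illustration (the `lindex` helper is hypothetical, returning `None` where
Redis returns nil):

```python
mylist = ["a", "b", "c"]   # as if built with: RPUSH mylist a b c

def lindex(lst, index):
    """Model LINDEX: zero-based, negative indices count from the tail,
    out-of-range indices yield None (Redis nil)."""
    try:
        return lst[index]
    except IndexError:
        return None

print(lindex(mylist, 0))    # a   -- the first element
print(lindex(mylist, -1))   # c   -- the last element
print(lindex(mylist, 99))   # None -- out of range yields nil
```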
View
4 commands/linsert.md
@@ -1,5 +1,5 @@
-Inserts `value` in the list stored at `key` either before or after the
-reference value `pivot`.
+Inserts `value` in the list stored at `key` either before or after the reference
+value `pivot`.
When `key` does not exist, it is considered an empty list and no operation is
performed.
View
6 commands/llen.md
@@ -1,6 +1,6 @@
-Returns the length of the list stored at `key`.
-If `key` does not exist, it is interpreted as an empty list and `0` is returned.
-An error is returned when the value stored at `key` is not a list.
+Returns the length of the list stored at `key`. If `key` does not exist, it is
+interpreted as an empty list and `0` is returned. An error is returned when the
+value stored at `key` is not a list.
@return
View
16 commands/lpush.md
@@ -1,9 +1,13 @@
-Insert all the specified values at the head of the list stored at `key`.
-If `key` does not exist, it is created as empty list before performing
-the push operations.
-When `key` holds a value that is not a list, an error is returned.
-
-It is possible to push multiple elements using a single command call just specifying multiple arguments at the end of the command. Elements are inserted one after the other to the head of the list, from the leftmost element to the rightmost element. So for instance the command `LPUSH mylist a b c` will result into a list containing `c` as first element, `b` as second element and `a` as third element.
+Insert all the specified values at the head of the list stored at `key`. If
+`key` does not exist, it is created as an empty list before performing the push
+operations. When `key` holds a value that is not a list, an error is returned.
+
+It is possible to push multiple elements using a single command call just by
+specifying multiple arguments at the end of the command. Elements are inserted
+one after the other to the head of the list, from the leftmost element to the
+rightmost element. So for instance the command `LPUSH mylist a b c` will result
+in a list containing `c` as first element, `b` as second element and `a` as
+third element.
@return
View
6 commands/lpushx.md
@@ -1,6 +1,6 @@
-Inserts `value` at the head of the list stored at `key`, only if `key`
-already exists and holds a list. In contrary to `LPUSH`, no operation will
-be performed when `key` does not yet exist.
+Inserts `value` at the head of the list stored at `key`, only if `key` already
+exists and holds a list. In contrast to `LPUSH`, no operation will be performed
+when `key` does not yet exist.
@return
View
13 commands/lrange.md
@@ -1,4 +1,4 @@
-Returns the specified elements of the list stored at `key`. The offsets
+Returns the specified elements of the list stored at `key`. The offsets
`start` and `stop` are zero-based indexes, with `0` being the first element of
the list (the head of the list), `1` being the next element and so on.
@@ -8,18 +8,17 @@ penultimate, and so on.
## Consistency with range functions in various programming languages
-Note that if you have a list of numbers from 0 to 100, `LRANGE list 0 10` will
-return 11 elements, that is, the rightmost item is included. This **may or may
-not** be consistent with behavior of range-related functions in your
+Note that if you have a list of numbers from 0 to 100, `LRANGE list 0 10`
+will return 11 elements, that is, the rightmost item is included. This **may
+or may not** be consistent with behavior of range-related functions in your
programming language of choice (think Ruby's `Range.new`, `Array#slice` or
Python's `range()` function).
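A quick Python comparison of the two conventions:

```python
mylist = list(range(0, 101))      # a list of numbers from 0 to 100

# LRANGE list 0 10 includes BOTH endpoints, so it returns 11 elements.
lrange_0_10 = mylist[0:10 + 1]    # Python slices exclude the stop index
print(len(lrange_0_10))           # 11

# Python's range(0, 10), by contrast, excludes the right endpoint.
print(len(range(0, 10)))          # 10
```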
## Out-of-range indexes
Out of range indexes will not produce an error. If `start` is larger than the
-end of the list, an empty list is returned. If `stop` is
-larger than the actual end of the list, Redis will treat it like the last
-element of the list.
+end of the list, an empty list is returned. If `stop` is larger than the actual
+end of the list, Redis will treat it like the last element of the list.
@return
View
10 commands/lrem.md
@@ -1,6 +1,6 @@
-Removes the first `count` occurrences of elements equal to `value` from the
-list stored at `key`. The `count` argument influences the operation in the
-following ways:
+Removes the first `count` occurrences of elements equal to `value` from the list
+stored at `key`. The `count` argument influences the operation in the following
+ways:
* `count > 0`: Remove elements equal to `value` moving from head to tail.
* `count < 0`: Remove elements equal to `value` moving from tail to head.
@@ -9,8 +9,8 @@ following ways:
For example, `LREM list -2 "hello"` will remove the last two occurrences of
`"hello"` in the list stored at `list`.
-Note that non-existing keys are treated like empty lists, so when `key` does
-not exist, the command will always return `0`.
+Note that non-existing keys are treated like empty lists, so when `key` does not
+exist, the command will always return `0`.
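The `count` semantics can be modeled in a few lines of Python (the `lrem`
helper is an illustrative model, not the Redis implementation):

```python
def lrem(lst, count, value):
    """Model LREM: remove up to |count| occurrences of value, scanning
    head-to-tail when count > 0, tail-to-head when count < 0, and all
    occurrences when count == 0. Returns the number removed."""
    indices = [i for i, v in enumerate(lst) if v == value]
    if count > 0:
        indices = indices[:count]       # first |count| from the head
    elif count < 0:
        indices = indices[count:]       # last |count| from the tail
    for i in reversed(indices):         # delete back-to-front
        del lst[i]
    return len(indices)

l = ["hello", "x", "hello", "y", "hello"]
removed = lrem(l, -2, "hello")          # like: LREM list -2 "hello"
print(removed, l)                       # 2 ['hello', 'x', 'y']
```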
@return
View
8 commands/ltrim.md
@@ -1,6 +1,6 @@
Trim an existing list so that it will contain only the specified range of
-elements specified. Both `start` and `stop` are zero-based indexes, where `0`
-is the first element of the list (the head), `1` the next element and so on.
+elements. Both `start` and `stop` are zero-based indexes, where `0` is
+the first element of the list (the head), `1` the next element and so on.
For example: `LTRIM foobar 0 2` will modify the list stored at `foobar` so that
only the first three elements of the list will remain.
@@ -11,8 +11,8 @@ element and so on.
Out of range indexes will not produce an error: if `start` is larger than the
end of the list, or `start > end`, the result will be an empty list (which
-causes `key` to be removed). If `end` is larger than the end of the list,
-Redis will treat it like the last element of the list.
+causes `key` to be removed). If `end` is larger than the end of the list, Redis
+will treat it like the last element of the list.
A common use of `LTRIM` is together with `LPUSH`/`RPUSH`. For example:
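A rough Python model of that capped-list idiom (the `lpush` and `ltrim`
helpers over a plain dict are hypothetical, not the Redis implementation):

```python
def lpush(store, key, *values):
    """Model LPUSH: each value is inserted at the head in turn."""
    store.setdefault(key, [])
    for v in values:                 # leftmost value ends up at the head
        store[key].insert(0, v)

def ltrim(store, key, start, stop):
    """Model LTRIM with non-negative indexes; an out-of-range stop is
    clamped to the last element, as described above."""
    lst = store.get(key, [])
    stop = len(lst) - 1 if stop >= len(lst) else stop
    store[key] = lst[start:stop + 1]

db = {}
for i in range(10):
    lpush(db, "log", f"event-{i}")
    ltrim(db, "log", 0, 4)           # keep only the 5 most recent entries
print(db["log"])                     # 5 newest entries, head first
```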
View
6 commands/mget.md
@@ -1,6 +1,6 @@
-Returns the values of all specified keys. For every key that does not hold a string value
-or does not exist, the special value `nil` is returned.
-Because of this, the operation never fails.
+Returns the values of all specified keys. For every key that does not hold a
+string value or does not exist, the special value `nil` is returned. Because of
+this, the operation never fails.
@return
View
35 commands/migrate.md
@@ -1,20 +1,35 @@
-Atomically transfer a key from a source Redis instance to a destination Redis instance. On success the key is deleted from the original instance and is guaranteed to exist in the target instance.
-
-The command is atomic and blocks the two instances for the time required to transfer the key, at any given time the key will appear to exist in a given instance or in the other instance, unless a timeout error occurs.
-
-The command internally uses `DUMP` to generate the serialized version of the key value, and `RESTORE` in order to synthesize the key in the target instance.
-The source instance acts as a client for the target instance. If the target instance returns OK to the `RESTORE` command, the source instance deletes the key using `DEL`.
-
-The timeout specifies the maximum idle time in any moment of the communication with the destination instance in milliseconds. This means that the operation does not need to be completed within the specified amount of milliseconds, but that the transfer should make progresses without blocking for more than the specified amount of milliseconds.
+Atomically transfer a key from a source Redis instance to a destination Redis
+instance. On success the key is deleted from the original instance and is
+guaranteed to exist in the target instance.
+
+The command is atomic and blocks the two instances for the time required to
+transfer the key; at any given time the key will appear to exist in one
+instance or the other, unless a timeout error occurs.
+
+The command internally uses `DUMP` to generate the serialized version of the key
+value, and `RESTORE` in order to synthesize the key in the target instance. The
+source instance acts as a client for the target instance. If the target instance
+returns OK to the `RESTORE` command, the source instance deletes the key using
+`DEL`.
+
+The timeout specifies the maximum idle time in any moment of the communication
+with the destination instance in milliseconds. This means that the operation
+does not need to be completed within the specified amount of milliseconds, but
+that the transfer should make progress without blocking for more than the
+specified amount of milliseconds.
`MIGRATE` needs to perform I/O operations and to honor the specified timeout. When there is an I/O error during the transfer or if the timeout is reached, the operation is aborted and the special error -`IOERR` is returned. When this happens the following two cases are possible:
* The key may be on both the instances.
* The key may be only in the source instance.
-It is not possible for the key to get lost in the event of a timeout, but the client calling `MIGRATE`, in the event of a timeout error, should check if the key is *also* present in the target instance and act accordingly.
+It is not possible for the key to get lost in the event of a timeout, but the
+client calling `MIGRATE`, in the event of a timeout error, should check if the
+key is *also* present in the target instance and act accordingly.
-When any other error is returned (starting with `ERR`) `MIGRATE` guarantees that the key is still only present in the originating instance (unless a key with the same name was also *already* present on the target instance).
+When any other error is returned (starting with `ERR`) `MIGRATE` guarantees that
+the key is still only present in the originating instance (unless a key with the
+same name was also *already* present on the target instance).
On success OK is returned.
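The DUMP/RESTORE/DEL sequence can be sketched with two mock in-memory
instances (everything here, `MockInstance` and the pickle payload included, is
illustrative; Redis uses its own serialization format and error strings):

```python
import pickle

class MockInstance:
    """A stand-in for a Redis instance; dump/restore mimic the DUMP and
    RESTORE steps MIGRATE performs internally."""
    def __init__(self):
        self.data = {}

    def dump(self, key):
        return pickle.dumps(self.data[key])  # Redis uses its own format

    def restore(self, key, payload):
        if key in self.data:
            return "BUSYKEY"                 # target refuses to overwrite
        self.data[key] = pickle.loads(payload)
        return "OK"

def migrate(source, target, key):
    payload = source.dump(key)               # DUMP on the source
    if target.restore(key, payload) != "OK":
        raise RuntimeError("ERR target key exists")
    del source.data[key]                     # DEL only after RESTORE succeeds
    return "OK"

src, dst = MockInstance(), MockInstance()
src.data["greeting"] = "hello"
print(migrate(src, dst, "greeting"))         # OK
print("greeting" in src.data, dst.data["greeting"])  # False hello
```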
View
18 commands/monitor.md
@@ -3,9 +3,9 @@ processed by the Redis server. It can help in understanding what is
happening to the database. This command can both be used via `redis-cli`
and via `telnet`.
-The ability to see all the requests processed by the server is useful in
-order to spot bugs in an application both when using Redis as a database
-and as a distributed caching system.
+The ability to see all the requests processed by the server is useful in order
+to spot bugs in an application both when using Redis as a database and as a
+distributed caching system.
$ redis-cli monitor
1339518083.107412 [0 127.0.0.1:60866] "keys" "*"
@@ -39,9 +39,9 @@ Manually issue the `QUIT` command to stop a `MONITOR` stream running via
## Cost of running `MONITOR`
-Because `MONITOR` streams back **all** commands, its use comes at a
-cost. The following (totally unscientific) benchmark numbers illustrate
-what the cost of running `MONITOR` can be.
+Because `MONITOR` streams back **all** commands, its use comes at a cost. The
+following (totally unscientific) benchmark numbers illustrate what the cost of
+running `MONITOR` can be.
Benchmark result **without** `MONITOR` running:
@@ -62,9 +62,9 @@ Benchmark result **with** `MONITOR` running (`redis-cli monitor >
GET: 45330.91 requests per second
INCR: 41771.09 requests per second
-In this particular case, running a single `MONITOR` client can reduce
-the throughput by more than 50%. Running more `MONITOR` clients will
-reduce throughput even more.
+In this particular case, running a single `MONITOR` client can reduce the
+throughput by more than 50%. Running more `MONITOR` clients will reduce
+throughput even more.
@return
View
4 commands/move.md
@@ -1,7 +1,7 @@
Move `key` from the currently selected database (see `SELECT`) to the specified
destination database. When `key` already exists in the destination database, or
-it does not exist in the source database, it does nothing. It is possible to
-use `MOVE` as a locking primitive because of this.
+it does not exist in the source database, it does nothing. It is possible to use
+`MOVE` as a locking primitive because of this.
@return
View
2 commands/mset.md
@@ -1,5 +1,5 @@
Sets the given keys to their respective values. `MSET` replaces existing values
-with new values, just as regular `SET`. See `MSETNX` if you don't want to
+with new values, just as regular `SET`. See `MSETNX` if you don't want to
overwrite existing values.
`MSET` is atomic, so all given keys are set at once. It is not possible for
View
4 commands/msetnx.md
@@ -2,8 +2,8 @@ Sets the given keys to their respective values. `MSETNX` will not perform any
operation at all even if just a single key already exists.
Because of this semantic `MSETNX` can be used in order to set different keys
-representing different fields of an unique logic object in a way that
-ensures that either all the fields or none at all are set.
+representing different fields of a unique logical object in a way that ensures
+that either all the fields or none at all are set.
`MSETNX` is atomic, so all given keys are set at once. It is not possible for
clients to see that some of the keys were updated while others are unchanged.
View
4 commands/multi.md
@@ -1,5 +1,5 @@
-Marks the start of a [transaction][transactions]
-block. Subsequent commands will be queued for atomic execution using
+Marks the start of a [transaction][transactions] block. Subsequent commands will
+be queued for atomic execution using
`EXEC`.
[transactions]: /topics/transactions
View
7 commands/object.md
@@ -18,7 +18,9 @@ Objects can be encoded in different ways:
* Hashes can be encoded as `zipmap` or `hashtable`. The `zipmap` is a special encoding used for small hashes.
* Sorted Sets can be encoded as `ziplist` or `skiplist` format. As for the List type small sorted sets can be specially encoded using `ziplist`, while the `skiplist` encoding is the one that works with sorted sets of any size.
-All the specially encoded types are automatically converted to the general type once you perform an operation that makes it no possible for Redis to retain the space saving encoding.
+All the specially encoded types are automatically converted to the general type
+once you perform an operation that makes it impossible for Redis to retain the
+space saving encoding.
@return
@@ -40,7 +42,8 @@ If the object you try to inspect is missing, a null bulk reply is returned.
redis> object idletime mylist
(integer) 10