Start every sentence on a new line

commit 339e5130b2f930a5986b8927ad8bb39a261e59ad 1 parent 99169cf
@pietern authored
Showing with 922 additions and 745 deletions.
  1. +11 −10 commands/append.md
  2. +5 −4 commands/auth.md
  3. +10 −6 commands/bgrewriteaof.md
  4. +6 −4 commands/bgsave.md
  5. +14 −11 commands/bitcount.md
  6. +5 −4 commands/bitop.md
  7. +28 −23 commands/blpop.md
  8. +5 −4 commands/brpop.md
  9. +5 −4 commands/brpoplpush.md
  10. +9 −8 commands/config get.md
  11. +9 −8 commands/config set.md
  12. +2 −2 commands/debug object.md
  13. +2 −2 commands/debug segfault.md
  14. +5 −4 commands/decr.md
  15. +5 −4 commands/decrby.md
  16. +2 −1  commands/del.md
  17. +9 −6 commands/dump.md
  18. +157 −128 commands/eval.md
  19. +44 −38 commands/expire.md
  20. +3 −2 commands/expireat.md
  21. +2 −1  commands/flushall.md
  22. +2 −1  commands/flushdb.md
  23. +4 −3 commands/get.md
  24. +4 −3 commands/getbit.md
  25. +4 −3 commands/getrange.md
  26. +5 −4 commands/getset.md
  27. +6 −5 commands/hdel.md
  28. +3 −3 commands/hgetall.md
  29. +2 −1  commands/hincrby.md
  30. +4 −4 commands/hincrbyfloat.md
  31. +3 −2 commands/hmset.md
  32. +3 −3 commands/hset.md
  33. +3 −2 commands/hsetnx.md
  34. +38 −31 commands/incr.md
  35. +5 −4 commands/incrby.md
  36. +6 −5 commands/incrbyfloat.md
  37. +13 −9 commands/info.md
  38. +10 −7 commands/keys.md
  39. +4 −4 commands/lastsave.md
  40. +6 −5 commands/lindex.md
  41. +3 −3 commands/llen.md
  42. +12 −10 commands/lpush.md
  43. +3 −2 commands/lpushx.md
  44. +16 −13 commands/lrange.md
  45. +2 −2 commands/lrem.md
  46. +2 −2 commands/lset.md
  47. +13 −9 commands/ltrim.md
  48. +4 −3 commands/mget.md
  49. +13 −12 commands/migrate.md
  50. +9 −8 commands/monitor.md
  51. +4 −3 commands/move.md
  52. +6 −5 commands/mset.md
  53. +6 −4 commands/msetnx.md
  54. +2 −2 commands/multi.md
  55. +20 −15 commands/object.md
  56. +3 −2 commands/ping.md
  57. +3 −2 commands/punsubscribe.md
  58. +3 −2 commands/quit.md
  59. +4 −3 commands/rename.md
  60. +2 −2 commands/renamenx.md
  61. +2 −2 commands/restore.md
  62. +21 −17 commands/rpoplpush.md
  63. +12 −10 commands/rpush.md
  64. +3 −2 commands/rpushx.md
  65. +6 −5 commands/sadd.md
  66. +5 −4 commands/save.md
  67. +3 −3 commands/script exists.md
  68. +7 −6 commands/script kill.md
  69. +4 −4 commands/script load.md
  70. +2 −2 commands/select.md
  71. +2 −2 commands/set.md
  72. +15 −12 commands/setbit.md
  73. +5 −5 commands/setex.md
  74. +21 −15 commands/setnx.md
  75. +16 −15 commands/setrange.md
  76. +11 −8 commands/shutdown.md
  77. +3 −3 commands/sinter.md
  78. +8 −7 commands/slaveof.md
  79. +22 −16 commands/slowlog.md
  80. +9 −6 commands/smove.md
  81. +42 −36 commands/sort.md
  82. +6 −5 commands/srem.md
  83. +2 −2 commands/strlen.md
  84. +3 −3 commands/ttl.md
  85. +3 −3 commands/type.md
  86. +3 −2 commands/unsubscribe.md
  87. +10 −7 commands/zadd.md
  88. +7 −6 commands/zincrby.md
  89. +7 −6 commands/zinterstore.md
  90. +16 −13 commands/zrange.md
  91. +12 −11 commands/zrangebyscore.md
  92. +3 −2 commands/zrank.md
  93. +5 −4 commands/zrem.md
  94. +7 −5 commands/zremrangebyrank.md
  95. +2 −2 commands/zrevrange.md
  96. +3 −3 commands/zrevrangebyscore.md
  97. +3 −2 commands/zrevrank.md
  98. +12 −11 commands/zunionstore.md
  99. +6 −1 remarkdown.rb
21 commands/append.md
@@ -1,6 +1,7 @@
If `key` already exists and is a string, this command appends the `value` at the
-end of the string. If `key` does not exist it is created and set as an empty
-string, so `APPEND` will be similar to `SET` in this special case.
+end of the string.
+If `key` does not exist it is created and set as an empty string, so `APPEND`
+will be similar to `SET` in this special case.
@return
@@ -17,24 +18,24 @@ string, so `APPEND` will be similar to `SET` in this special case.
## Pattern: Time series
the `APPEND` command can be used to create a very compact representation of a
-list of fixed-size samples, usually referred as _time series_. Every time a new
-sample arrives we can store it using the command
+list of fixed-size samples, usually referred as _time series_.
+Every time a new sample arrives we can store it using the command
APPEND timeseries "fixed-size sample"
Accessing individual elements in the time series is not hard:
* `STRLEN` can be used in order to obtain the number of samples.
-* `GETRANGE` allows for random access of elements. If our time series have an
- associated time information we can easily implement a binary search to get
- range combining `GETRANGE` with the Lua scripting engine available in Redis
- 2.6.
+* `GETRANGE` allows for random access of elements.
+ If our time series have an associated time information we can easily implement
+ a binary search to get range combining `GETRANGE` with the Lua scripting
+ engine available in Redis 2.6.
* `SETRANGE` can be used to overwrite an existing time serie.
The limitations of this pattern is that we are forced into an append-only mode
of operation, there is no way to cut the time series to a given size easily
-because Redis currently lacks a command able to trim string objects. However the
-space efficiency of time series stored in this way is remarkable.
+because Redis currently lacks a command able to trim string objects.
+However the space efficiency of time series stored in this way is remarkable.
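A minimal sketch of this pattern, assuming the redis-py client, a local server, and 8-byte samples (the helper names and sample size are illustrative only):

    import struct
    import redis

    r = redis.Redis()          # assumes a Redis server on localhost:6379
    SAMPLE_SIZE = 8            # one fixed-size sample = one big-endian double (assumption)

    def record(value):
        # APPEND one fixed-size sample to the time series string.
        r.append("timeseries", struct.pack(">d", value))

    def sample_count():
        # STRLEN divided by the sample size gives the number of samples.
        return r.strlen("timeseries") // SAMPLE_SIZE

    def sample_at(i):
        # GETRANGE gives random access to the i-th sample.
        raw = r.getrange("timeseries", i * SAMPLE_SIZE, (i + 1) * SAMPLE_SIZE - 1)
        return struct.unpack(">d", raw)[0]

    record(21.5)
    record(22.1)
    print(sample_count(), sample_at(1))   # 2 22.1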
Hint: it is possible to switch to a different key based on the current Unix
time, in this way it is possible to have just a relatively small amount of
9 commands/auth.md
@@ -1,10 +1,11 @@
-Request for authentication in a password protected Redis server. Redis can be
-instructed to require a password before allowing clients to execute commands.
+Request for authentication in a password protected Redis server.
+Redis can be instructed to require a password before allowing clients to execute
+commands.
This is done using the `requirepass` directive in the configuration file.
If `password` matches the password in the configuration file, the server replies
-with the `OK` status code and starts accepting commands. Otherwise, an error is
-returned and the clients needs to try a new password.
+with the `OK` status code and starts accepting commands.
+Otherwise, an error is returned and the clients needs to try a new password.
**Note**: because of the high performance nature of Redis, it is possible to try
a lot of passwords in parallel in very short time, so make sure to generate a
16 commands/bgrewriteaof.md
@@ -1,18 +1,22 @@
-Instruct Redis to start an [Append Only File][tpaof] rewrite process. The
-rewrite will create a small optimized version of the current Append Only File.
+Instruct Redis to start an [Append Only File][tpaof] rewrite process.
+The rewrite will create a small optimized version of the current Append Only
+File.
[tpaof]: /topics/persistence#append-only-file
If `BGREWRITEAOF` fails, no data gets lost as the old AOF will be untouched.
The rewrite will be only triggered by Redis if there is not already a background
-process doing persistence. Specifically:
+process doing persistence.
+Specifically:
* If a Redis child is creating a snapshot on disk, the AOF rewrite is
_scheduled_ but not started until the saving child producing the RDB file
- terminates. In this case the `BGREWRITEAOF` will still return an OK code, but
- with an appropriate message. You can check if an AOF rewrite is scheduled
- looking at the `INFO` command starting from Redis 2.6.
+ terminates.
+ In this case the `BGREWRITEAOF` will still return an OK code, but with an
+ appropriate message.
+ You can check if an AOF rewrite is scheduled looking at the `INFO` command
+ starting from Redis 2.6.
* If an AOF rewrite is already in progress the command returns an error and no
AOF rewrite will be scheduled for a later time.
10 commands/bgsave.md
@@ -1,7 +1,9 @@
-Save the DB in background. The OK code is immediately returned. Redis forks,
-the parent continues to server the clients, the child saves the DB on disk
-then exit. A client my be able to check if the operation succeeded using the
-`LASTSAVE` command.
+Save the DB in background.
+The OK code is immediately returned.
+Redis forks, the parent continues to server the clients, the child saves the DB
+on disk then exit.
+A client my be able to check if the operation succeeded using the `LASTSAVE`
+command.
Please refer to the [persistence documentation][tp] for detailed information.
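A hedged sketch of the LASTSAVE check mentioned above, using the redis-py client (a local server is assumed):

    import redis

    r = redis.Redis()
    before = r.lastsave()      # time of the last successful save
    r.bgsave()                 # returns immediately; the child saves in the background
    # Some time later: a newer LASTSAVE timestamp means the background save completed.
    completed = r.lastsave() > before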
25 commands/bitcount.md
@@ -1,8 +1,8 @@
Count the number of set bits (population counting) in a string.
-By default all the bytes contained in the string are examined. It is possible
-to specify the counting operation only in an interval passing the additional
-arguments _start_ and _end_.
+By default all the bytes contained in the string are examined.
+It is possible to specify the counting operation only in an interval passing the
+additional arguments _start_ and _end_.
Like for the `GETRANGE` command start and end can contain negative values in
order to index bytes starting from the end of the string, where -1 is the last
@@ -27,13 +27,15 @@ The number of bits set to 1.
## Pattern: real time metrics using bitmaps
Bitmaps are a very space efficient representation of certain kinds of
-information. One example is a web application that needs the history of user
-visits, so that for instance it is possible to determine what users are good
-targets of beta features, or for any other purpose.
+information.
+One example is a web application that needs the history of user visits, so that
+for instance it is possible to determine what users are good targets of beta
+features, or for any other purpose.
-Using the `SETBIT` command this is trivial to accomplish, identifying every
-day with a small progressive integer. For instance day 0 is the first day the
-application was put online, day 1 the next day, and so forth.
+Using the `SETBIT` command this is trivial to accomplish, identifying every day
+with a small progressive integer.
+For instance day 0 is the first day the application was put online, day 1 the
+next day, and so forth.
Every time an user performs a page view, the application can register that in
the current day the user visited the web site using the `SETBIT` command setting
@@ -52,8 +54,9 @@ bitmaps][hbgc212fermurb]".
In the above example of counting days, even after 10 years the application is
online we still have just `365*10` bits of data per user, that is just 456 bytes
-per user. With this amount of data `BITCOUNT` is still as fast as any other O(1)
-Redis command like `GET` or `INCR`.
+per user.
+With this amount of data `BITCOUNT` is still as fast as any other O(1) Redis
+command like `GET` or `INCR`.
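The daily-visit bitmap idea above, as a redis-py sketch; the `visits:<user>` key scheme and the day 0 date are assumptions, not part of the documentation:

    import datetime
    import redis

    r = redis.Redis()
    DAY0 = datetime.date(2012, 1, 1)   # assumed launch date, i.e. "day 0"

    def register_visit(user_id, day=None):
        day = day or datetime.date.today()
        # SETBIT marks "this user visited on this day" with a single bit.
        r.setbit("visits:%d" % user_id, (day - DAY0).days, 1)

    def days_visited(user_id):
        # BITCOUNT over the whole string = number of days with at least one visit.
        return r.bitcount("visits:%d" % user_id)

    register_visit(42)
    print(days_visited(42))   # 1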
When the bitmap is big, there are two alternatives:
9 commands/bitop.md
@@ -41,8 +41,9 @@ size of the longest input string.
## Pattern: real time metrics using bitmaps
`BITOP` is a good complement to the pattern documented in the `BITCOUNT` command
-documentation. Different bitmaps can be combined in order to obtain a target
-bitmap where to perform the population counting operation.
+documentation.
+Different bitmaps can be combined in order to obtain a target bitmap where to
+perform the population counting operation.
See the article called "[Fast easy realtime metrics using Redis
bitmaps][hbgc212fermurb]" for an interesting use cases.
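For illustration, combining two daily bitmaps before counting could look like this (a sketch with redis-py; the key names are placeholders, and in practice the destination key would live on a slave as suggested below):

    import redis

    r = redis.Redis()
    # Users active on *both* days: AND the two daily bitmaps into a destination
    # key, then run the population count on the result.
    r.bitop("AND", "active:both", "active:2012-07-01", "active:2012-07-02")
    print(r.bitcount("active:both"))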
@@ -51,8 +52,8 @@ bitmaps][hbgc212fermurb]" for an interesting use cases.
## Performance considerations
-`BITOP` is a potentially slow command as it runs in O(N) time. Care should be
-taken when running it against long input strings.
+`BITOP` is a potentially slow command as it runs in O(N) time.
+Care should be taken when running it against long input strings.
For real time metrics and statistics involving large inputs a good approach is
to use a slave (with read-only option disabled) where to perform the bit-wise
51 commands/blpop.md
@@ -1,7 +1,8 @@
-`BLPOP` is a blocking list pop primitive. It is the blocking version of `LPOP`
-because it blocks the connection when there are no elements to pop from any of
-the given lists. An element is popped from the head of the first list that is
-non-empty, with the given keys being checked in the order that they are given.
+`BLPOP` is a blocking list pop primitive.
+It is the blocking version of `LPOP` because it blocks the connection when there
+are no elements to pop from any of the given lists.
+An element is popped from the head of the first list that is non-empty, with the
+given keys being checked in the order that they are given.
## Non-blocking behavior
@@ -9,9 +10,10 @@ When `BLPOP` is called, if at least one of the specified keys contain a
non-empty list, an element is popped from the head of the list and returned to
the caller together with the `key` it was popped from.
-Keys are checked in the order that they are given. Let's say that the key
-`list1` doesn't exist and `list2` and `list3` hold non-empty lists. Consider the
-following command:
+Keys are checked in the order that they are given.
+Let's say that the key `list1` doesn't exist and `list2` and `list3` hold
+non-empty lists.
+Consider the following command:
BLPOP list1 list2 list3 0
@@ -32,27 +34,29 @@ the client will unblock returning a `nil` multi-bulk value when the specified
timeout has expired without a push operation against at least one of the
specified keys.
-The timeout argument is interpreted as an integer value. A timeout of zero can
-be used to block indefinitely.
+The timeout argument is interpreted as an integer value.
+A timeout of zero can be used to block indefinitely.
## Multiple clients blocking for the same keys
-Multiple clients can block for the same key. They are put into a queue, so the
-first to be served will be the one that started to wait earlier, in a first-
-`!BLPOP` first-served fashion.
+Multiple clients can block for the same key.
+They are put into a queue, so the first to be served will be the one that
+started to wait earlier, in a first- `!BLPOP` first-served fashion.
## `!BLPOP` inside a `!MULTI` / `!EXEC` transaction
`BLPOP` can be used with pipelining (sending multiple commands and reading the
replies in batch), but it does not make sense to use `BLPOP` inside a `MULTI` /
-`EXEC` block. This would require blocking the entire server in order to execute
-the block atomically, which in turn does not allow other clients to perform a
-push operation.
+`EXEC` block.
+This would require blocking the entire server in order to execute the block
+atomically, which in turn does not allow other clients to perform a push
+operation.
The behavior of `BLPOP` inside `MULTI` / `EXEC` when the list is empty is to
-return a `nil` multi-bulk reply, which is the same thing that happens when
-the timeout is reached. If you like science fiction, think of time flowing at
-infinite speed inside a `MULTI` / `EXEC` block.
+return a `nil` multi-bulk reply, which is the same thing that happens when the
+timeout is reached.
+If you like science fiction, think of time flowing at infinite speed inside a
+`MULTI` / `EXEC` block.
@return
@@ -76,11 +80,12 @@ infinite speed inside a `MULTI` / `EXEC` block.
## Pattern: Event notification
Using blocking list operations it is possible to mount different blocking
-primitives. For instance for some application you may need to block waiting for
-elements into a Redis Set, so that as far as a new element is added to the Set,
-it is possible to retrieve it without resort to polling. This would require
-a blocking version of `SPOP` that is not available, but using blocking list
-operations we can easily accomplish this task.
+primitives.
+For instance for some application you may need to block waiting for elements
+into a Redis Set, so that as far as a new element is added to the Set, it is
+possible to retrieve it without resort to polling.
+This would require a blocking version of `SPOP` that is not available, but using
+blocking list operations we can easily accomplish this task.
The consumer will do:
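A hedged redis-py sketch of both sides of this pattern; the key names are placeholders rather than the documentation's own snippet:

    import redis

    r = redis.Redis()

    def producer(element):
        # Add the element to the Set, then push a notification onto a list.
        if r.sadd("myset", element):
            r.lpush("myset:notify", element)

    def consumer():
        # Block until a notification arrives (timeout=0 blocks indefinitely),
        # then take an element from the Set without polling.
        r.blpop("myset:notify", timeout=0)
        return r.spop("myset")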
9 commands/brpop.md
@@ -1,7 +1,8 @@
-`BRPOP` is a blocking list pop primitive. It is the blocking version of `RPOP`
-because it blocks the connection when there are no elements to pop from any of
-the given lists. An element is popped from the tail of the first list that is
-non-empty, with the given keys being checked in the order that they are given.
+`BRPOP` is a blocking list pop primitive.
+It is the blocking version of `RPOP` because it blocks the connection when there
+are no elements to pop from any of the given lists.
+An element is popped from the tail of the first list that is non-empty, with the
+given keys being checked in the order that they are given.
See the [BLPOP documentation][cb] for the exact semantics, since `BRPOP` is
identical to `BLPOP` with the only difference being that it pops elements from
9 commands/brpoplpush.md
@@ -1,7 +1,8 @@
-`BRPOPLPUSH` is the blocking variant of `RPOPLPUSH`. When `source` contains
-elements, this command behaves exactly like `RPOPLPUSH`. When `source` is empty,
-Redis will block the connection until another client pushes to it or until
-`timeout` is reached. A `timeout` of zero can be used to block indefinitely.
+`BRPOPLPUSH` is the blocking variant of `RPOPLPUSH`.
+When `source` contains elements, this command behaves exactly like `RPOPLPUSH`.
+When `source` is empty, Redis will block the connection until another client
+pushes to it or until `timeout` is reached.
+A `timeout` of zero can be used to block indefinitely.
See `RPOPLPUSH` for more information.
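A small sketch with redis-py (queue names are placeholders): block up to 30 seconds for an item and atomically move it to a processing list.

    import redis

    r = redis.Redis()
    item = r.brpoplpush("jobs", "jobs:processing", timeout=30)  # None if it times out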
17 commands/config get.md
@@ -1,14 +1,15 @@
The `CONFIG GET` command is used to read the configuration parameters of a
-running Redis server. Not all the configuration parameters are supported in
-Redis 2.4, while Redis 2.6 can read the whole configuration of a server using
-this command.
+running Redis server.
+Not all the configuration parameters are supported in Redis 2.4, while Redis 2.6
+can read the whole configuration of a server using this command.
The symmetric command used to alter the configuration at run time is `CONFIG
SET`.
-`CONFIG GET` takes a single argument, that is glob style pattern. All the
-configuration parameters matching this parameter are reported as a list of
-key-value pairs. Example:
+`CONFIG GET` takes a single argument, that is glob style pattern.
+All the configuration parameters matching this parameter are reported as a list
+of key-value pairs.
+Example:
redis> config get *max-*-entries*
1) "hash-max-zipmap-entries"
@@ -31,8 +32,8 @@ following important differences:
the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything
should be specified as a well formed 64 bit integer, in the base unit of the
configuration directive.
-* The save parameter is a single string of space separated integers. Every pair
- of integers represent a seconds/modifications threshold.
+* The save parameter is a single string of space separated integers.
+ Every pair of integers represent a seconds/modifications threshold.
For instance what in `redis.conf` looks like:
17 commands/config set.md
@@ -1,6 +1,7 @@
The `CONFIG SET` command is used in order to reconfigure the server at run time
-without the need to restart Redis. You can change both trivial parameters or
-switch from one to another persistence option using this command.
+without the need to restart Redis.
+You can change both trivial parameters or switch from one to another persistence
+option using this command.
The list of configuration parameters supported by `CONFIG SET` can be obtained
issuing a `CONFIG GET *` command, that is the symmetrical command used to obtain
@@ -20,8 +21,8 @@ following important differences:
the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything
should be specified as a well formed 64 bit integer, in the base unit of the
configuration directive.
-* The save parameter is a single string of space separated integers. Every pair
- of integers represent a seconds/modifications threshold.
+* The save parameter is a single string of space separated integers.
+ Every pair of integers represent a seconds/modifications threshold.
For instance what in `redis.conf` looks like:
@@ -33,8 +34,8 @@ and after 300 seconds if there are at least 10 changes to the datasets, should
be set using `CONFIG SET` as "900 1 300 10".
It is possible to switch persistence from RDB snapshotting to append only file
-(and the other way around) using the `CONFIG SET` command. For more information
-about how to do that please check [persistence page][tp].
+(and the other way around) using the `CONFIG SET` command.
+For more information about how to do that please check [persistence page][tp].
[tp]: /topics/persistence
@@ -49,5 +50,5 @@ options are not mutually exclusive.
@return
-@status-reply: `OK` when the configuration was set properly. Otherwise an error
-is returned.
+@status-reply: `OK` when the configuration was set properly.
+Otherwise an error is returned.
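A short redis-py sketch of the run-time reconfiguration described above (the parameter values are only examples):

    import redis

    r = redis.Redis()
    print(r.config_get("maxmemory*"))             # read settings matching a glob pattern
    r.config_set("maxmemory", 128 * 1024 * 1024)  # plain integer, no "128mb" shorthand
    # The save parameter is a single space-separated string of seconds/changes pairs.
    r.config_set("save", "900 1 300 10")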
4 commands/debug object.md
@@ -1,4 +1,4 @@
-`DEBUG OBJECT` is a debugging command that should not be used by clients. Check
-the `OBJECT` command instead.
+`DEBUG OBJECT` is a debugging command that should not be used by clients.
+Check the `OBJECT` command instead.
@status-reply
4 commands/debug segfault.md
@@ -1,4 +1,4 @@
-`DEBUG SEGFAULT` performs an invalid memory access that crashes Redis. It is
-used to simulate bugs during the development.
+`DEBUG SEGFAULT` performs an invalid memory access that crashes Redis.
+It is used to simulate bugs during the development.
@status-reply
9 commands/decr.md
@@ -1,7 +1,8 @@
-Decrements the number stored at `key` by one. If the key does not exist, it
-is set to `0` before performing the operation. An error is returned if the
-key contains a value of the wrong type or contains a string that can not be
-represented as integer. This operation is limited to **64 bit signed integers**.
+Decrements the number stored at `key` by one.
+If the key does not exist, it is set to `0` before performing the operation.
+An error is returned if the key contains a value of the wrong type or contains a
+string that can not be represented as integer.
+This operation is limited to **64 bit signed integers**.
See `INCR` for extra information on increment/decrement operations.
9 commands/decrby.md
@@ -1,7 +1,8 @@
-Decrements the number stored at `key` by `decrement`. If the key does not exist,
-it is set to `0` before performing the operation. An error is returned if the
-key contains a value of the wrong type or contains a string that can not be
-represented as integer. This operation is limited to 64 bit signed integers.
+Decrements the number stored at `key` by `decrement`.
+If the key does not exist, it is set to `0` before performing the operation.
+An error is returned if the key contains a value of the wrong type or contains a
+string that can not be represented as integer.
+This operation is limited to 64 bit signed integers.
See `INCR` for extra information on increment/decrement operations.
3  commands/del.md
@@ -1,4 +1,5 @@
-Removes the specified keys. A key is ignored if it does not exist.
+Removes the specified keys.
+A key is ignored if it does not exist.
@return
15 commands/dump.md
@@ -1,20 +1,23 @@
Serialize the value stored at key in a Redis-specific format and return it to
-the user. The returned value can be synthesized back into a Redis key using the
-`RESTORE` command.
+the user.
+The returned value can be synthesized back into a Redis key using the `RESTORE`
+command.
The serialization format is opaque and non-standard, however it has a few
semantical characteristics:
* It contains a 64bit checksum that is used to make sure errors will be
- detected. The `RESTORE` command makes sure to check the checksum before
- synthesizing a key using the serialized value.
+ detected.
+ The `RESTORE` command makes sure to check the checksum before synthesizing a
+ key using the serialized value.
* Values are encoded in the same format used by RDB.
* An RDB version is encoded inside the serialized value, so that different Redis
versions with incompatible RDB formats will refuse to process the serialized
value.
-The serialized value does NOT contain expire information. In order to capture
-the time to live of the current value the `PTTL` command should be used.
+The serialized value does NOT contain expire information.
+In order to capture the time to live of the current value the `PTTL` command
+should be used.
If `key` does not exist a nil bulk reply is returned.
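The DUMP / RESTORE round trip, with the TTL captured separately through PTTL as the text suggests; a redis-py sketch with placeholder key names:

    import redis

    r = redis.Redis()
    r.set("mykey", "hello")

    payload = r.dump("mykey")         # opaque RDB-encoded value, checksum included
    ttl_ms = max(r.pttl("mykey"), 0)  # DUMP carries no expire; fetch it separately

    # Re-create the value under another name, re-applying the TTL (0 = no expire).
    r.restore("mykey:copy", ttl_ms, payload)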
285 commands/eval.md
@@ -3,14 +3,14 @@
`EVAL` and `EVALSHA` are used to evaluate scripts using the Lua interpreter
built into Redis starting from version 2.6.0.
-The first argument of `EVAL` is a Lua 5.1 script. The script does not need to
-define a Lua function (and should not). It is just a Lua program that will run
-in the context of the Redis server.
+The first argument of `EVAL` is a Lua 5.1 script.
+The script does not need to define a Lua function (and should not).
+It is just a Lua program that will run in the context of the Redis server.
-The second argument of `EVAL` is the number of arguments that follows the
-script (starting from the third argument) that represent Redis key names. This
-arguments can be accessed by Lua using the `KEYS` global variable in the form of
-a one-based array (so `KEYS[1]`, `KEYS[2]`, ...).
+The second argument of `EVAL` is the number of arguments that follows the script
+(starting from the third argument) that represent Redis key names.
+This arguments can be accessed by Lua using the `KEYS` global variable in the
+form of a one-based array (so `KEYS[1]`, `KEYS[2]`, ...).
All the additional arguments should not represent key names and can be accessed
by Lua using the `ARGV` global variable, very similarly to what happens with
@@ -46,9 +46,9 @@ the arguments of a well formed Redis command:
> eval "return redis.call('set','foo','bar')" 0
OK
-The above script actually sets the key `foo` to the string `bar`. However it
-violates the `EVAL` command semantics as all the keys that the script uses
-should be passed using the KEYS array, in the following way:
+The above script actually sets the key `foo` to the string `bar`.
+However it violates the `EVAL` command semantics as all the keys that the script
+uses should be passed using the KEYS array, in the following way:
> eval "return redis.call('set',KEYS[1],'bar')" 1 foo
OK
@@ -57,11 +57,12 @@ The reason for passing keys in the proper way is that, before of `EVAL` all the
Redis commands could be analyzed before execution in order to establish what are
the keys the command will operate on.
-In order for this to be true for `EVAL` also keys must be explicit. This is
-useful in many ways, but especially in order to make sure Redis Cluster is able
-to forward your request to the appropriate cluster node (Redis Cluster is a
-work in progress, but the scripting feature was designed in order to play well
-with it). However this rule is not enforced in order to provide the user with
+In order for this to be true for `EVAL` also keys must be explicit.
+This is useful in many ways, but especially in order to make sure Redis Cluster
+is able to forward your request to the appropriate cluster node (Redis Cluster
+is a work in progress, but the scripting feature was designed in order to play
+well with it).
+However this rule is not enforced in order to provide the user with
opportunities to abuse the Redis single instance configuration, at the cost of
writing scripts not compatible with Redis Cluster.
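In client terms, the "keys must be explicit" rule looks like this (a redis-py sketch; the script and key are just examples):

    import redis

    r = redis.Redis()
    # Every key the script touches is declared up front (numkeys = 1), so a
    # cluster-aware client could route the call to the right node.
    r.eval("return redis.call('set', KEYS[1], ARGV[1])", 1, "foo", "bar")
    print(r.get("foo"))   # b'bar'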
@@ -71,16 +72,17 @@ protocol using a set of conversion rules.
## Conversion between Lua and Redis data types
Redis return values are converted into Lua data types when Lua calls a Redis
-command using call() or pcall(). Similarly Lua data types are converted into
-Redis protocol when a Lua script returns some value, so that scripts can control
-what `EVAL` will reply to the client.
+command using call() or pcall().
+Similarly Lua data types are converted into Redis protocol when a Lua script
+returns some value, so that scripts can control what `EVAL` will reply to the
+client.
This conversion between data types is designed in a way that if a Redis type is
converted into a Lua type, and then the result is converted back into a Redis
type, the result is the same as of the initial value.
-In other words there is a one to one conversion between Lua and Redis types. The
-following table shows you all the conversions rules:
+In other words there is a one to one conversion between Lua and Redis types.
+The following table shows you all the conversions rules:
**Redis to Lua** conversion table.
@@ -125,17 +127,17 @@ what the called command would return if called directly.
## Atomicity of scripts
-Redis uses the same Lua interpreter to run all the commands. Also Redis
-guarantees that a script is executed in an atomic way: no other script or Redis
-command will be executed while a script is being executed. This semantics is
-very similar to the one of `MULTI` / `EXEC`. From the point of view of all the
-other clients the effects of a script are either still not visible or already
-completed.
+Redis uses the same Lua interpreter to run all the commands.
+Also Redis guarantees that a script is executed in an atomic way: no other
+script or Redis command will be executed while a script is being executed.
+This semantics is very similar to the one of `MULTI` / `EXEC`.
+From the point of view of all the other clients the effects of a script are
+either still not visible or already completed.
-However this also means that executing slow scripts is not a good idea. It is
-not hard to create fast scripts, as the script overhead is very low, but if
-you are going to use slow scripts you should be aware that while the script is
-running no other client can execute commands since the server is busy.
+However this also means that executing slow scripts is not a good idea.
+It is not hard to create fast scripts, as the script overhead is very low, but
+if you are going to use slow scripts you should be aware that while the script
+is running no other client can execute commands since the server is busy.
## Error handling
@@ -157,10 +159,10 @@ object returned by `redis.pcall()`.
## Bandwidth and EVALSHA
-The `EVAL` command forces you to send the script body again and again. Redis
-does not need to recompile the script every time as it uses an internal caching
-mechanism, however paying the cost of the additional bandwidth may not be
-optimal in many contexts.
+The `EVAL` command forces you to send the script body again and again.
+Redis does not need to recompile the script every time as it uses an internal
+caching mechanism, however paying the cost of the additional bandwidth may not
+be optimal in many contexts.
On the other hand defining commands using a special command or via `redis.conf`
would be a problem for a few reasons:
@@ -177,7 +179,8 @@ In order to avoid the above three problems and at the same time don't incur in
the bandwidth penalty, Redis implements the `EVALSHA` command.
`EVALSHA` works exactly as `EVAL`, but instead of having a script as first
-argument it has the SHA1 sum of a script. The behavior is the following:
+argument it has the SHA1 sum of a script.
+The behavior is the following:
* If the server still remembers a script whose SHA1 sum was the one specified,
the script is executed.
@@ -198,8 +201,8 @@ Example:
The client library implementation can always optimistically send `EVALSHA` under
the hoods even when the client actually called `EVAL`, in the hope the script
-was already seen by the server. If the `NOSCRIPT` error is returned `EVAL` will
-be used instead.
+was already seen by the server.
+If the `NOSCRIPT` error is returned `EVAL` will be used instead.
Passing keys and arguments as `EVAL` additional arguments is also very useful in
this context as the script string remains constant and can be efficiently cached
@@ -207,64 +210,74 @@ by Redis.
## Script cache semantics
-Executed scripts are guaranteed to be in the script cache **forever**. This
-means that if an `EVAL` is performed against a Redis instance all the subsequent
-`EVALSHA` calls will succeed.
+Executed scripts are guaranteed to be in the script cache **forever**.
+This means that if an `EVAL` is performed against a Redis instance all the
+subsequent `EVALSHA` calls will succeed.
The only way to flush the script cache is by explicitly calling the SCRIPT FLUSH
command, that will _completely flush_ the scripts cache removing all the scripts
-executed so far. This is usually needed only when the instance is going to be
-instantiated for another customer or application in a cloud environment.
+executed so far.
+This is usually needed only when the instance is going to be instantiated for
+another customer or application in a cloud environment.
The reason why scripts can be cached for long time is that it is unlikely for
a well written application to have so many different scripts to create memory
-problems. Every script is conceptually like the implementation of a new command,
-and even a large application will likely have just a few hundreds of that. Even
-if the application is modified many times and scripts will change, still the
-memory used is negligible.
+problems.
+Every script is conceptually like the implementation of a new command, and even
+a large application will likely have just a few hundreds of that.
+Even if the application is modified many times and scripts will change, still
+the memory used is negligible.
The fact that the user can count on Redis not removing scripts is semantically a
-very good thing. For instance an application taking a persistent connection to
-Redis can stay sure that if a script was sent once it is still in memory, thus
-for instance can use EVALSHA against those scripts in a pipeline without the
-chance that an error will be generated since the script is not known (we'll see
-this problem in its details later).
+very good thing.
+For instance an application taking a persistent connection to Redis can stay
+sure that if a script was sent once it is still in memory, thus for instance can
+use EVALSHA against those scripts in a pipeline without the chance that an error
+will be generated since the script is not known (we'll see this problem in its
+details later).
## The SCRIPT command
Redis offers a SCRIPT command that can be used in order to control the scripting
-subsystem. SCRIPT currently accepts three different commands:
-
-* SCRIPT FLUSH. This command is the only way to force Redis to flush the scripts
- cache. It is mostly useful in a cloud environment where the same instance
- can be reassigned to a different user. It is also useful for testing client
- libraries implementations of the scripting feature.
-
-* SCRIPT EXISTS _sha1_ _sha2_... _shaN_. Given a list of SHA1 digests as
- arguments this command returns an array of 1 or 0, where 1 means the specific
- SHA1 is recognized as a script already present in the scripting cache, while
- 0 means that a script with this SHA1 was never seen before (or at least never
- seen after the latest SCRIPT FLUSH command).
-
-* SCRIPT LOAD _script_. This command registers the specified script in the
- Redis script cache. The command is useful in all the contexts where we want
- to make sure that `EVALSHA` will not fail (for instance during a pipeline or
- MULTI/EXEC operation), without the need to actually execute the script.
-
-* SCRIPT KILL. This command is the only wait to interrupt a long running script
- that reached the configured maximum execution time for scripts. The SCRIPT
- KILL command can only be used with scripts that did not modified the dataset
- during their execution (since stopping a read only script does not violate
- the scripting engine guaranteed atomicity). See the next sections for more
- information about long running scripts.
+subsystem.
+SCRIPT currently accepts three different commands:
+
+* SCRIPT FLUSH.
+ This command is the only way to force Redis to flush the scripts cache.
+ It is mostly useful in a cloud environment where the same instance can be
+ reassigned to a different user.
+ It is also useful for testing client libraries implementations of the
+ scripting feature.
+
+* SCRIPT EXISTS _sha1_ _sha2_... _shaN_.
+ Given a list of SHA1 digests as arguments this command returns an array of
+ 1 or 0, where 1 means the specific SHA1 is recognized as a script already
+ present in the scripting cache, while 0 means that a script with this SHA1
+ was never seen before (or at least never seen after the latest SCRIPT FLUSH
+ command).
+
+* SCRIPT LOAD _script_.
+ This command registers the specified script in the Redis script cache.
+ The command is useful in all the contexts where we want to make sure that
+ `EVALSHA` will not fail (for instance during a pipeline or MULTI/EXEC
+ operation), without the need to actually execute the script.
+
+* SCRIPT KILL.
+ This command is the only wait to interrupt a long running script that reached
+ the configured maximum execution time for scripts.
+ The SCRIPT KILL command can only be used with scripts that did not modified
+ the dataset during their execution (since stopping a read only script does not
+ violate the scripting engine guaranteed atomicity).
+ See the next sections for more information about long running scripts.
## Scripts as pure functions
A very important part of scripting is writing scripts that are pure functions.
Scripts executed in a Redis instance are replicated on slaves sending the same
-script, instead of the resulting commands. The same happens for the Append Only
-File. The reason is that scripts are much faster than sending commands one after
-the other to a Redis instance, so if the client is taking the master very busy
+script, instead of the resulting commands.
+The same happens for the Append Only File.
+The reason is that scripts are much faster than sending commands one after the
+other to a Redis instance, so if the client is taking the master very busy
sending scripts, turning this scripts into single commands for the slave / AOF
would result in too much bandwidth for the replication link or the Append Only
File (and also too much CPU since dispatching a command received via network
@@ -275,10 +288,11 @@ The only drawback with this approach is that scripts are required to have the
following property:
* The script always evaluates the same Redis _write_ commands with the same
- arguments given the same input data set. Operations performed by the script
- cannot depend on any hidden (non explicit) information or state that may
- change as script execution proceeds or between different executions of the
- script, nor can it depend on any external input from I/O devices.
+ arguments given the same input data set.
+ Operations performed by the script cannot depend on any hidden (non explicit)
+ information or state that may change as script execution proceeds or between
+ different executions of the script, nor can it depend on any external input
+ from I/O devices.
Things like using the system time, calling Redis random commands like
`RANDOMKEY`, or using Lua random number generator, could result into scripts
@@ -291,29 +305,31 @@ In order to enforce this behavior in scripts Redis does the following:
* Redis will block the script with an error if a script will call a Redis
command able to alter the data set **after** a Redis _random_ command like
- `RANDOMKEY`, `SRANDMEMBER`, `TIME`. This means that if a script is read only
- and does not modify the data set it is free to call those commands. Note that
- a _random command_ does not necessarily identifies a command that uses random
- numbers: any non deterministic command is considered a random command (the
- best example in this regard is the `TIME` command).
+ `RANDOMKEY`, `SRANDMEMBER`, `TIME`.
+ This means that if a script is read only and does not modify the data set it
+ is free to call those commands.
+ Note that a _random command_ does not necessarily identifies a command that
+ uses random numbers: any non deterministic command is considered a random
+ command (the best example in this regard is the `TIME` command).
* Redis commands that may return elements in random order, like `SMEMBERS`
(because Redis Sets are _unordered_) have a different behavior when called
from Lua, and undergone a silent lexicographical sorting filter before
- returning data to Lua scripts. So `redis.call("smembers",KEYS[1])` will always
- return the Set elements in the same order, while the same command invoked from
- normal clients may return different results even if the key contains exactly
- the same elements.
+ returning data to Lua scripts.
+ So `redis.call("smembers",KEYS[1])` will always return the Set elements in
+ the same order, while the same command invoked from normal clients may return
+ different results even if the key contains exactly the same elements.
* Lua pseudo random number generation functions `math.random` and
`math.randomseed` are modified in order to always have the same seed every
- time a new script is executed. This means that calling `math.random` will
- always generate the same sequence of numbers every time a script is executed
- if `math.randomseed` is not used.
+ time a new script is executed.
+ This means that calling `math.random` will always generate the same sequence
+ of numbers every time a script is executed if `math.randomseed` is not used.
-However the user is still able to write commands with random behaviors using
-the following simple trick. Imagine I want to write a Redis script that will
-populate a list with N random integers.
+However the user is still able to write commands with random behaviors using the
+following simple trick.
+Imagine I want to write a Redis script that will populate a list with N random
+integers.
I can start writing the following script, using a small Ruby program:
@@ -353,7 +369,8 @@ following elements:
In order to make it a pure function, but still making sure that every invocation
of the script will result in different random elements, we can simply add an
additional argument to the script, that will be used in order to seed the Lua
-pseudo random number generator. The new script will be like the following:
+pseudo random number generator.
+The new script will be like the following:
RandomPushScript = <<EOF
local i = tonumber(ARGV[1])
@@ -372,21 +389,23 @@ pseudo random number generator. The new script will be like the following:
What we are doing here is sending the seed of the PRNG as one of the arguments.
This way the script output will be the same given the same arguments, but we are
changing one of the argument at every invocation, generating the random seed
-client side. The seed will be propagated as one of the arguments both in the
-replication link and in the Append Only File, guaranteeing that the same changes
-will be generated when the AOF is reloaded or when the slave will process the
-script.
+client side.
+The seed will be propagated as one of the arguments both in the replication
+link and in the Append Only File, guaranteeing that the same changes will be
+generated when the AOF is reloaded or when the slave will process the script.
Note: an important part of this behavior is that the PRNG that Redis implements
as `math.random` and `math.randomseed` is guaranteed to have the same output
-regardless of the architecture of the system running Redis. 32 or 64 bit systems
-like big or little endian systems will still produce the same output.
+regardless of the architecture of the system running Redis.
+32 or 64 bit systems like big or little endian systems will still produce the
+same output.
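A possible client-side rendering of the seed trick (a sketch only; the script body is illustrative, not the RandomPushScript above):

    import random
    import redis

    r = redis.Redis()
    script = """
    math.randomseed(tonumber(ARGV[1]))
    local out = {}
    for i = 1, tonumber(ARGV[2]) do out[i] = math.random(1000) end
    return out
    """
    # The seed is generated client side, so the script stays a pure function:
    # the same (seed, count) arguments always produce the same reply.
    print(r.eval(script, 0, random.randint(0, 2**32 - 1), 5))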
## Global variables protection
Redis scripts are not allowed to create global variables, in order to avoid
-leaking data into the Lua state. If a script requires to take state across calls
-(a pretty uncommon need) it should use Redis keys instead.
+leaking data into the Lua state.
+If a script requires to take state across calls (a pretty uncommon need) it
+should use Redis keys instead.
When a global variable access is attempted the script is terminated and EVAL
returns with an error:
@@ -398,7 +417,8 @@ Accessing a _non existing_ global variable generates a similar error.
Using Lua debugging functionalities or other approaches like altering the meta
table used to implement global protections, in order to circumvent globals
-protection, is not hard. However it is hardly possible to do it accidentally.
+protection, is not hard.
+However it is hardly possible to do it accidentally.
If the user messes with the Lua global state, the consistency of AOF and
replication is not guaranteed: don't do it.
@@ -437,11 +457,12 @@ It is possible to write to the Redis log file from Lua scripts using the
* `redis.LOG_NOTICE`
* `redis.LOG_WARNING`
-They exactly correspond to the normal Redis log levels. Only logs emitted
-by scripting using a log level that is equal or greater than the currently
-configured Redis instance log level will be emitted.
+They exactly correspond to the normal Redis log levels.
+Only logs emitted by scripting using a log level that is equal or greater than
+the currently configured Redis instance log level will be emitted.
-The `message` argument is simply a string. Example:
+The `message` argument is simply a string.
+Example:
redis.log(redis.LOG_WARNING,"Something is wrong with this script.")
@@ -452,33 +473,39 @@ Will generate the following:
## Sandbox and maximum execution time
Scripts should never try to access the external system, like the file system,
-nor calling any other system call. A script should just do its work operating on
-Redis data and passed arguments.
+nor calling any other system call.
+A script should just do its work operating on Redis data and passed arguments.
Scripts are also subject to a maximum execution time (five seconds by default).
This default timeout is huge since a script should run usually in a sub
-millisecond amount of time. The limit is mostly needed in order to avoid
-problems when developing scripts that may loop forever for a programming error.
+millisecond amount of time.
+The limit is mostly needed in order to avoid problems when developing scripts
+that may loop forever for a programming error.
It is possible to modify the maximum time a script can be executed with
milliseconds precision, either via `redis.conf` or using the CONFIG GET / CONFIG
-SET command. The configuration parameter affecting max execution time is called
+SET command.
+The configuration parameter affecting max execution time is called
`lua-time-limit`.
When a script reaches the timeout it is not automatically terminated by Redis
since this violates the contract Redis has with the scripting engine to ensure
-that scripts are atomic in nature. Stopping a script half-way means to possibly
-leave the dataset with half-written data inside. For this reasons when a script
-executes for more than the specified time the following happens:
+that scripts are atomic in nature.
+Stopping a script half-way means to possibly leave the dataset with half-written
+data inside.
+For this reasons when a script executes for more than the specified time the
+following happens:
* Redis logs that a script that is running for too much time is still in
execution.
-* It starts accepting commands again from other clients, but will reply with
- a BUSY error to all the clients sending normal commands. The only allowed
- commands in this status are `SCRIPT KILL` and `SHUTDOWN NOSAVE`.
+* It starts accepting commands again from other clients, but will reply with a
+ BUSY error to all the clients sending normal commands.
+ The only allowed commands in this status are `SCRIPT KILL` and `SHUTDOWN
+ NOSAVE`.
* It is possible to terminate a script that executed only read-only commands
- using the `SCRIPT KILL` command. This does not violate the scripting semantic
- as no data was yet written on the dataset by the script.
+ using the `SCRIPT KILL` command.
+ This does not violate the scripting semantic as no data was yet written on the
+ dataset by the script.
* If the script already called write commands the only allowed command becomes
`SHUTDOWN NOSAVE` that stops the server not saving the current data set on
disk (basically the server is aborted).
@@ -487,8 +514,9 @@ executes for more than the specified time the following happens:
Care should be taken when executing `EVALSHA` in the context of a pipelined
request, since even in a pipeline the order of execution of commands must be
-guaranteed. If `EVALSHA` will return a `NOSCRIPT` error the command can not be
-reissued later otherwise the order of execution is violated.
+guaranteed.
+If `EVALSHA` will return a `NOSCRIPT` error the command can not be reissued
+later otherwise the order of execution is violated.
The client library implementation should take one of the following approaches:
@@ -496,5 +524,6 @@ The client library implementation should take one of the following approaches:
* Accumulate all the commands to send into the pipeline, then check for `EVAL`
commands and use the `SCRIPT EXISTS` command to check if all the scripts are
- already defined. If not add `SCRIPT LOAD` commands on top of the pipeline as
- required, and use `EVALSHA` for all the `EVAL` calls.
+ already defined.
+ If not add `SCRIPT LOAD` commands on top of the pipeline as required, and use
+ `EVALSHA` for all the `EVAL` calls.
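The optimistic EVALSHA approach described earlier, as a redis-py sketch (the script is only an example):

    import redis

    r = redis.Redis()
    script = "return redis.call('incr', KEYS[1])"
    sha = r.script_load(script)   # register once, reuse the SHA1 afterwards

    def run(key):
        try:
            return r.evalsha(sha, 1, key)
        except redis.exceptions.NoScriptError:
            # Script cache was flushed (or this is another server): fall back to
            # EVAL, which also re-populates the cache.
            return r.eval(script, 1, key)

    print(run("counter"))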
82 commands/expire.md
@@ -1,14 +1,15 @@
-Set a timeout on `key`. After the timeout has expired, the key will
-automatically be deleted. A key with an associated timeout is often said to be
-_volatile_ in Redis terminology.
-
-The timeout is cleared only when the key is removed using the `DEL` command
-or overwritten using the `SET` or `GETSET` commands. This means that all the
-operations that conceptually _alter_ the value stored at the key without
-replacing it with a new one will leave the timeout untouched. For instance,
-incrementing the value of a key with `INCR`, pushing a new value into a list
-with `LPUSH`, or altering the field value of a hash with `HSET` are all
-operations that will leave the timeout untouched.
+Set a timeout on `key`.
+After the timeout has expired, the key will automatically be deleted.
+A key with an associated timeout is often said to be _volatile_ in Redis
+terminology.
+
+The timeout is cleared only when the key is removed using the `DEL` command or
+overwritten using the `SET` or `GETSET` commands.
+This means that all the operations that conceptually _alter_ the value stored at
+the key without replacing it with a new one will leave the timeout untouched.
+For instance, incrementing the value of a key with `INCR`, pushing a new value
+into a list with `LPUSH`, or altering the field value of a hash with `HSET` are
+all operations that will leave the timeout untouched.
The timeout can also be cleared, turning the key back into a persistent key,
using the `PERSIST` command.
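A brief redis-py sketch of the behaviour described above, with a placeholder key:

    import redis

    r = redis.Redis()
    r.set("mykey", 10)
    r.expire("mykey", 120)   # the key becomes volatile: deleted after 120 seconds
    r.incr("mykey")          # altering the value does NOT touch the timeout...
    print(r.ttl("mykey"))    # ...so the TTL keeps counting down
    r.persist("mykey")       # clears the timeout, back to a persistent key
    print(r.ttl("mykey"))    # -1: no expire associated any more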
@@ -24,16 +25,17 @@ inherit all the characteristics of `Key_B`.
## Refreshing expires
It is possible to call `EXPIRE` using as argument a key that already has an
-existing expire set. In this case the time to live of a key is _updated_ to the
-new value. There are many useful applications for this, an example is documented
-in the _Navigation session_ pattern section below.
+existing expire set.
+In this case the time to live of a key is _updated_ to the new value.
+There are many useful applications for this, an example is documented in the
+_Navigation session_ pattern section below.
## Differences in Redis prior 2.1.3
In Redis versions prior **2.1.3** altering a key with an expire set using a
-command altering its value had the effect of removing the key entirely. This
-semantics was needed because of limitations in the replication layer that are
-now fixed.
+command altering its value had the effect of removing the key entirely.
+This semantics was needed because of limitations in the replication layer that
+are now fixed.
@return
@@ -55,10 +57,11 @@ now fixed.
Imagine you have a web service and you are interested in the latest N pages
_recently_ visited by your users, such that each adiacent page view was not
-performed more than 60 seconds after the previous. Conceptually you may think at
-this set of page views as a _Navigation session_ if your user, that may contain
-interesting information about what kind of products he or she is looking for
-currently, so that you can recommend related products.
+performed more than 60 seconds after the previous.
+Conceptually you may think at this set of page views as a _Navigation session_
+if your user, that may contain interesting information about what kind of
+products he or she is looking for currently, so that you can recommend related
+products.
You can easily model this pattern in Redis using the following strategy: every
time the user does a page view you call the following commands:
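In client terms that strategy might look like the following sketch (redis-py assumed; the key naming and the MULTI/EXEC pipeline are illustrative, not the documentation's literal commands):

    import redis

    r = redis.Redis()

    def record_page_view(user_id, url):
        key = "pageviews:user:%d" % user_id
        # Push the page and refresh the 60 second expire in one MULTI/EXEC block:
        # if the user is idle for more than 60 seconds the session key vanishes.
        with r.pipeline(transaction=True) as pipe:
            pipe.rpush(key, url)
            pipe.expire(key, 60)
            pipe.execute()

    record_page_view(1000, "http://example.com/index.html")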
@@ -79,14 +82,14 @@ using `RPUSH`.
## Keys with an expire
-Normally Redis keys are created without an associated time to live. The key will
-simply live forever, unless it is removed by the user in an explicit way, for
-instance using the `DEL` command.
+Normally Redis keys are created without an associated time to live.
+The key will simply live forever, unless it is removed by the user in an
+explicit way, for instance using the `DEL` command.
The `EXPIRE` family of commands is able to associate an expire to a given key,
-at the cost of some additional memory used by the key. When a key has an expire
-set, Redis will make sure to remove the key when the specified amount of time
-elapsed.
+at the cost of some additional memory used by the key.
+When a key has an expire set, Redis will make sure to remove the key when the
+specified amount of time elapsed.
The key time to live can be updated or entirely removed using the `EXPIRE` and
`PERSIST` command (or other strictly related commands).
@@ -101,12 +104,13 @@ Since Redis 2.6 the expire error is from 0 to 1 milliseconds.
## Expires and persistence
Keys expiring information is stored as absolute Unix timestamps (in milliseconds
-in case of Redis version 2.6 or greater). This means that the time is flowing
-even when the Redis instance is not active.
+in case of Redis version 2.6 or greater).
+This means that the time is flowing even when the Redis instance is not active.
-For expires to work well, the computer time must be taken stable. If you move an
-RDB file from two computers with a big desync in their clocks, funny things may
-happen (like all the keys loaded to be expired at loading time).
+For expires to work well, the computer time must be taken stable.
+If you move an RDB file from two computers with a big desync in their clocks,
+funny things may happen (like all the keys loaded to be expired at loading
+time).
Even running instances will always check the computer clock, so for instance if
you set a key with a time to live of 1000 seconds, and then set your computer
@@ -121,9 +125,10 @@ A key is actively expired simply when some client tries to access it, and the
key is found to be timed out.
Of course this is not enough as there are expired keys that will never be
-accessed again. This keys should be expired anyway, so periodically Redis test a
-few keys at random among keys with an expire set. All the keys that are already
-expired are deleted from the keyspace.
+accessed again.
+This keys should be expired anyway, so periodically Redis test a few keys at
+random among keys with an expire set.
+All the keys that are already expired are deleted from the keyspace.
Specifically this is what Redis does 10 times per second:
@@ -142,9 +147,10 @@ second divided by 4.
## How expires are handled in the replication link and AOF file
In order to obtain a correct behavior without sacrificing consistency, when a
-key expires, a `DEL` operation is synthesized in both the AOF file and gains
-all the attached slaves. This way the expiration process is centralized in the
-master instance, and there is no chance of consistency errors.
+key expires, a `DEL` operation is synthesized in both the AOF file and gains all
+the attached slaves.
+This way the expiration process is centralized in the master instance, and there
+is no chance of consistency errors.
However while the slaves connected to a master will not expire keys
independently (but will wait for the `DEL` coming from the master), they'll
5 commands/expireat.md
@@ -10,8 +10,9 @@ Please for the specific semantics of the command refer to the documentation of
## Background
`EXPIREAT` was introduced in order to convert relative timeouts to absolute
-timeouts for the AOF persistence mode. Of course, it can be used directly to
-specify that a given key should expire at a given time in the future.
+timeouts for the AOF persistence mode.
+Of course, it can be used directly to specify that a given key should expire at
+a given time in the future.
@return
3  commands/flushall.md
@@ -1,5 +1,6 @@
Delete all the keys of all the existing databases, not just the currently
-selected one. This command never fails.
+selected one.
+This command never fails.
@return
3  commands/flushdb.md
@@ -1,4 +1,5 @@
-Delete all the keys of the currently selected DB. This command never fails.
+Delete all the keys of the currently selected DB.
+This command never fails.
@return
7 commands/get.md
@@ -1,6 +1,7 @@
-Get the value of `key`. If the key does not exist the special value `nil` is
-returned. An error is returned if the value stored at `key` is not a string,
-because `GET` only handles string values.
+Get the value of `key`.
+If the key does not exist the special value `nil` is returned.
+An error is returned if the value stored at `key` is not a string, because `GET`
+only handles string values.
@return
7 commands/getbit.md
@@ -1,9 +1,10 @@
Returns the bit value at _offset_ in the string value stored at _key_.
When _offset_ is beyond the string length, the string is assumed to be a
-contiguous space with 0 bits. When _key_ does not exist it is assumed to be an
-empty string, so _offset_ is always out of range and the value is also assumed
-to be a contiguous space with 0 bits.
+contiguous space with 0 bits.
+When _key_ does not exist it is assumed to be an empty string, so _offset_ is
+always out of range and the value is also assumed to be a contiguous space with
+0 bits.
@return
7 commands/getrange.md
@@ -2,9 +2,10 @@
Redis versions `<= 2.0`.
Returns the substring of the string value stored at `key`, determined by the
-offsets `start` and `end` (both are inclusive). Negative offsets can be used in
-order to provide an offset starting from the end of the string. So -1 means the
-last character, -2 the penultimate and so forth.
+offsets `start` and `end` (both are inclusive).
+Negative offsets can be used in order to provide an offset starting from the end
+of the string.
+So -1 means the last character, -2 the penultimate and so forth.
The function handles out of range requests by limiting the resulting range to
the actual length of the string.
9 commands/getset.md
@@ -3,10 +3,11 @@ Returns an error when `key` exists but does not hold a string value.
## Design pattern
-`GETSET` can be used together with `INCR` for counting with atomic reset. For
-example: a process may call `INCR` against the key `mycounter` every time some
-event occurs, but from time to time we need to get the value of the counter and
-reset it to zero atomically. This can be done using `GETSET mycounter "0"`:
+`GETSET` can be used together with `INCR` for counting with atomic reset.
+For example: a process may call `INCR` against the key `mycounter` every time
+some event occurs, but from time to time we need to get the value of the counter
+and reset it to zero atomically.
+This can be done using `GETSET mycounter "0"`:
@cli
INCR mycounter
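
A minimal client-side sketch of the same reset pattern, assuming `redis-py` and
the `mycounter` key used above:

    import redis

    r = redis.Redis(decode_responses=True)
    r.incr("mycounter")                # some events happened
    r.incr("mycounter")
    old = r.getset("mycounter", "0")   # read and reset in a single atomic step
    print(old)                         # "2"; mycounter now starts again from 0
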
11 commands/hdel.md
@@ -1,6 +1,7 @@
-Removes the specified fields from the hash stored at `key`. Specified fields
-that do not exist within this hash are ignored. If `key` does not exist, it is
-treated as an empty hash and this command returns `0`.
+Removes the specified fields from the hash stored at `key`.
+Specified fields that do not exist within this hash are ignored.
+If `key` does not exist, it is treated as an empty hash and this command returns
+`0`.
@return
@@ -9,8 +10,8 @@ including specified but non existing fields.
@history
-* `>= 2.4`: Accepts multiple `field` arguments. Redis versions older than 2.4
- can only remove a field per call.
+* `>= 2.4`: Accepts multiple `field` arguments.
+ Redis versions older than 2.4 can only remove a field per call.
To remove multiple fields from a hash in an atomic fashion in earlier
versions, use a `MULTI` / `EXEC` block.
6 commands/hgetall.md
@@ -1,6 +1,6 @@
-Returns all fields and values of the hash stored at `key`. In the returned
-value, every field name is followed by its value, so the length of the reply is
-twice the size of the hash.
+Returns all fields and values of the hash stored at `key`.
+In the returned value, every field name is followed by its value, so the length
+of the reply is twice the size of the hash.
@return
3  commands/hincrby.md
@@ -1,5 +1,6 @@
Increments the number stored at `field` in the hash stored at `key` by
-`increment`. If `key` does not exist, a new key holding a hash is created.
+`increment`.
+If `key` does not exist, a new key holding a hash is created.
If `field` does not exist the value is set to `0` before the operation is
performed.
8 commands/hincrbyfloat.md
@@ -1,7 +1,7 @@
-Increment the specified `field` of an hash stored at `key`, and representing
-a floating point number, by the specified `increment`. If the field does not
-exist, it is set to `0` before performing the operation. An error is returned if
-one of the following conditions occur:
+Increment the specified `field`, representing a floating point number, of the
+hash stored at `key` by the specified `increment`.
+If the field does not exist, it is set to `0` before performing the operation.
+An error is returned if one of the following conditions occurs:
* The field contains a value of the wrong type (not a string).
* The current field content or the specified increment are not parsable as a
5 commands/hmset.md
@@ -1,6 +1,7 @@
Sets the specified fields to their respective values in the hash stored at
-`key`. This command overwrites any existing fields in the hash. If `key` does
-not exist, a new key holding a hash is created.
+`key`.
+This command overwrites any existing fields in the hash.
+If `key` does not exist, a new key holding a hash is created.
@return
6 commands/hset.md
@@ -1,6 +1,6 @@
-Sets `field` in the hash stored at `key` to `value`. If `key` does not exist, a
-new key holding a hash is created. If `field` already exists in the hash, it is
-overwritten.
+Sets `field` in the hash stored at `key` to `value`.
+If `key` does not exist, a new key holding a hash is created.
+If `field` already exists in the hash, it is overwritten.
@return
5 commands/hsetnx.md
@@ -1,6 +1,7 @@
Sets `field` in the hash stored at `key` to `value`, only if `field` does not
-yet exist. If `key` does not exist, a new key holding a hash is created. If
-`field` already exists, this operation has no effect.
+yet exist.
+If `key` does not exist, a new key holding a hash is created.
+If `field` already exists, this operation has no effect.
@return
69 commands/incr.md
@@ -1,11 +1,13 @@
-Increments the number stored at `key` by one. If the key does not exist, it
-is set to `0` before performing the operation. An error is returned if the
-key contains a value of the wrong type or contains a string that can not be
-represented as integer. This operation is limited to 64 bit signed integers.
+Increments the number stored at `key` by one.
+If the key does not exist, it is set to `0` before performing the operation.
+An error is returned if the key contains a value of the wrong type or contains a
+string that cannot be represented as an integer.
+This operation is limited to 64 bit signed integers.
**Note**: this is a string operation because Redis does not have a dedicated
-integer type. The the string stored at the key is interpreted as a base-10 **64
-bit signed integer** to execute the operation.
+integer type.
+The string stored at the key is interpreted as a base-10 **64 bit signed
+integer** to execute the operation.
Redis stores integers in their integer representation, so for string values
that actually hold an integer, there is no overhead for storing the string
@@ -25,9 +27,11 @@ representation of the integer.
## Pattern: Counter
The counter pattern is the most obvious thing you can do with Redis atomic
-increment operations. The idea is simply send an `INCR` command to Redis every
-time an operation occurs. For instance in a web application we may want to know
-how many page views this user did every day of the year.
+increment operations.
+The idea is to simply send an `INCR` command to Redis every time an operation
+occurs.
+For instance in a web application we may want to know how many page views this
+user did every day of the year.
To do so the web application may simply increment a key every time the user
performs a page view, creating the key name concatenating the User ID and a
@@ -42,15 +46,15 @@ This simple pattern can be extended in many ways:
and reset it to zero.
* Using other atomic increment/decrement commands like `DECR` or `INCRBY` it
is possible to handle values that may get bigger or smaller depending on the
- operations performed by the user. Imagine for instance the score of different
- users in an online game.
+ operations performed by the user.
+ Imagine for instance the score of different users in an online game.
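
A short sketch of the daily page view counter described above, assuming
`redis-py`; the `views:<user id>:<date>` key layout is only illustrative:

    import datetime
    import redis

    r = redis.Redis()

    def record_page_view(user_id):
        # one counter per user and per day, e.g. "views:42:2012-03-06"
        day = datetime.date.today().isoformat()
        return r.incr("views:%s:%s" % (user_id, day))

    record_page_view(42)
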
## Pattern: Rate limiter
-The rate limiter pattern is a special counter that is used to limit the rate
-at which an operation can be performed. The classical materialization of this
-pattern involves limiting the number of requests that can be performed against a
-public API.
+The rate limiter pattern is a special counter that is used to limit the rate at
+which an operation can be performed.
+The classical materialization of this pattern involves limiting the number of
+requests that can be performed against a public API.
We provide two implementations of this pattern using `INCR`, where we assume
that the problem to solve is limiting the number of API calls to a maximum of
@@ -74,9 +78,10 @@ The more simple and direct implementation of this pattern is the following:
PERFORM_API_CALL()
END
-Basically we have a counter for every IP, for every different second. But this
-counters are always incremented setting an expire of 10 seconds so that they'll
-be removed by Redis automatically when the current second is a different one.
+Basically we have a counter for every IP, for every different second.
+But these counters are always incremented while setting an expire of 10
+seconds, so that they'll be removed by Redis automatically when the current
+second is a different one.
Note the use of `MULTI` and `EXEC` in order to make sure that we'll both
increment and set the expire at every API call.
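
A sketch of this first limiter in client code, assuming `redis-py`; the
`MULTI` / `EXEC` block is expressed as a transactional pipeline, and a limit of
10 calls per second is assumed:

    import time
    import redis

    r = redis.Redis()

    def limit_api_call(ip):
        key = "rate:%s:%d" % (ip, int(time.time()))   # one counter per IP, per second
        current = r.get(key)
        if current is not None and int(current) > 10:
            return False                              # too many requests this second
        pipe = r.pipeline(transaction=True)           # MULTI ... EXEC
        pipe.incr(key)
        pipe.expire(key, 10)                          # Redis removes the counter later
        pipe.execute()
        return True
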
@@ -84,7 +89,8 @@ increment and set the expire at every API call.
## Pattern: Rate limiter 2
An alternative implementation uses a single counter, but is a bit more complex
-to get it right without race conditions. We'll examine different variants.
+to get it right without race conditions.
+We'll examine different variants.
FUNCTION LIMIT_API_CALL(ip):
current = GET(ip)
@@ -99,13 +105,13 @@ to get it right without race conditions. We'll examine different variants.
END
The counter is created in a way that it only will survive one second, starting
-from the first request performed in the current second. If there are more than
-10 requests in the same second the counter will reach a value greater than 10,
-otherwise it will expire and start again from 0.
+from the first request performed in the current second.
+If there are more than 10 requests in the same second the counter will reach a
+value greater than 10, otherwise it will expire and start again from 0.
-**In the above code there is a race condition**. If for some reason the client
-performs the `INCR` command but does not perform the `EXPIRE` the key will be
-leaked until we'll see the same IP address again.
+**In the above code there is a race condition**.
+If for some reason the client performs the `INCR` command but does not perform
+the `EXPIRE`, the key will be leaked until we see the same IP address again.
This can be fixed easily by turning the `INCR` with optional `EXPIRE` into a Lua
script that is sent using the `EVAL` command (only available since Redis version
@@ -118,10 +124,10 @@ script that is send using the `EVAL` command (only available since Redis version
end
There is a different way to fix this issue without using scripting, but using
-Redis lists instead of counters. The implementation is more complex and uses
-more advanced features but has the advantage of remembering the IP addresses
-of the clients currently performing an API call, that may be useful or not
-depending on the application.
+Redis lists instead of counters.
+The implementation is more complex and uses more advanced features but has the
+advantage of remembering the IP addresses of the clients currently performing an
+API call, which may or may not be useful depending on the application.
FUNCTION LIMIT_API_CALL(ip)
current = LLEN(ip)
@@ -143,5 +149,6 @@ The `RPUSHX` command only pushes the element if the key already exists.
Note that we have a race here, but it is not a problem: `EXISTS` may return
false but the key may be created by another client before we create it inside
-the `MULTI` / `EXEC` block. However this race will just miss an API call under
-rare conditions, so the rate limiting will still work correctly.
+the `MULTI` / `EXEC` block.
+However this race will just miss an API call under rare conditions, so the rate
+limiting will still work correctly.
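
For the scripted fix mentioned above, a hedged sketch of sending such a script
with `EVAL` from `redis-py`; the Lua body is only an illustration of the idea
(an atomic `INCR` plus a conditional `EXPIRE`), not necessarily the exact
script used in the full document:

    import redis

    r = redis.Redis()

    LIMIT_SCRIPT = """
    local current = redis.call('incr', KEYS[1])
    if current == 1 then
        redis.call('expire', KEYS[1], 1)
    end
    return current
    """

    def limit_api_call(ip):
        # EVAL script numkeys key [key ...]
        return r.eval(LIMIT_SCRIPT, 1, ip) <= 10
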
9 commands/incrby.md
@@ -1,7 +1,8 @@
-Increments the number stored at `key` by `increment`. If the key does not exist,
-it is set to `0` before performing the operation. An error is returned if the
-key contains a value of the wrong type or contains a string that can not be
-represented as integer. This operation is limited to 64 bit signed integers.
+Increments the number stored at `key` by `increment`.
+If the key does not exist, it is set to `0` before performing the operation.
+An error is returned if the key contains a value of the wrong type or contains a
+string that cannot be represented as an integer.
+This operation is limited to 64 bit signed integers.
See `INCR` for extra information on increment/decrement operations.
11 commands/incrbyfloat.md
@@ -1,7 +1,7 @@
-Increment the string representing a floating point number stored at `key`
-by the specified `increment`. If the key does not exist, it is set to `0`
-before performing the operation. An error is returned if one of the following
-conditions occur:
+Increment the string representing a floating point number stored at `key` by the
+specified `increment`.
+If the key does not exist, it is set to `0` before performing the operation.
+An error is returned if one of the following conditions occurs:
* The key contains a value of the wrong type (not a string).
* The current key content or the specified increment are not parsable as a
@@ -15,7 +15,8 @@ Both the value already contained in the string key and the increment argument
can be optionally provided in exponential notation, however the value computed
after the increment is stored consistently in the same format, that is, an
integer number followed (if needed) by a dot, and a variable number of digits
-representing the decimal part of the number. Trailing zeroes are always removed.
+representing the decimal part of the number.
+Trailing zeroes are always removed.
The precision of the output is fixed at 17 digits after the decimal point
regardless of the actual internal precision of the computation.
22 commands/info.md
@@ -27,20 +27,24 @@ All the fields are in the form of `field:value` terminated by `\r\n`.
as [`tcmalloc`][hcgcpgp]
* `used_memory_rss` is the number of bytes that Redis allocated as seen by the
- operating system. Optimally, this number is close to `used_memory` and there
- is little memory fragmentation. This is the number reported by tools such as
- `top` and `ps`. A large difference between these numbers means there is memory
- fragmentation. Because Redis does not have control over how its allocations
- are mapped to memory pages, `used_memory_rss` is often the result of a spike
- in memory usage. The ratio between `used_memory_rss` and `used_memory` is
- given as `mem_fragmentation_ratio`.
+ operating system.
+ Optimally, this number is close to `used_memory` and there is little memory
+ fragmentation.
+ This is the number reported by tools such as `top` and `ps`.
+ A large difference between these numbers means there is memory fragmentation.
+ Because Redis does not have control over how its allocations are mapped to
+ memory pages, `used_memory_rss` is often the result of a spike in memory
+ usage.
+ The ratio between `used_memory_rss` and `used_memory` is given as
+ `mem_fragmentation_ratio`.
* `changes_since_last_save` refers to the number of operations that produced
some kind of change in the dataset since the last time either `SAVE` or
`BGSAVE` was called.
* `allocation_stats` holds a histogram containing the number of allocations of a
- certain size (up to 256). This provides a means of introspection for the type
- of allocations performed by Redis at run time.
+ certain size (up to 256).
+ This provides a means of introspection for the type of allocations performed
+ by Redis at run time.
[hcgcpgp]: http://code.google.com/p/google-perftools/
17 commands/keys.md
@@ -1,15 +1,18 @@
Returns all keys matching `pattern`.
While the time complexity for this operation is O(N), the constant times are
-fairly low. For example, Redis running on an entry level laptop can scan a 1
-million key database in 40 milliseconds.
+fairly low.
+For example, Redis running on an entry level laptop can scan a 1 million key
+database in 40 milliseconds.
**Warning**: consider `KEYS` as a command that should only be used in production
-environments with extreme care. It may ruin performance when it is executed
-against large databases. This command is intended for debugging and special
-operations, such as changing your keyspace layout. Don't use `KEYS` in your
-regular application code. If you're looking for a way to find keys in a subset
-of your keyspace, consider using [sets][tdts].
+environments with extreme care.
+It may ruin performance when it is executed against large databases.
+This command is intended for debugging and special operations, such as changing
+your keyspace layout.
+Don't use `KEYS` in your regular application code.
+If you're looking for a way to find keys in a subset of your keyspace, consider
+using [sets][tdts].
[tdts]: /topics/data-types#sets
8 commands/lastsave.md
@@ -1,7 +1,7 @@
-Return the UNIX TIME of the last DB save executed with success. A client may
-check if a `BGSAVE` command succeeded reading the `LASTSAVE` value, then
-issuing a `BGSAVE` command and checking at regular intervals every N seconds if
-`LASTSAVE` changed.
+Return the UNIX TIME of the last DB save executed with success.
+A client may check if a `BGSAVE` command succeeded by reading the `LASTSAVE`
+value, then issuing a `BGSAVE` command and checking at regular intervals every N
+seconds if `LASTSAVE` changed.
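
A small polling sketch of that check, assuming `redis-py` (its `lastsave()` and
`bgsave()` helpers wrap the two commands):

    import time
    import redis

    r = redis.Redis()

    before = r.lastsave()            # time of the last successful save
    r.bgsave()                       # start a background save
    while r.lastsave() == before:
        time.sleep(1)                # poll until LASTSAVE changes
    print("background save completed")
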
@return
11 commands/lindex.md
@@ -1,8 +1,9 @@
-Returns the element at index `index` in the list stored at `key`. The index
-is zero-based, so `0` means the first element, `1` the second element and so
-on. Negative indices can be used to designate elements starting at the tail of
-the list. Here, `-1` means the last element, `-2` means the penultimate and so
-forth.
+Returns the element at index `index` in the list stored at `key`.
+The index is zero-based, so `0` means the first element, `1` the second element
+and so on.
+Negative indices can be used to designate elements starting at the tail of the
+list.
+Here, `-1` means the last element, `-2` means the penultimate and so forth.
When the value at `key` is not a list, an error is returned.
6 commands/llen.md
@@ -1,6 +1,6 @@
-Returns the length of the list stored at `key`. If `key` does not exist, it is
-interpreted as an empty list and `0` is returned. An error is returned when the
-value stored at `key` is not a list.
+Returns the length of the list stored at `key`.
+If `key` does not exist, it is interpreted as an empty list and `0` is returned.
+An error is returned when the value stored at `key` is not a list.
@return
22 commands/lpush.md
@@ -1,13 +1,14 @@
-Insert all the specified values at the head of the list stored at `key`. If
-`key` does not exist, it is created as empty list before performing the push
-operations. When `key` holds a value that is not a list, an error is returned.
+Insert all the specified values at the head of the list stored at `key`.
+If `key` does not exist, it is created as an empty list before performing the push
+operations.
+When `key` holds a value that is not a list, an error is returned.
It is possible to push multiple elements using a single command call just
-specifying multiple arguments at the end of the command. Elements are inserted
-one after the other to the head of the list, from the leftmost element to the
-rightmost element. So for instance the command `LPUSH mylist a b c` will result
-into a list containing `c` as first element, `b` as second element and `a` as
-third element.
+specifying multiple arguments at the end of the command.
+Elements are inserted one after the other to the head of the list, from the
+leftmost element to the rightmost element.
+So for instance the command `LPUSH mylist a b c` will result in a list
+containing `c` as first element, `b` as second element and `a` as third element.
@return
@@ -15,8 +16,9 @@ third element.
@history
-* `>= 2.4`: Accepts multiple `value` arguments. In Redis versions older than 2.4
- it was possible to push a single value per command.
+* `>= 2.4`: Accepts multiple `value` arguments.
+ In Redis versions older than 2.4 it was possible to push a single value per
+ command.
@examples
5 commands/lpushx.md
@@ -1,6 +1,7 @@
Inserts `value` at the head of the list stored at `key`, only if `key` already
-exists and holds a list. In contrary to `LPUSH`, no operation will be performed
-when `key` does not yet exist.
+exists and holds a list.
+Contrary to `LPUSH`, no operation will be performed when `key` does not yet
+exist.
@return
29 commands/lrange.md
@@ -1,24 +1,27 @@
-Returns the specified elements of the list stored at `key`. The offsets `start`
-and `stop` are zero-based indexes, with `0` being the first element of the list
-(the head of the list), `1` being the next element and so on.
+Returns the specified elements of the list stored at `key`.
+The offsets `start` and `stop` are zero-based indexes, with `0` being the first
+element of the list (the head of the list), `1` being the next element and so
+on.
These offsets can also be negative numbers indicating offsets starting at the
-end of the list. For example, `-1` is the last element of the list, `-2` the
-penultimate, and so on.
+end of the list.
+For example, `-1` is the last element of the list, `-2` the penultimate, and so
+on.
## Consistency with range functions in various programming languages
-Note that if you have a list of numbers from 0 to 100, `LRANGE list 0 10`
-will return 11 elements, that is, the rightmost item is included. This **may
-or may not** be consistent with behavior of range-related functions in your
-programming language of choice (think Ruby's `Range.new`, `Array#slice` or
-Python's `range()` function).
+Note that if you have a list of numbers from 0 to 100, `LRANGE list 0 10` will
+return 11 elements, that is, the rightmost item is included.
+This **may or may not** be consistent with behavior of range-related functions
+in your programming language of choice (think Ruby's `Range.new`, `Array#slice`
+or Python's `range()` function).
## Out-of-range indexes
-Out of range indexes will not produce an error. If `start` is larger than the
-end of the list, an empty list is returned. If `stop` is larger than the actual
-end of the list, Redis will treat it like the last element of the list.
+Out of range indexes will not produce an error.
+If `start` is larger than the end of the list, an empty list is returned.
+If `stop` is larger than the actual end of the list, Redis will treat it like
+the last element of the list.
@return
4 commands/lrem.md
@@ -1,6 +1,6 @@
Removes the first `count` occurrences of elements equal to `value` from the list
-stored at `key`. The `count` argument influences the operation in the following
-ways:
+stored at `key`.
+The `count` argument influences the operation in the following ways:
* `count > 0`: Remove elements equal to `value` moving from head to tail.
* `count < 0`: Remove elements equal to `value` moving from tail to head.
4 commands/lset.md
@@ -1,5 +1,5 @@
-Sets the list element at `index` to `value`. For more information on the `index`
-argument, see `LINDEX`.
+Sets the list element at `index` to `value`.
+For more information on the `index` argument, see `LINDEX`.
An error is returned for out of range indexes.
22 commands/ltrim.md
@@ -1,6 +1,7 @@
Trim an existing list so that it will contain only the specified range of
-elements specified. Both `start` and `stop` are zero-based indexes, where `0` is
-the first element of the list (the head), `1` the next element and so on.
+elements.
+Both `start` and `stop` are zero-based indexes, where `0` is the first element
+of the list (the head), `1` the next element and so on.
For example: `LTRIM foobar 0 2` will modify the list stored at `foobar` so that
only the first three elements of the list will remain.
@@ -11,19 +12,22 @@ element and so on.
Out of range indexes will not produce an error: if `start` is larger than the
end of the list, or `start > end`, the result will be an empty list (which
-causes `key` to be removed). If `end` is larger than the end of the list, Redis
-will treat it like the last element of the list.
+causes `key` to be removed).
+If `end` is larger than the end of the list, Redis will treat it like the last
+element of the list.
-A common use of `LTRIM` is together with `LPUSH` / `RPUSH`. For example:
+A common use of `LTRIM` is together with `LPUSH` / `RPUSH`.
+For example:
LPUSH mylist someelement
LTRIM mylist 0 99
This pair of commands will push a new element on the list, while making sure
-that the list will not grow larger than 100 elements. This is very useful when
-using Redis to store logs for example. It is important to note that when used
-in this way `LTRIM` is an O(1) operation because in the average case just one
-element is removed from the tail of the list.
+that the list will not grow larger than 100 elements.
+This is very useful when using Redis to store logs for example.
+It is important to note that when used in this way `LTRIM` is an O(1) operation
+because in the average case just one element is removed from the tail of the
+list.
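
A sketch of such a capped log, assuming `redis-py`; push and trim are grouped in
one transactional pipeline (`MULTI` / `EXEC`) so the list never exceeds 100
entries:

    import redis

    r = redis.Redis()

    def log(line):
        pipe = r.pipeline(transaction=True)   # MULTI ... EXEC
        pipe.lpush("mylist", line)            # newest entry goes to the head
        pipe.ltrim("mylist", 0, 99)           # keep only the latest 100 entries
        pipe.execute()
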
@return
7 commands/mget.md
@@ -1,6 +1,7 @@
-Returns the values of all specified keys. For every key that does not hold a
-string value or does not exist, the special value `nil` is returned. Because of
-this, the operation never fails.
+Returns the values of all specified keys.
+For every key that does not hold a string value or does not exist, the special
+value `nil` is returned.
+Because of this, the operation never fails.
@return
25 commands/migrate.md
@@ -1,27 +1,28 @@
Atomically transfer a key from a source Redis instance to a destination Redis
-instance. On success the key is deleted from the original instance and is
-guaranteed to exist in the target instance.
+instance.
+On success the key is deleted from the original instance and is guaranteed to
+exist in the target instance.
The command is atomic and blocks the two instances for the time required to
transfer the key; at any given time the key will appear to exist in a given
instance or in the other instance, unless a timeout error occurs.
The command internally uses `DUMP` to generate the serialized version of the key
-value, and `RESTORE` in order to synthesize the key in the target instance. The
-source instance acts as a client for the target instance. If the target instance
-returns OK to the `RESTORE` command, the source instance deletes the key using
-`DEL`.
+value, and `RESTORE` in order to synthesize the key in the target instance.
+The source instance acts as a client for the target instance.
+If the target instance returns OK to the `RESTORE` command, the source instance
+deletes the key using `DEL`.
The timeout specifies the maximum idle time in any moment of the communication
-with the destination instance in milliseconds. This means that the operation
-does not need to be completed within the specified amount of milliseconds, but
-that the transfer should make progresses without blocking for more than the
-specified amount of milliseconds.
+with the destination instance in milliseconds.
+This means that the operation does not need to be completed within the specified
+amount of milliseconds, but that the transfer should make progress without
+blocking for more than the specified amount of milliseconds.
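
A hedged sketch of issuing the command from `redis-py` through the generic
`execute_command` helper; the host, port and key are placeholders:

    import redis

    r = redis.Redis()   # connected to the source instance

    # MIGRATE host port key destination-db timeout (timeout in milliseconds)
    r.execute_command("MIGRATE", "192.168.1.34", 6379, "mykey", 0, 5000)
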
`MIGRATE` needs to perform I/O operations and to honor the specified timeout.
When there is an I/O error during the transfer or if the timeout is reached the
-operation is aborted and the special error - `IOERR` returned. When this happens
-the following two cases are possible:
+operation is aborted and the special `IOERR` error is returned.
+When this happens the following two cases are possible:
* The key may be on both the instances.
* The key may be only in the source instance.
17 commands/monitor.md
@@ -1,6 +1,7 @@
-`MONITOR` is a debugging command that streams back every command processed
-by the Redis server. It can help in understanding what is happening to the
-database. This command can both be used via `redis-cli` and via `telnet`.
+`MONITOR` is a debugging command that streams back every command processed by
+the Redis server.
+It can help in understanding what is happening to the database.
+This command can both be used via `redis-cli` and via `telnet`.
The ability to see all the requests processed by the server is useful in order
to spot bugs in an application both when using Redis as a database and as a
@@ -37,9 +38,9 @@ Manually issue the `QUIT` command to stop a `MONITOR` stream running via
## Cost of running `MONITOR`
-Because `MONITOR` streams back **all** commands, its use comes at a cost. The
-following (totally unscientific) benchmark numbers illustrate what the cost of
-running `MONITOR` can be.
+Because `MONITOR` streams back **all** commands, its use comes at a cost.
+The following (totally unscientific) benchmark numbers illustrate what the cost
+of running `MONITOR` can be.
Benchmark result **without** `MONITOR` running:
@@ -60,8 +61,8 @@ Benchmark result **with** `MONITOR` running (`redis-cli monitor > /dev/null`):
INCR: 41771.09 requests per second
In this particular case, running a single `MONITOR` client can reduce the
-throughput by more than 50%. Running more `MONITOR` clients will reduce
-throughput even more.
+throughput by more than 50%.
+Running more `MONITOR` clients will reduce throughput even more.
@return
7 commands/move.md
@@ -1,7 +1,8 @@
Move `key` from the currently selected database (see `SELECT`) to the specified
-destination database. When `key` already exists in the destination database, or
-it does not exist in the source database, it does nothing. It is possible to use
-`MOVE` as a locking primitive because of this.
+destination database.
+When `key` already exists in the destination database, or it does not exist in
+the source database, it does nothing.
+It is possible to use `MOVE` as a locking primitive because of this.
@return
11 commands/mset.md
@@ -1,9 +1,10 @@
-Sets the given keys to their respective values. `MSET` replaces existing values
-with new values, just as regular `SET`. See `MSETNX` if you don't want to
-overwrite existing values.
+Sets the given keys to their respective values.
+`MSET` replaces existing values with new values, just as regular `SET`.
+See `MSETNX` if you don't want to overwrite existing values.
-`MSET` is atomic, so all given keys are set at once. It is not possible for
-clients to see that some of the keys were updated while others are unchanged.
+`MSET` is atomic, so all given keys are set at once.
+It is not possible for clients to see that some of the keys were updated while
+others are unchanged.
@return
10 commands/msetnx.md
@@ -1,12 +1,14 @@
-Sets the given keys to their respective values. `MSETNX` will not perform any
-operation at all even if just a single key already exists.
+Sets the given keys to their respective values.
+`MSETNX` will not perform any operation at all even if just a single key already
+exists.
Because of this semantic `MSETNX` can be used in order to set different keys
representing different fields of a unique logical object in a way that ensures
that either all the fields or none at all are set.
-`MSETNX` is atomic, so all given keys are set at once. It is not possible for
-clients to see that some of the keys were updated while others are unchanged.
+`MSETNX` is atomic, so all given keys are set at once.
+It is not possible for clients to see that some of the keys were updated while
+others are unchanged.
@return
4 commands/multi.md
@@ -1,5 +1,5 @@
-Marks the start of a [transaction][tt] block. Subsequent commands will be queued
-for atomic execution using `EXEC`.
+Marks the start of a [transaction][tt] block.
+Subsequent commands will be queued for atomic execution using `EXEC`.
[tt]: /topics/transactions
35 commands/object.md
@@ -1,14 +1,16 @@
The `OBJECT` command allows inspecting the internals of Redis Objects associated
-with keys. It is useful for debugging or to understand if your keys are using
-the specially encoded data types to save space. Your application may also use
-the information reported by the `OBJECT` command to implement application level
-key eviction policies when using Redis as a Cache.
+with keys.
+It is useful for debugging or to understand if your keys are using the specially
+encoded data types to save space.
+Your application may also use the information reported by the `OBJECT` command
+to implement application level key eviction policies when using Redis as a
+Cache.
The `OBJECT` command supports multiple sub commands:
* `OBJECT REFCOUNT <key>` returns the number of references of the value
- associated with the specified key. This command is mainly useful for
- debugging.
+ associated with the specified key.
+ This command is mainly useful for debugging.
* `OBJECT ENCODING <key>` returns the kind of internal representation used in
order to store the value associated with a key.
* `OBJECT IDLETIME <key>` returns the number of seconds since the object stored
@@ -21,15 +23,18 @@ Objects can be encoded in different ways:
* Strings can be encoded as `raw` (normal string encoding) or `int` (strings
representing integers in a 64 bit signed interval are encoded in this way in
order to save space).
-* Lists can be encoded as `ziplist` or `linkedlist`. The `ziplist` is the
- special representation that is used to save space for small lists.
-* Sets can be encoded as `intset` or `hashtable`. The `intset` is a special
- encoding used for small sets composed solely of integers.
-* Hashes can be encoded as `zipmap` or `hashtable`. The `zipmap` is a special
- encoding used for small hashes.
-* Sorted Sets can be encoded as `ziplist` or `skiplist` format. As for the List
- type small sorted sets can be specially encoded using `ziplist`, while the
- `skiplist` encoding is the one that works with sorted sets of any size.
+* Lists can be encoded as `ziplist` or `linkedlist`.
+ The `ziplist` is the special representation that is used to save space for
+ small lists.
+* Sets can be encoded as `intset` or `hashtable`.
+ The `intset` is a special encoding used for small sets composed solely of
+ integers.
+* Hashes can be encoded as `zipmap` or `hashtable`.
+ The `zipmap` is a special encoding used for small hashes.
+* Sorted Sets can be encoded as `ziplist` or `skiplist` format.
+  As with the List type, small sorted sets can be specially encoded using
+ `ziplist`, while the `skiplist` encoding is the one that works with sorted
+ sets of any size.
All the specially encoded types are automatically converted to the general type
once you perform an operation that makes it impossible for Redis to retain the
5 commands/ping.md
@@ -1,5 +1,6 @@
-Returns `PONG`. This command is often