
Documented CLIENT LIST, CLIENT KILL and INFO #152

Merged
merged 9 commits into from over 1 year ago

3 participants

Didier Spezia Pieter Noordhuis Salvatore Sanfilippo
Didier Spezia

Hi Pieter/Salvatore,

here is my yearly contribution to redis-doc.
This pull request contains:

  • documentation for CLIENT LIST, CLIENT KILL, and INFO
  • a small refresh of the benchmark page (pipelining, etc.)
  • some minor fixes in the display of the complexity of some commands

Best regards,
Didier.

Pieter Noordhuis
Collaborator

This is great Didier, thanks a lot!

Salvatore Sanfilippo antirez merged commit 2403a59 into from August 03, 2012
Salvatore Sanfilippo antirez closed this August 03, 2012
Salvatore Sanfilippo
Owner

Awesome! Thanks, merged :)

Didier Spezia

Thanks!

19  commands.json
@@ -133,6 +133,24 @@
     "since": "2.2.0",
     "group": "list"
   },
+  "CLIENT KILL": {
+    "summary": "Kill the connection of a client",
+    "complexity": "O(N) where N is the number of client connections",
+    "arguments": [
+      {
+        "name": "ip:port",
+        "type": "string"
+      }
+    ],
+    "since": "2.4.0",
+    "group": "server"
+  },
+  "CLIENT LIST": {
+    "summary": "Get the list of client connections",
+    "complexity": "O(N) where N is the number of client connections",
+    "since": "2.4.0",
+    "group": "server"
+  },
   "CONFIG GET": {
     "summary": "Get the value of a configuration parameter",
     "arguments": [
@@ -1728,6 +1746,7 @@
   },
   "TIME": {
     "summary": "Return the current server time",
+    "complexity": "O(1)",
     "since": "2.6.0",
     "group": "server"
   },
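The `commands.json` entries above are plain data consumed by documentation tooling; a minimal sketch of rendering a one-line synopsis from such an entry (the output format here is hypothetical, but the field names come straight from the diff):

```python
import json

# Fragment mirroring one of the entries added in this pull request.
data = json.loads("""
{
  "CLIENT KILL": {
    "summary": "Kill the connection of a client",
    "complexity": "O(N) where N is the number of client connections",
    "arguments": [{"name": "ip:port", "type": "string"}],
    "since": "2.4.0",
    "group": "server"
  }
}
""")

def synopsis(name, meta):
    # "arguments" is optional (e.g. the CLIENT LIST entry has none).
    args = " ".join(arg["name"] for arg in meta.get("arguments", []))
    line = f"{name} {args}".strip()
    return f"{line} - {meta['summary']} (since {meta['since']})"

for name, meta in data.items():
    print(synopsis(name, meta))
```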
15  commands/client kill.md
@@ -0,0 +1,15 @@
+The `CLIENT KILL` command closes a given client connection identified
+by ip:port.
+
+The ip:port should match a line returned by the `CLIENT LIST` command.
+
+Due to the single-threaded nature of Redis, it is not possible to
+kill a client connection while it is executing a command. From
+the client's point of view, the connection can never be closed
+in the middle of the execution of a command. However, the client
+will notice the connection has been closed only when the
+next command is sent (and results in a network error).
+
+@return
+
+@status-reply: `OK` if the connection exists and has been closed
59  commands/client list.md
@@ -0,0 +1,59 @@
+The `CLIENT LIST` command returns information and statistics about the client
+connections to the server in a mostly human-readable format.
+
+@return
+
+@bulk-reply: a single string, formatted as follows:
+
+*   One client connection per line (separated by LF)
+*   Each line is composed of a succession of property=value fields separated
+    by a space character.
+
+Here is the meaning of the fields:
+
+*   addr: address/port of the client
+*   fd: file descriptor corresponding to the socket
+*   age: total duration of the connection in seconds
+*   idle: idle time of the connection in seconds
+*   flags: client flags (see below)
+*   db: current database ID
+*   sub: number of channel subscriptions
+*   psub: number of pattern matching subscriptions
+*   multi: number of commands in a MULTI/EXEC context
+*   qbuf: query buffer length (0 means no query pending)
+*   qbuf-free: free space of the query buffer (0 means the buffer is full)
+*   obl: output buffer length
+*   oll: output list length (replies are queued in this list when the buffer is full)
+*   omem: output buffer memory usage
+*   events: file descriptor events (see below)
+*   cmd: last command played
+
+The client flags can be a combination of:
+
+```
+O: the client is a slave in MONITOR mode
+S: the client is a normal slave server
+M: the client is a master
+x: the client is in a MULTI/EXEC context
+b: the client is waiting in a blocking operation
+i: the client is waiting for a VM I/O (deprecated)
+d: a watched key has been modified - EXEC will fail
+c: connection to be closed after writing entire reply
+u: the client is unblocked
+A: connection to be closed ASAP
+N: no specific flag set
+```
+
+The file descriptor events can be:
+
+```
+r: the client socket is readable (event loop)
+w: the client socket is writable (event loop)
+```
+
+## Notes
+
+New fields are regularly added for debugging purposes. Some could be removed
+in the future. A version-safe Redis client using this command should parse
+the output accordingly (i.e. gracefully handling missing fields and skipping
+unknown fields).
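The version-safety advice in the notes above can be sketched as a tolerant parser (a hypothetical helper, not part of any Redis client library):

```python
def parse_client_list(payload: str) -> list:
    """Parse CLIENT LIST output into one dict per connection.

    Unknown fields are kept verbatim and missing fields simply do not
    appear, so callers should read them with .get() and a default.
    """
    clients = []
    for line in payload.splitlines():
        fields = {}
        for pair in line.split(" "):
            if "=" in pair:
                key, _, value = pair.partition("=")
                fields[key] = value
        if fields:
            clients.append(fields)
    return clients

# Illustrative payload only; real output has more fields.
sample = "addr=127.0.0.1:52555 fd=5 idle=0 flags=N db=0 cmd=client"
print(parse_client_list(sample))
```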
239  commands/info.md
@@ -1,53 +1,210 @@
 The `INFO` command returns information and statistics about the server in a
 format that is simple to parse by computers and easy to read by humans.
 
+The optional parameter can be used to select a specific section of information:
+
+*   `server`: General information about the Redis server
+*   `clients`: Client connections section
+*   `memory`: Memory consumption related information
+*   `persistence`: RDB and AOF related information
+*   `stats`: General statistics
+*   `replication`: Master/slave replication information
+*   `cpu`: CPU consumption statistics
+*   `commandstats`: Redis command statistics
+*   `cluster`: Redis Cluster section
+*   `keyspace`: Database related statistics
+
+It can also take the following values:
+
+*   `all`: Return all sections
+*   `default`: Return only the default set of sections
+
+When no parameter is provided, the `default` option is assumed.
+
 @return
 
-@bulk-reply: in the following format (compacted for brevity):
+@bulk-reply: as a collection of text lines.
 
-```
-redis_version:2.2.2
-uptime_in_seconds:148
-used_cpu_sys:0.01
-used_cpu_user:0.03
-used_memory:768384
-used_memory_rss:1536000
-mem_fragmentation_ratio:2.00
-changes_since_last_save:118
-keyspace_hits:174
-keyspace_misses:37
-allocation_stats:4=56,8=312,16=1498,...
-db0:keys=1240,expires=0
-```
+Lines can contain a section name (starting with a # character) or a property.
+All the properties are in the form of `field:value` terminated by `\r\n`.
 
-All the fields are in the form of `field:value` terminated by `\r\n`.
+```cli
+INFO
+```
 
 ## Notes
 
-*   `used_memory` is the total number of bytes allocated by Redis using its
-    allocator (either standard `libc` `malloc`, or an alternative allocator such
-    as [`tcmalloc`][hcgcpgp]
-
-*   `used_memory_rss` is the number of bytes that Redis allocated as seen by the
-    operating system.
-    Optimally, this number is close to `used_memory` and there is little memory
-    fragmentation.
-    This is the number reported by tools such as `top` and `ps`.
-    A large difference between these numbers means there is memory
-    fragmentation.
-    Because Redis does not have control over how its allocations are mapped to
-    memory pages, `used_memory_rss` is often the result of a spike in memory
-    usage.
-    The ratio between `used_memory_rss` and `used_memory` is given as
-    `mem_fragmentation_ratio`.
-
-*   `changes_since_last_save` refers to the number of operations that produced
-    some kind of change in the dataset since the last time either `SAVE` or
-    `BGSAVE` was called.
-
-*   `allocation_stats` holds a histogram containing the number of allocations of
-    a certain size (up to 256).
-    This provides a means of introspection for the type of allocations performed
-    by Redis at run time.
+Please note that depending on the version of Redis, some fields have been
+added or removed. A robust client application should therefore parse the
+result of this command by skipping unknown properties, and gracefully handle
+missing fields.
+
+Here is the description of fields for Redis >= 2.4.
+
+Here is the meaning of all fields in the **server** section:
+
+*   `redis_version`: Version of the Redis server
+*   `redis_git_sha1`: Git SHA1
+*   `redis_git_dirty`: Git dirty flag
+*   `os`: Operating system hosting the Redis server
+*   `arch_bits`: Architecture (32 or 64 bits)
+*   `multiplexing_api`: Event loop mechanism used by Redis
+*   `gcc_version`: Version of the GCC compiler used to compile the Redis server
+*   `process_id`: PID of the server process
+*   `run_id`: Random value identifying the Redis server (to be used by Sentinel and Cluster)
+*   `tcp_port`: TCP/IP listen port
+*   `uptime_in_seconds`: Number of seconds since Redis server start
+*   `uptime_in_days`: Same value expressed in days
+*   `lru_clock`: Clock incrementing every minute, for LRU management
+
+Here is the meaning of all fields in the **clients** section:
+
+*   `connected_clients`: Number of client connections (excluding connections from slaves)
+*   `client_longest_output_list`: Longest output list among current client connections
+*   `client_biggest_input_buf`: Biggest input buffer among current client connections
+*   `blocked_clients`: Number of clients pending on a blocking call (BLPOP, BRPOP, BRPOPLPUSH)
+
+Here is the meaning of all fields in the **memory** section:
+
+*   `used_memory`: Total number of bytes allocated by Redis using its
+    allocator (either standard **libc**, **jemalloc**, or an alternative allocator such
+    as [**tcmalloc**][hcgcpgp])
+*   `used_memory_human`: Human-readable representation of the previous value
+*   `used_memory_rss`: Number of bytes that Redis allocated as seen by the
+    operating system (a.k.a. resident set size). This is the number reported by tools
+    such as **top** and **ps**.
+*   `used_memory_peak`: Peak memory consumed by Redis (in bytes)
+*   `used_memory_peak_human`: Human-readable representation of the previous value
+*   `used_memory_lua`: Number of bytes used by the Lua engine
+*   `mem_fragmentation_ratio`: Ratio between `used_memory_rss` and `used_memory`
+*   `mem_allocator`: Memory allocator, chosen at compile time
+
+Ideally, the `used_memory_rss` value should be only slightly higher than `used_memory`.
+When rss >> used, a large difference means there is memory fragmentation
+(internal or external), which can be evaluated by checking `mem_fragmentation_ratio`.
+When used >> rss, it means part of Redis memory has been swapped out by the operating
+system: expect some significant latencies.
+
+Because Redis does not have control over how its allocations are mapped to
+memory pages, a high `used_memory_rss` is often the result of a spike in memory
+usage.
+
+When Redis frees memory, the memory is given back to the allocator, and the
+allocator may or may not give the memory back to the system. There may be
+a discrepancy between the `used_memory` value and memory consumption as
+reported by the operating system. It may be due to the fact that memory has been
+used and released by Redis, but not given back to the system. The `used_memory_peak`
+value is generally useful to check this point.
+
+Here is the meaning of all fields in the **persistence** section:
+
+*   `loading`: Flag indicating if the load of a dump file is on-going
+*   `rdb_changes_since_last_save`: Number of changes since the last dump
+*   `rdb_bgsave_in_progress`: Flag indicating an RDB save is on-going
+*   `rdb_last_save_time`: Epoch-based timestamp of the last successful RDB save
+*   `rdb_last_bgsave_status`: Status of the last RDB save operation
+*   `rdb_last_bgsave_time_sec`: Duration of the last RDB save operation in seconds
+*   `rdb_current_bgsave_time_sec`: Duration of the on-going RDB save operation if any
+*   `aof_enabled`: Flag indicating AOF logging is activated
+*   `aof_rewrite_in_progress`: Flag indicating an AOF rewrite operation is on-going
+*   `aof_rewrite_scheduled`: Flag indicating an AOF rewrite operation
+    will be scheduled once the on-going RDB save is complete
+*   `aof_last_rewrite_time_sec`: Duration of the last AOF rewrite operation in seconds
+*   `aof_current_rewrite_time_sec`: Duration of the on-going AOF rewrite operation if any
+*   `aof_last_bgrewrite_status`: Status of the last AOF rewrite operation
+
+`rdb_changes_since_last_save` refers to the number of operations that produced
+some kind of change in the dataset since the last time either `SAVE` or
+`BGSAVE` was called.
+
+If AOF is activated, these additional fields will be added:
+
+*   `aof_current_size`: AOF current file size
+*   `aof_base_size`: AOF file size on latest startup or rewrite
+*   `aof_pending_rewrite`: Flag indicating an AOF rewrite operation
+    will be scheduled once the on-going RDB save is complete
+*   `aof_buffer_length`: Size of the AOF buffer
+*   `aof_rewrite_buffer_length`: Size of the AOF rewrite buffer
+*   `aof_pending_bio_fsync`: Number of fsync pending jobs in the background I/O queue
+*   `aof_delayed_fsync`: Delayed fsync counter
+
+If a load operation is on-going, these additional fields will be added:
+
+*   `loading_start_time`: Epoch-based timestamp of the start of the load operation
+*   `loading_total_bytes`: Total file size
+*   `loading_loaded_bytes`: Number of bytes already loaded
+*   `loading_loaded_perc`: Same value expressed as a percentage
+*   `loading_eta_seconds`: ETA in seconds for the load to be complete
+
+Here is the meaning of all fields in the **stats** section:
+
+*   `total_connections_received`: Total number of connections accepted by the server
+*   `total_commands_processed`: Total number of commands processed by the server
+*   `instantaneous_ops_per_sec`: Number of commands processed per second
+*   `rejected_connections`: Number of connections rejected because of the maxclients limit
+*   `expired_keys`: Total number of key expiration events
+*   `evicted_keys`: Number of evicted keys due to the maxmemory limit
+*   `keyspace_hits`: Number of successful lookups of keys in the main dictionary
+*   `keyspace_misses`: Number of failed lookups of keys in the main dictionary
+*   `pubsub_channels`: Global number of pub/sub channels with client subscriptions
+*   `pubsub_patterns`: Global number of pub/sub patterns with client subscriptions
+*   `latest_fork_usec`: Duration of the latest fork operation in microseconds
+
+Here is the meaning of all fields in the **replication** section:
+
+*   `role`: Value is "master" if the instance is a slave of no one, or "slave" if the instance is enslaved to a master.
+    Note that a slave can be master of another slave (daisy chaining).
+
+If the instance is a slave, these additional fields are provided:
+
+*   `master_host`: Host or IP address of the master
+*   `master_port`: Master listening TCP port
+*   `master_link_status`: Status of the link (up/down)
+*   `master_last_io_seconds_ago`: Number of seconds since the last interaction with the master
+*   `master_sync_in_progress`: Indicates the master is SYNCing to the slave
+
+If a SYNC operation is on-going, these additional fields are provided:
+
+*   `master_sync_left_bytes`: Number of bytes left before SYNCing is complete
+*   `master_sync_last_io_seconds_ago`: Number of seconds since the last transfer I/O during a SYNC operation
+
+If the link between master and slave is down, an additional field is provided:
+
+*   `master_link_down_since_seconds`: Number of seconds since the link has been down
+
+The following field is always provided:
+
+*   `connected_slaves`: Number of connected slaves
+
+For each slave, the following line is added:
+
+*   `slaveXXX`: id, ip address, port, state
+
+Here is the meaning of all fields in the **cpu** section:
+
+*   `used_cpu_sys`: System CPU consumed by the Redis server
+*   `used_cpu_user`: User CPU consumed by the Redis server
+*   `used_cpu_sys_children`: System CPU consumed by the background processes
+*   `used_cpu_user_children`: User CPU consumed by the background processes
+
+The **commandstats** section provides statistics based on the command type,
+including the number of calls, the total CPU time consumed by these commands,
+and the average CPU consumed per command execution.
+
+For each command type, the following line is added:
+
+*   `cmdstat_XXX`: `calls=XXX,usec=XXX,usec_per_call=XXX`
+
+The **cluster** section currently only contains a single field:
+
+*   `cluster_enabled`: Indicates Redis cluster is enabled
+
+The **keyspace** section provides statistics on the main dictionary of each database.
+The statistics are the number of keys, and the number of keys with an expiration.
+
+For each database, the following line is added:
+
+*   `dbXXX`: `keys=XXX,expires=XXX`
 
 [hcgcpgp]: http://code.google.com/p/google-perftools/
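Following the structure described in the new text (a `#`-prefixed section name, then `field:value` lines), a robust parse can be sketched like this (a hypothetical helper, not an official client API):

```python
def parse_info(payload: str) -> dict:
    """Parse INFO output into {section: {field: value}}.

    Unknown fields are stored verbatim, which keeps the parser
    compatible when Redis adds or removes properties across versions.
    """
    sections = {}
    current = sections.setdefault("default", {})
    for raw in payload.splitlines():
        line = raw.strip()
        if not line:
            continue
        if line.startswith("#"):
            # A section header such as "# Memory" starts a new section.
            current = sections.setdefault(line[1:].strip().lower(), {})
        elif ":" in line:
            # Split on the first ':' only; values may contain commas and '='.
            field, _, value = line.partition(":")
            current[field] = value
    return sections

sample = "# Server\r\nredis_version:2.6.0\r\n\r\n# Keyspace\r\ndb0:keys=1240,expires=0\r\n"
print(parse_info(sample)["server"]["redis_version"])
```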
4  commands/pexpire.md
@@ -1,7 +1,3 @@
-@complexity
-
-O(1)
-
 This command works exactly like `EXPIRE` but the time to live of the key is
 specified in milliseconds instead of seconds.
 
4  commands/pexpireat.md
@@ -1,7 +1,3 @@
-@complexity
-
-O(1)
-
 `PEXPIREAT` has the same effect and semantics as `EXPIREAT`, but the Unix time at
 which the key will expire is specified in milliseconds instead of seconds.
 
4  commands/psetex.md
@@ -1,7 +1,3 @@
-@complexity
-
-O(1)
-
 `PSETEX` works exactly like `SETEX` with the sole difference that the expire
 time is specified in milliseconds instead of seconds.
 
4  commands/pttl.md
@@ -1,7 +1,3 @@
-@complexity
-
-O(1)
-
 Like `TTL` this command returns the remaining time to live of a key that has an
 expire set, with the sole difference that `TTL` returns the amount of remaining
 time in seconds while `PTTL` returns it in milliseconds.
4  commands/time.md
@@ -1,7 +1,3 @@
-@complexity
-
-O(1)
-
 The `TIME` command returns the current server time as a two-item list: a Unix
 timestamp and the amount of microseconds already elapsed in the current second.
 Basically the interface is very similar to the one of the `gettimeofday` system
BIN  topics/Data_size.png
69  topics/benchmarks.md
@@ -9,23 +9,27 @@ The following options are supported:
 
     Usage: redis-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]
 
-     -h <hostname>      Server hostname (default 127.0.0.1)
-     -p <port>          Server port (default 6379)
-     -s <socket>        Server socket (overrides host and port)
-     -c <clients>       Number of parallel connections (default 50)
-     -n <requests>      Total number of requests (default 10000)
-     -d <size>          Data size of SET/GET value in bytes (default 2)
-     -k <boolean>       1=keep alive 0=reconnect (default 1)
-     -r <keyspacelen>   Use random keys for SET/GET/INCR, random values for SADD
+    -h <hostname>      Server hostname (default 127.0.0.1)
+    -p <port>          Server port (default 6379)
+    -s <socket>        Server socket (overrides host and port)
+    -c <clients>       Number of parallel connections (default 50)
+    -n <requests>      Total number of requests (default 10000)
+    -d <size>          Data size of SET/GET value in bytes (default 2)
+    -k <boolean>       1=keep alive 0=reconnect (default 1)
+    -r <keyspacelen>   Use random keys for SET/GET/INCR, random values for SADD
       Using this option the benchmark will get/set keys
-      in the form mykey_rand000000012456 instead of constant
+      in the form mykey_rand:000000012456 instead of constant
       keys, the <keyspacelen> argument determines the max
       number of values for the random number. For instance
-      if set to 10 only rand000000000000 - rand000000000009
+      if set to 10 only rand:000000000000 - rand:000000000009
       range will be allowed.
-     -q                 Quiet. Just show query/sec values
-     -l                 Loop. Run the tests forever
-     -I                 Idle mode. Just open N idle connections and wait.
+    -P <numreq>        Pipeline <numreq> requests. Default 1 (no pipeline).
+    -q                 Quiet. Just show query/sec values
+    --csv              Output in CSV format
+    -l                 Loop. Run the tests forever
+    -t <tests>         Only run the comma separated list of tests. The test
+                       names are the same as the ones produced as output.
+    -I                 Idle mode. Just open N idle connections and wait.
 
 You need to have a running Redis instance before launching the benchmark.
 A typical example would be:
@@ -65,11 +69,6 @@ multiple CPU cores. People are supposed to launch several Redis instances to
 scale out on several cores if needed. It is not really fair to compare one
 single Redis instance to a multi-threaded data store.
 
-Then the benchmark should do the same operations, and work in the same way with
-the multiple data stores you want to compare. It is absolutely pointless to
-compare the result of redis-benchmark to the result of another benchmark
-program and extrapolate.
-
 A common misconception is that redis-benchmark is designed to make Redis
 performances look stellar, the throughput achieved by redis-benchmark being
 somewhat artificial, and not achievable by a real application. This is
@@ -77,13 +76,23 @@ actually plain wrong.
 
 The redis-benchmark program is a quick and useful way to get some figures and
 evaluate the performance of a Redis instance on a given hardware. However,
-it does not represent the maximum throughput a Redis instance can sustain.
-Actually, by using pipelining and a fast client (hiredis), it is fairly easy
-to write a program generating more throughput than redis-benchmark. The current
-version of redis-benchmark achieves throughput by exploiting concurrency only
-(i.e. it creates several connections to the server). It does not use pipelining
-or any parallelism at all (one pending query per connection at most, and
-no multi-threading).
+by default, it does not represent the maximum throughput a Redis instance can
+sustain. Actually, by using pipelining and a fast client (hiredis), it is fairly
+easy to write a program generating more throughput than redis-benchmark. The
+default behavior of redis-benchmark is to achieve throughput by exploiting
+concurrency only (i.e. it creates several connections to the server).
+It does not use pipelining or any parallelism at all (one pending query per
+connection at most, and no multi-threading).
+
+To run a benchmark using pipelining mode (and achieve higher throughput),
+you need to explicitly use the -P option. Please note that this is still
+realistic behavior since a lot of Redis based applications actively use
+pipelining to improve performance.
+
+Finally, the benchmark should apply the same operations, and work in the same way
+with the multiple data stores you want to compare. It is absolutely pointless to
+compare the result of redis-benchmark to the result of another benchmark
+program and extrapolate.
 
 For instance, Redis and memcached in single-threaded mode can be compared on
 GET/SET operations. Both are in-memory data stores, working mostly in the same
@@ -153,6 +162,16 @@ the TCP/IP loopback and unix domain sockets can be used. It depends on the
 platform, but unix domain sockets can achieve around 50% more throughput than
 the TCP/IP loopback (on Linux for instance). The default behavior of
 redis-benchmark is to use the TCP/IP loopback.
++ The performance benefit of unix domain sockets compared to TCP/IP loopback
+tends to decrease when pipelining is heavily used (i.e. long pipelines).
++ When an ethernet network is used to access Redis, aggregating commands using
+pipelining is especially efficient when the size of the data is kept under
+the ethernet packet size (about 1500 bytes). Actually, processing 10-byte,
+100-byte, or 1000-byte queries results in almost the same throughput.
+See the graph below.
+
+![Data size impact](https://github.com/dspezia/redis-doc/raw/client_command/topics/Data_size.png)
+
 + On multi CPU sockets servers, Redis performance becomes dependent on the
 NUMA configuration and process location. The most visible effect is that
 redis-benchmark results seem non deterministic because client and server
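The pipelining discussion in the diff above can be made concrete with a back-of-the-envelope model of a single connection (the RTT and per-command costs below are illustrative assumptions, not measurements):

```python
def throughput(rtt_ms: float, per_cmd_ms: float, pipeline: int = 1) -> float:
    """Requests/sec over one connection: each round trip costs one
    network RTT plus server-side processing for every command in
    the pipeline."""
    batch_ms = rtt_ms + pipeline * per_cmd_ms
    return pipeline * 1000.0 / batch_ms

# Assumed 0.2 ms loopback RTT and 0.01 ms of server work per command.
print(f"no pipeline: {throughput(0.2, 0.01):.0f} req/s")
print(f"-P 16:       {throughput(0.2, 0.01, 16):.0f} req/s")
```

With these assumptions the RTT dominates the unpipelined case, which is why `-P` raises throughput so sharply: the fixed round-trip cost is amortized over the whole batch.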