Documented CLIENT LIST, CLIENT KILL and INFO #152

Merged
merged 9 commits into antirez:master from dspezia:client_command

3 participants

@dspezia

Hi Pieter/Salvatore,

here is my yearly contribution to redis-doc.
This pull request contains:

  • documentation for CLIENT LIST, CLIENT KILL, and INFO
  • a small refresh of the benchmark page (pipelining, etc ...)
  • some minor fixes in the display of the complexity of some commands

Best regards,
Didier.

@pietern
Collaborator

This is great Didier, thanks a lot!

@antirez antirez merged commit 2403a59 into antirez:master
@antirez
Owner

Awesome! Thanks, merged :)

@dspezia

Thanks!

Commits on Jul 29, 2012
  1. @dspezia
  2. @dspezia
  3. @dspezia: Refreshed benchmark page. (mentioned new pipeline option, ethernet packet size impact, etc ...)
  4. @dspezia: Fixed data size graph
Commits on Jul 30, 2012
  1. @dspezia
Commits on Jul 31, 2012
  1. @dspezia
Commits on Aug 2, 2012
  1. @dspezia
  2. @dspezia: INFO command final drop
  3. @dspezia: Minor INFO fixes
19 commands.json
@@ -133,6 +133,24 @@
"since": "2.2.0",
"group": "list"
},
+ "CLIENT KILL": {
+ "summary": "Kill the connection of a client",
+ "complexity": "O(N) where N is the number of client connections",
+ "arguments": [
+ {
+ "name": "ip:port",
+ "type": "string"
+ }
+ ],
+ "since": "2.4.0",
+ "group": "server"
+ },
+ "CLIENT LIST": {
+ "summary": "Get the list of client connections",
+ "complexity": "O(N) where N is the number of client connections",
+ "since": "2.4.0",
+ "group": "server"
+ },
"CONFIG GET": {
"summary": "Get the value of a configuration parameter",
"arguments": [
@@ -1728,6 +1746,7 @@
},
"TIME": {
"summary": "Return the current server time",
+ "complexity": "O(1)",
"since": "2.6.0",
"group": "server"
},
15 commands/client kill.md
@@ -0,0 +1,15 @@
+The `CLIENT KILL` command closes a given client connection identified
+by ip:port.
+
+The ip:port should match a line returned by the `CLIENT LIST` command.
+
+Due to the single-threaded nature of Redis, it is not possible to
+kill a client connection while it is executing a command. From
+the client's point of view, the connection can never be closed
+in the middle of the execution of a command. However, the client
+will notice the connection has been closed only when the
+next command is sent (and results in a network error).
+
+@return
+
+@status-reply: `OK` if the connection exists and has been closed
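As a minimal illustration (the address below is hypothetical and will differ on a real server), the ip:port to kill is typically taken from the output of `CLIENT LIST`:

```
redis> CLIENT LIST
addr=127.0.0.1:52555 fd=5 age=132 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=client
redis> CLIENT KILL 127.0.0.1:52555
OK
```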
59 commands/client list.md
@@ -0,0 +1,59 @@
+The `CLIENT LIST` command returns information and statistics about the client
+connections to the server in a mostly human readable format.
+
+@return
+
+@bulk-reply: a unique string, formatted as follows:
+
+* One client connection per line (separated by LF)
+* Each line is composed of a succession of property=value fields separated
+ by a space character.
+
+Here is the meaning of the fields:
+
+* addr: address/port of the client
+* fd: file descriptor corresponding to the socket
+* age: total duration of the connection in seconds
+* idle: idle time of the connection in seconds
+* flags: client flags (see below)
+* db: current database ID
+* sub: number of channel subscriptions
+* psub: number of pattern matching subscriptions
+* multi: number of commands in a MULTI/EXEC context
+* qbuf: query buffer length (0 means no query pending)
+* qbuf-free: free space of the query buffer (0 means the buffer is full)
+* obl: output buffer length
+* oll: output list length (replies are queued in this list when the buffer is full)
+* omem: output buffer memory usage
+* events: file descriptor events (see below)
+* cmd: last command played
+
+The client flags can be a combination of:
+
+```
+O: the client is a slave in MONITOR mode
+S: the client is a normal slave server
+M: the client is a master
+x: the client is in a MULTI/EXEC context
+b: the client is waiting in a blocking operation
+i: the client is waiting for a VM I/O (deprecated)
+d: a watched key has been modified - EXEC will fail
+c: connection to be closed after writing entire reply
+u: the client is unblocked
+A: connection to be closed ASAP
+N: no specific flag set
+```
+
+The file descriptor events can be:
+
+```
+r: the client socket is readable (event loop)
+w: the client socket is writable (event loop)
+```
+
+## Notes
+
+New fields are regularly added for debugging purposes. Some could be removed
+in the future. A version safe Redis client using this command should parse
+the output accordingly (i.e. handling missing fields gracefully, skipping
+unknown fields).
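Since each line is a space separated list of `field=value` pairs, the output lends itself to standard text tools. A hedged sketch, assuming a local instance on the default port (the exact fields present depend on the Redis version):

```
# show only the address and idle time of every connected client;
# as recommended above, match on field names rather than positions
redis-cli CLIENT LIST | grep -oE 'addr=[^ ]+|idle=[^ ]+'
```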
239 commands/info.md
@@ -1,53 +1,210 @@
The `INFO` command returns information and statistics about the server in a
format that is simple to parse by computers and easy to read by humans.
+The optional parameter can be used to select a specific section of information:
+
+* `server`: General information about the Redis server
+* `clients`: Client connections section
+* `memory`: Memory consumption related information
+* `persistence`: RDB and AOF related information
+* `stats`: General statistics
+* `replication`: Master/slave replication information
+* `cpu`: CPU consumption statistics
+* `commandstats`: Redis command statistics
+* `cluster`: Redis Cluster section
+* `keyspace`: Database related statistics
+
+It can also take the following values:
+
+* `all`: Return all sections
+* `default`: Return only the default set of sections
+
+When no parameter is provided, the `default` option is assumed.
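For example, asking for a single section keeps the reply short (the output below is illustrative only, field values will differ):

```
redis> INFO memory
# Memory
used_memory:768384
used_memory_human:750.38K
used_memory_rss:1536000
mem_fragmentation_ratio:2.00
...
```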
+
@return
-@bulk-reply: in the following format (compacted for brevity):
+@bulk-reply: as a collection of text lines.
-```
-redis_version:2.2.2
-uptime_in_seconds:148
-used_cpu_sys:0.01
-used_cpu_user:0.03
-used_memory:768384
-used_memory_rss:1536000
-mem_fragmentation_ratio:2.00
-changes_since_last_save:118
-keyspace_hits:174
-keyspace_misses:37
-allocation_stats:4=56,8=312,16=1498,...
-db0:keys=1240,expires=0
-```
+Lines can contain a section name (starting with a # character) or a property.
+All the properties are in the form of `field:value` terminated by `\r\n`.
-All the fields are in the form of `field:value` terminated by `\r\n`.
+```cli
+INFO
+```
## Notes
-* `used_memory` is the total number of bytes allocated by Redis using its
- allocator (either standard `libc` `malloc`, or an alternative allocator such
- as [`tcmalloc`][hcgcpgp]
-
-* `used_memory_rss` is the number of bytes that Redis allocated as seen by the
- operating system.
- Optimally, this number is close to `used_memory` and there is little memory
- fragmentation.
- This is the number reported by tools such as `top` and `ps`.
- A large difference between these numbers means there is memory
- fragmentation.
- Because Redis does not have control over how its allocations are mapped to
- memory pages, `used_memory_rss` is often the result of a spike in memory
- usage.
- The ratio between `used_memory_rss` and `used_memory` is given as
- `mem_fragmentation_ratio`.
-
-* `changes_since_last_save` refers to the number of operations that produced
- some kind of change in the dataset since the last time either `SAVE` or
- `BGSAVE` was called.
-
-* `allocation_stats` holds a histogram containing the number of allocations of
- a certain size (up to 256).
- This provides a means of introspection for the type of allocations performed
- by Redis at run time.
+Please note that, depending on the version of Redis, some of the fields have been
+added or removed. A robust client application should therefore parse the
+result of this command by skipping unknown properties and gracefully handling
+missing fields.
+
+Here is the description of fields for Redis >= 2.4.
+
+
+Here is the meaning of all fields in the **server** section:
+
+* `redis_version`: Version of the Redis server
+* `redis_git_sha1`: Git SHA1
+* `redis_git_dirty`: Git dirty flag
+* `os`: Operating system hosting the Redis server
+* `arch_bits`: Architecture (32 or 64 bits)
+* `multiplexing_api`: event loop mechanism used by Redis
+* `gcc_version`: Version of the GCC compiler used to compile the Redis server
+* `process_id`: PID of the server process
+* `run_id`: Random value identifying the Redis server (to be used by Sentinel and Cluster)
+* `tcp_port`: TCP/IP listen port
+* `uptime_in_seconds`: Number of seconds since Redis server start
+* `uptime_in_days`: Same value expressed in days
+* `lru_clock`: Clock incrementing every minute, for LRU management
+
+Here is the meaning of all fields in the **clients** section:
+
+* `connected_clients`: Number of client connections (excluding connections from slaves)
+* `client_longest_output_list`: longest output list among current client connections
+* `client_biggest_input_buf`: biggest input buffer among current client connections
+* `blocked_clients`: Number of clients pending on a blocking call (BLPOP, BRPOP, BRPOPLPUSH)
+
+Here is the meaning of all fields in the **memory** section:
+
+* `used_memory`: total number of bytes allocated by Redis using its
+  allocator (either standard **libc**, **jemalloc**, or an alternative allocator such
+  as [**tcmalloc**][hcgcpgp])
+* `used_memory_human`: Human readable representation of previous value
+* `used_memory_rss`: Number of bytes that Redis allocated as seen by the
+ operating system (a.k.a resident set size). This is the number reported by tools
+ such as **top** and **ps**.
+* `used_memory_peak`: Peak memory consumed by Redis (in bytes)
+* `used_memory_peak_human`: Human readable representation of previous value
+* `used_memory_lua`: Number of bytes used by the Lua engine
+* `mem_fragmentation_ratio`: Ratio between `used_memory_rss` and `used_memory`
+* `mem_allocator`: Memory allocator, chosen at compile time.
+
+Ideally, the `used_memory_rss` value should be only slightly higher than `used_memory`.
+When rss >> used, a large difference means there is memory fragmentation
+(internal or external), which can be evaluated by checking `mem_fragmentation_ratio`.
+When used >> rss, it means part of Redis memory has been swapped out by the operating
+system: expect some significant latencies.
+
+Because Redis does not have control over how its allocations are mapped to
+memory pages, high `used_memory_rss` is often the result of a spike in memory
+usage.
+
+When Redis frees memory, the memory is given back to the allocator, and the
+allocator may or may not give the memory back to the system. There may be
+a discrepancy between the `used_memory` value and memory consumption as
+reported by the operating system. It may be due to the fact that memory has been
+used and released by Redis, but not given back to the system. The `used_memory_peak`
+value is generally useful to check this point.
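To make the ratio concrete, here is a toy calculation (the values are purely illustrative):

```
used_memory:768384
used_memory_rss:1536000
# mem_fragmentation_ratio = 1536000 / 768384 = 2.00
# a ratio well above 1 points to fragmentation; a ratio below 1 points to swapping
```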
+
+Here is the meaning of all fields in the **persistence** section:
+
+* `loading`: Flag indicating if the load of a dump file is on-going
+* `rdb_changes_since_last_save`: Number of changes since the last dump
+* `rdb_bgsave_in_progress`: Flag indicating an RDB save is on-going
+* `rdb_last_save_time`: Epoch-based timestamp of last successful RDB save
+* `rdb_last_bgsave_status`: Status of the last RDB save operation
+* `rdb_last_bgsave_time_sec`: Duration of the last RDB save operation in seconds
+* `rdb_current_bgsave_time_sec`: Duration of the on-going RDB save operation if any
+* `aof_enabled`: Flag indicating AOF logging is activated
+* `aof_rewrite_in_progress`: Flag indicating an AOF rewrite operation is on-going
+* `aof_rewrite_scheduled`: Flag indicating an AOF rewrite operation
+ will be scheduled once the on-going RDB save is complete.
+* `aof_last_rewrite_time_sec`: Duration of the last AOF rewrite operation in seconds
+* `aof_current_rewrite_time_sec`: Duration of the on-going AOF rewrite operation if any
+* `aof_last_bgrewrite_status`: Status of the last AOF rewrite operation
+
+`rdb_changes_since_last_save` refers to the number of operations that produced
+some kind of change in the dataset since the last time either `SAVE` or
+`BGSAVE` was called.
+
+If AOF is activated, these additional fields will be added:
+
+* `aof_current_size`: AOF current file size
+* `aof_base_size`: AOF file size on latest startup or rewrite
+* `aof_pending_rewrite`: Flag indicating an AOF rewrite operation
+ will be scheduled once the on-going RDB save is complete.
+* `aof_buffer_length`: Size of the AOF buffer
+* `aof_rewrite_buffer_length`: Size of the AOF rewrite buffer
+* `aof_pending_bio_fsync`: Number of fsync pending jobs in background I/O queue
+* `aof_delayed_fsync`: Delayed fsync counter
+
+If a load operation is on-going, these additional fields will be added:
+
+* `loading_start_time`: Epoch-based timestamp of the start of the load operation
+* `loading_total_bytes`: Total file size
+* `loading_loaded_bytes`: Number of bytes already loaded
+* `loading_loaded_perc`: Same value expressed as a percentage
+* `loading_eta_seconds`: ETA in seconds for the load to be complete
+
+Here is the meaning of all fields in the **stats** section:
+
+* `total_connections_received`: Total number of connections accepted by the server
+* `total_commands_processed`: Total number of commands processed by the server
+* `instantaneous_ops_per_sec`: Number of commands processed per second
+* `rejected_connections`: Number of connections rejected because of maxclients limit
+* `expired_keys`: Total number of key expiration events
+* `evicted_keys`: Number of evicted keys due to maxmemory limit
+* `keyspace_hits`: Number of successful lookups of keys in the main dictionary
+* `keyspace_misses`: Number of failed lookups of keys in the main dictionary
+* `pubsub_channels`: Global number of pub/sub channels with client subscriptions
+* `pubsub_patterns`: Global number of pub/sub patterns with client subscriptions
+* `latest_fork_usec`: Duration of the latest fork operation in microseconds
+
+Here is the meaning of all fields in the **replication** section:
+
+* `role`: Value is "master" if the instance is slave of no one, or "slave" if the instance is enslaved to a master.
+ Note that a slave can be master of another slave (daisy chaining).
+
+If the instance is a slave, these additional fields are provided:
+
+* `master_host`: Host or IP address of the master
+* `master_port`: Master listening TCP port
+* `master_link_status`: Status of the link (up/down)
+* `master_last_io_seconds_ago`: Number of seconds since the last interaction with master
+* `master_sync_in_progress`: Indicates that the master is SYNCing to the slave
+
+If a SYNC operation is on-going, these additional fields are provided:
+
+* `master_sync_left_bytes`: Number of bytes left before SYNCing is complete
+* `master_sync_last_io_seconds_ago`: Number of seconds since last transfer I/O during a SYNC operation
+
+If the link between master and slave is down, an additional field is provided:
+
+* `master_link_down_since_seconds`: Number of seconds since the link has been down
+
+The following field is always provided:
+
+* `connected_slaves`: Number of connected slaves
+
+For each slave, the following line is added:
+
+* `slaveXXX`: id, ip address, port, state
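For instance, the replication section of a master with a single attached slave could look like this (address, port and state are hypothetical):

```
role:master
connected_slaves:1
slave0:10.0.0.2,6380,online
```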
+
+Here is the meaning of all fields in the **cpu** section:
+
+* `used_cpu_sys`: System CPU consumed by the Redis server
+* `used_cpu_user`: User CPU consumed by the Redis server
+* `used_cpu_sys_children`: System CPU consumed by the background processes
+* `used_cpu_user_children`: User CPU consumed by the background processes
+
+The **commandstats** section provides statistics based on the command type,
+including the number of calls, the total CPU time consumed by these commands,
+and the average CPU consumed per command execution.
+
+For each command type, the following line is added:
+
+* `cmdstat_XXX`:calls=XXX,usec=XXX,usec_per_call=XXX
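A concrete line might read (figures are illustrative only):

```
cmdstat_get:calls=21,usec=175,usec_per_call=8.33
```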
+
+The **cluster** section currently contains only a single field:
+
+* `cluster_enabled`: Indicates that Redis Cluster is enabled
+
+The **keyspace** section provides statistics on the main dictionary of each database.
+The statistics are the number of keys, and the number of keys with an expiration.
+
+For each database, the following line is added:
+
+* `dbXXX`:keys=XXX,expires=XXX
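For example, a database holding 1240 keys, none of them with an expiration set, would show up as:

```
db0:keys=1240,expires=0
```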
[hcgcpgp]: http://code.google.com/p/google-perftools/
4 commands/pexpire.md
@@ -1,7 +1,3 @@
-@complexity
-
-O(1)
-
This command works exactly like `EXPIRE` but the time to live of the key is
specified in milliseconds instead of seconds.
4 commands/pexpireat.md
@@ -1,7 +1,3 @@
-@complexity
-
-O(1)
-
`PEXPIREAT` has the same effect and semantic as `EXPIREAT`, but the Unix time at
which the key will expire is specified in milliseconds instead of seconds.
4 commands/psetex.md
@@ -1,7 +1,3 @@
-@complexity
-
-O(1)
-
`PSETEX` works exactly like `SETEX` with the sole difference that the expire
time is specified in milliseconds instead of seconds.
4 commands/pttl.md
@@ -1,7 +1,3 @@
-@complexity
-
-O(1)
-
Like `TTL` this command returns the remaining time to live of a key that has an
expire set, with the sole difference that `TTL` returns the amount of remaining
time in seconds while `PTTL` returns it in milliseconds.
4 commands/time.md
@@ -1,7 +1,3 @@
-@complexity
-
-O(1)
-
The `TIME` command returns the current server time as a two-item list: a Unix
timestamp and the number of microseconds already elapsed in the current second.
Basically the interface is very similar to the one of the `gettimeofday` system
BIN  topics/Data_size.png
69 topics/benchmarks.md
@@ -9,23 +9,27 @@ The following options are supported:
Usage: redis-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]
- -h <hostname> Server hostname (default 127.0.0.1)
- -p <port> Server port (default 6379)
- -s <socket> Server socket (overrides host and port)
- -c <clients> Number of parallel connections (default 50)
- -n <requests> Total number of requests (default 10000)
- -d <size> Data size of SET/GET value in bytes (default 2)
- -k <boolean> 1=keep alive 0=reconnect (default 1)
- -r <keyspacelen> Use random keys for SET/GET/INCR, random values for SADD
+ -h <hostname> Server hostname (default 127.0.0.1)
+ -p <port> Server port (default 6379)
+ -s <socket> Server socket (overrides host and port)
+ -c <clients> Number of parallel connections (default 50)
+ -n <requests> Total number of requests (default 10000)
+ -d <size> Data size of SET/GET value in bytes (default 2)
+ -k <boolean> 1=keep alive 0=reconnect (default 1)
+ -r <keyspacelen> Use random keys for SET/GET/INCR, random values for SADD
Using this option the benchmark will get/set keys
- in the form mykey_rand000000012456 instead of constant
+ in the form mykey_rand:000000012456 instead of constant
keys, the <keyspacelen> argument determines the max
number of values for the random number. For instance
- if set to 10 only rand000000000000 - rand000000000009
+ if set to 10 only rand:000000000000 - rand:000000000009
range will be allowed.
- -q Quiet. Just show query/sec values
- -l Loop. Run the tests forever
- -I Idle mode. Just open N idle connections and wait.
+ -P <numreq> Pipeline <numreq> requests. Default 1 (no pipeline).
+ -q Quiet. Just show query/sec values
+ --csv Output in CSV format
+ -l Loop. Run the tests forever
+ -t <tests> Only run the comma separated list of tests. The test
+ names are the same as the ones produced as output.
+ -I Idle mode. Just open N idle connections and wait.
You need to have a running Redis instance before launching the benchmark.
A typical example would be:
@@ -65,11 +69,6 @@ multiple CPU cores. People are supposed to launch several Redis instances to
scale out on several cores if needed. It is not really fair to compare one
single Redis instance to a multi-threaded data store.
-Then the benchmark should do the same operations, and work in the same way with
-the multiple data stores you want to compare. It is absolutely pointless to
-compare the result of redis-benchmark to the result of another benchmark
-program and extrapolate.
-
A common misconception is that redis-benchmark is designed to make Redis
performance look stellar, the throughput achieved by redis-benchmark being
somewhat artificial, and not achievable by a real application. This is
@@ -77,13 +76,23 @@ actually plain wrong.
The redis-benchmark program is a quick and useful way to get some figures and
evaluate the performance of a Redis instance on a given hardware. However,
-it does not represent the maximum throughput a Redis instance can sustain.
-Actually, by using pipelining and a fast client (hiredis), it is fairly easy
-to write a program generating more throughput than redis-benchmark. The current
-version of redis-benchmark achieves throughput by exploiting concurrency only
-(i.e. it creates several connections to the server). It does not use pipelining
-or any parallelism at all (one pending query per connection at most, and
-no multi-threading).
+by default, it does not represent the maximum throughput a Redis instance can
+sustain. Actually, by using pipelining and a fast client (hiredis), it is fairly
+easy to write a program generating more throughput than redis-benchmark. The
+default behavior of redis-benchmark is to achieve throughput by exploiting
+concurrency only (i.e. it creates several connections to the server).
+It does not use pipelining or any parallelism at all (one pending query per
+connection at most, and no multi-threading).
+
+To run a benchmark using pipelining mode (and achieve higher throughput),
+you need to explicitly use the -P option. Please note that this is still
+realistic behavior, since many Redis based applications actively use
+pipelining to improve performance.
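As a hedged illustration using the options documented above (figures depend entirely on the hardware and configuration), the same tests can be run with and without pipelining:

```
# default mode: at most one pending query per connection
redis-benchmark -q -n 100000 -t get,set

# pipelined mode: 16 pending queries per connection
redis-benchmark -q -n 100000 -t get,set -P 16
```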
+
+Finally, the benchmark should apply the same operations, and work in the same way
+with the multiple data stores you want to compare. It is absolutely pointless to
+compare the result of redis-benchmark to the result of another benchmark
+program and extrapolate.
For instance, Redis and memcached in single-threaded mode can be compared on
GET/SET operations. Both are in-memory data stores, working mostly in the same
@@ -153,6 +162,16 @@ the TCP/IP loopback and unix domain sockets can be used. It depends on the
platform, but unix domain sockets can achieve around 50% more throughput than
the TCP/IP loopback (on Linux for instance). The default behavior of
redis-benchmark is to use the TCP/IP loopback.
++ The performance benefit of unix domain sockets compared to TCP/IP loopback
+tends to decrease when pipelining is heavily used (i.e. long pipelines).
++ When an ethernet network is used to access Redis, aggregating commands using
+pipelining is especially efficient when the size of the data is kept under
+the ethernet packet size (about 1500 bytes). Actually, processing 10 byte,
+100 byte, or 1000 byte queries results in almost the same throughput.
+See the graph below.
+
+![Data size impact](https://github.com/dspezia/redis-doc/raw/client_command/topics/Data_size.png)
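The effect can be checked with the `-d` option documented above, which sets the size of the SET/GET payload (the sizes here are only examples):

```
# compare throughput for 10, 100 and 1000 byte values
redis-benchmark -q -n 100000 -t get,set -d 10
redis-benchmark -q -n 100000 -t get,set -d 100
redis-benchmark -q -n 100000 -t get,set -d 1000
```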
+
+ On multi-CPU socket servers, Redis performance becomes dependent on the
NUMA configuration and process location. The most visible effect is that
redis-benchmark results seem non-deterministic because client and server