
Documentation prettifying #44

Closed
wants to merge 1 commit

2 participants

@veganstraightedge
  • Fixed some typos (spelling mistakes).
  • Added .md extension to doc files.
  • Added some Markdown formatting. (For example, using `` around code or commands puts it in a monospace font in a slightly different color).

Compare the README at https://github.com/veganstraightedge/redis to https://github.com/antirez/redis

@mattsta closed this
Commits on Feb 22, 2011
  1. @veganstraightedge authored

     converted doc files to .md, fixed typos, added some markdown bits. for auto github pretty documentation pages
0  00-RELEASENOTES → 00-RELEASENOTES.md
File renamed without changes
0  CONTRIBUTING → CONTRIBUTING.md
File renamed without changes
6 COPYING → COPYING.md
@@ -3,8 +3,8 @@ All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- * Neither the name of Redis nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
+* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
+* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
+* Neither the name of Redis nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
8 INSTALL → INSTALL.md
@@ -2,19 +2,19 @@ To compile Redis, do the following:
cd src; make
-The compilation will produce a redis-server binary.
+The compilation will produce a `redis-server` binary.
To install Redis, use
make install
-and all the binaries will be installed on /usr/local/bin.
+and all the binaries will be installed on `/usr/local/bin`.
Alternatively:
make PREFIX=/some/other/directory
-to have the binaries in /some/other/directory/bin.
+to have the binaries in `/some/other/directory/bin`.
Run the server using the following command line:
@@ -26,5 +26,5 @@ Otherwise if you want to provide your configuration use:
/path/to/redis-server /path/to/redis.conf
-You can find an example redis.conf file in the root directory
+You can find an example `redis.conf` file in the root directory
of this source distribution.
25 README → README.md
@@ -1,11 +1,11 @@
Where to find complete Redis documentation?
--------------------------------------------
+===========================================
-This README is just a fast "quick start" document. You can find more detailed
+This `README` is just a fast "quick start" document. You can find more detailed
documentation here:
-1) http://code.google.com/p/redis
-2) Check the 'doc' directory. doc/README.html is a good starting point :)
+1. http://code.google.com/p/redis
+2. Check the `doc` directory. `doc/README.html` is a good starting point :)
Building Redis
--------------
@@ -15,8 +15,8 @@ It is as simple as:
% make
Redis is just a single binary, but if you want to install it you can use
-the "make install" target that will copy the binary in /usr/local/bin
-for default. You can also use "make PREFIX=/some/other/directory install"
+the `make install` target that will copy the binary in `/usr/local/bin`
+for default. You can also use `make PREFIX=/some/other/directory install`
if you wish to use a different destination.
You can run a 32 bit Redis binary using:
@@ -30,8 +30,8 @@ After you build Redis is a good idea to test it, using:
Buliding using tcmalloc
-----------------------
-tcmalloc is a fast and space efficient implementation (for little objects)
-of malloc(). Compiling Redis with it can improve performances and memeory
+`tcmalloc` is a fast and space efficient implementation (for little objects)
+of `malloc()`. Compiling Redis with it can improve performances and memeory
usage. You can read more about it here:
http://goog-perftools.sourceforge.net/doc/tcmalloc.html
@@ -42,7 +42,7 @@ and then use:
% make USE_TCMALLOC=yes
Note that you can pass any other target to make, as long as you append
-USE_TCMALLOC=yes at the end.
+`USE_TCMALLOC=yes` at the end.
Running Redis
-------------
@@ -51,7 +51,7 @@ To run Redis with the default configuration just type:
% cd src
% ./redis-server
-
+
If you want to provide your redis.conf, you have to run it using an additional
parameter (the path of the configuration file):
@@ -61,7 +61,7 @@ parameter (the path of the configuration file):
Playing with Redis
------------------
-You can use redis-cli to play with Redis. Start a redis-server instance,
+You can use `redis-cli` to play with Redis. Start a `redis-server` instance,
then in another terminal try the following:
% cd src
@@ -76,11 +76,10 @@ then in another terminal try the following:
(integer) 1
redis> incr mycounter
(integer) 2
- redis>
+ redis>
You can find the list of all the available commands here:
http://code.google.com/p/redis/wiki/CommandReference
Enjoy!
-
58 TODO
@@ -1,58 +0,0 @@
-Redis TODO
-----------
-
-WARNING: are you a possible Redis contributor?
- Before implementing what is listed what is listed in this file
- please drop a message in the Redis google group or chat with
- antirez or pietern on irc.freenode.org #redis to check if the work
- is already in progress and if the feature is still interesting for
- us, and *how* exactly this can be implemented to have good changes
- of a merge. Otherwise it is probably wasted work! Thank you
-
-DISKSTORE TODO
-==============
-
-* Check that 00/00 and ff/ff exist at startup, otherwise exit with error.
-* Implement sync flush option, where data is written synchronously on disk when a command is executed.
-* Implement MULTI/EXEC as transaction abstract API to diskstore.c, with transaction_start, transaction_end, and a journal to recover.
-* Stop BGSAVE thread on shutdown and any other condition where the child is killed during normal bgsave.
-* Fix RANDOMKEY to really do something interesting
-* Fix DBSIZE to really do something interesting
-* Add a DEBUG command to check if an entry is or not in memory currently
-
-REPLICATION
-===========
-
-* PING between master and slave from time to time, so we can subject the
-master-slave link to timeout, and detect when the connection is gone even
-if the socket is still up.
-
-OPTIMIZATIONS
-=============
-
-* SORT: Don't copy the list into a vector when BY argument is constant.
-* Write the hash table size of every db in the dump, so that Redis can resize the hash table just one time when loading a big DB.
-* Read-only mode for slaves.
-
-REPORTING
-=========
-
-* Better INFO output with sections.
-
-RANDOM
-======
-
-* Clients should be closed as far as the output buffer list is bigger than a given number of elements (configurable in redis.conf)
-* Should the redis default configuration, and the default redis.conf, just bind 127.0.0.1?
-
-KNOWN BUGS
-==========
-
-* What happens in the following scenario:
- 1) We are reading an AOF file.
- 2) SETEX FOO 5 BAR
- 3) APPEND FOO ZAP
- What happens if between 1 and 2 for some reason (system under huge load
- or alike) too many time passes? We should prevent expires while the
- AOF is loading.
-
58 TODO.md
@@ -0,0 +1,58 @@
+Redis TODO
+==========
+
+**WARNING** Are you a possible Redis contributor?
+Before implementing what is listed what is listed in this file please drop a
+message in the Redis google group or chat with antirez or pietern on
+irc.freenode.org #redis to check if the work is already in progress and
+if the feature is still interesting for us, and *how* exactly this can be
+implemented to have good changes of a merge.
+Otherwise it is probably wasted work! Thank you
+
+DISKSTORE TODO
+--------------
+
+* Check that `00/00` and `ff/ff` exist at startup, otherwise exit with error.
+* Implement `sync flush` option, where data is written synchronously on disk when a command is executed.
+* Implement `MULTI/EXEC` as transaction abstract API to `diskstore.c`, with `transaction_start`, `transaction_end`, and a journal to recover.
+* Stop `BGSAVE` thread on shutdown and any other condition where the child is killed during normal `bgsave`.
+* Fix `RANDOMKEY` to really do something interesting
+* Fix `DBSIZE` to really do something interesting
+* Add a `DEBUG` command to check if an entry is or not in memory currently
+
+REPLICATION
+-----------
+
+* `PING` between master and slave from time to time, so we can subject the
+master-slave link to timeout, and detect when the connection is gone even
+if the socket is still up.
+
+OPTIMIZATIONS
+-------------
+
+* **SORT**: Don't copy the list into a vector when BY argument is constant.
+* Write the hash table size of every db in the dump, so that Redis can resize the hash table just one time when loading a big DB.
+* Read-only mode for slaves.
+
+REPORTING
+---------
+
+* Better INFO output with sections.
+
+RANDOM
+------
+
+* Clients should be closed as far as the output buffer list is bigger than a given number of elements (configurable in `redis.conf`)
+* Should the redis default configuration, and the default `redis.conf`, just bind `127.0.0.1`?
+
+KNOWN BUGS
+----------
+
+* What happens in the following scenario:
+ 1. We are reading an AOF file.
+ 2. SETEX FOO 5 BAR
+ 3. APPEND FOO ZAP
+
+ What happens if between 1 and 2 for some reason (system under huge load
+ or alike) too many time passes? We should prevent expires while the
+ AOF is loading.
0  client-libraries/README → client-libraries/README.md
File renamed without changes
2  deps/hiredis/README.md
@@ -1,6 +1,6 @@
# HIREDIS
-Hiredis is a minimalistic C client library for the [Redis](http://redis.io/) database.
+Hiredis is a minimalistic C client library for the [Redis](http://redis.io) database.
It is minimalistic because it just adds minimal support for the protocol, but
at the same time it uses an high level printf-alike API in order to make it
81 design-documents/REDIS-CLUSTER-2 → design-documents/REDIS-CLUSTER-2.md
@@ -54,11 +54,11 @@ Data nodes
Data nodes are normal Redis instances, but a few additional commands are
provided.
-HASHRING ADD ... list of hash slots ...
-HASHRING DEL ... list of hash slots ...
-HASHRING REHASHING slot
-HASHRING SLOTS => returns the list of configured slots
-HSAHRING KEYS ... list of hash slots ...
+ HASHRING ADD ... list of hash slots ...
+ HASHRING DEL ... list of hash slots ...
+ HASHRING REHASHING slot
+ HASHRING SLOTS => returns the list of configured slots
+ HSAHRING KEYS ... list of hash slots ...
By default Redis instances are configured to accept operations about all
the hash slots. With this commands it's possible to configure a Redis instance
@@ -67,24 +67,24 @@ to accept only a subset of the key space.
If an operation is performed against a key hashing to a slot that is not
configured to be accepted, the Redis instance will reply with:
- "-ERR wrong hash slot"
+ -ERR wrong hash slot
-More details on the HASHRING command and sub commands will be showed later
+More details on the `HASHRING` command and sub commands will be showed later
in this document.
Additionally three other commands are added:
-DUMP key
-RESTORE key <dump data>
-MIGRATE key host port
+ DUMP key
+ RESTORE key <dump data>
+ MIGRATE key host port
-DUMP is used to output a very compact binary representation of the data stored at key.
+`DUMP` is used to output a very compact binary representation of the data stored at key.
-RESTORE re-creates a value (storing it at key) starting from the output produced by DUMP.
+`RESTORE` re-creates a value (storing it at key) starting from the output produced by DUMP.
-MIGRATE is like a server-side DUMP+RESTORE command. This atomic command moves one key from the connected instance to another instance, returning the status code of the operation (+OK or an error).
+`MIGRATE` is like a server-side DUMP+RESTORE command. This atomic command moves one key from the connected instance to another instance, returning the status code of the operation (+OK or an error).
-The protocol described in this draft only uses the MIGRATE command, but this in turn will use RESTORE internally when connecting to another server, and DUMP is provided for symmetry.
+The protocol described in this draft only uses the `MIGRATE` command, but this in turn will use `RESTORE` internally when connecting to another server, and DUMP is provided for symmetry.
Querying the cluster
====================
@@ -102,8 +102,8 @@ into memory. The cluster configuration is the sum of the following info:
hash slot 3 -> node 3
... and so forth ...
- Physical address of nodes, and their replicas.
- node 0 addr -> 192.168.1.100
- node 0 replicas -> 192.168.1.101, 192.168.1.105
+ `node 0 addr -> 192.168.1.100`
+ `node 0 replicas -> 192.168.1.101, 192.168.1.105`
- Configuration version: the SHA1 of the whole configuration
The configuration is stored in every single data node of the cluster.
@@ -127,7 +127,7 @@ configuration version matches the one loaded in memory.
Also a client is required to refresh the configuration every time a node
replies with:
- "-ERR wrong hash slot"
+ -ERR wrong hash slot
As this means that hash slots were reassigned in some way.
@@ -140,12 +140,12 @@ to time is going to have no impact in the overall performance.
-------------
To perform a read query the client hashes the key argument from the command
-(in the intiial version of Redis Cluster only single-key commands are
+(in the initial version of Redis Cluster only single-key commands are
allowed). Using the in memory configuration it maps the hash key to the
node ID.
If the client is configured to support read-after-write consistency, then
-the "master" node for this hash slot is queried.
+the `master` node for this hash slot is queried.
Otherwise the client picks a random node from the master and the replicas
available.
@@ -159,7 +159,7 @@ write always targets the master node, instead of the replicas.
Creating a cluster
==================
-In order to create a new cluster, the redis-cluster command line utility is
+In order to create a new cluster, the `redis-cluster` command line utility is
used. It gets a list of available nodes and replicas, in order to write the
initial configuration in all the nodes.
@@ -168,22 +168,21 @@ At this point the cluster is usable by clients.
Adding nodes to the cluster
===========================
-The command line utility redis-cluster is used in order to add a node to the
+The command line utility `redis-cluster` is used in order to add a node to the
cluster:
-1) The cluster configuration is loaded.
-2) A fair number of hash slots are assigned to the new data node.
-3) Hash slots moved to the new node are marked as "REHASHING" in the old
- nodes, using the HASHRING command:
+1. The cluster configuration is loaded.
+2. A fair number of hash slots are assigned to the new data node.
+3. Hash slots moved to the new node are marked as "REHASHING" in the old nodes, using the HASHRING command:
- HASHRING SETREHASHING 1 192.168.1.103 6380
+ `HASHRING SETREHASHING 1 192.168.1.103 6380`
The above command set the hash slot "1" in rehashing state, with the
"forwarding address" to 192.168.1.103:6380. As a result if this node receives
a query about a key hashing to hash slot 1, that *is not present* in the
current data set, it replies with:
- "-MIGRATED 192.168.1.103:6380"
+ -MIGRATED 192.168.1.103:6380
The client can then reissue the query against the new node.
@@ -194,14 +193,12 @@ rehashing.
Note that no additional memory is used by Redis in order to provide such a
feature.
-4) While the Hash slot is marked as "REHASHING", redis-cluster asks this node
-the list of all the keys matching the specified hash slot. Then all the keys
-are moved to the new node using the MIGRATE command.
-5) Once all the keys are migrated, the hash slot is deleted from the old
-node configuration with "HASHRING DEL 1". And the configuration is update.
+4. While the Hash slot is marked as `REHASHING`, `redis-cluster` asks this node the list of all the keys matching the specified hash slot. Then all the keys are moved to the new node using the `MIGRATE` command.
+5. Once all the keys are migrated, the hash slot is deleted from the old node configuration with `HASHRING DEL 1`. And the configuration is update.
-Using this algorithm all the hash slots are migrated one after the other to the new node. In practical implementation before to start the migration the
-redis-cluster utility should write a log into the configuration so that
+Using this algorithm all the hash slots are migrated one after the other to the new node.
+In practical implementation before to start the migration the
+`redis-cluster` utility should write a log into the configuration so that
in case of crash or any other problem the utility is able to recover from
were it left.
@@ -217,9 +214,9 @@ signaling it to all the other clients.
When a master node is failing in a permanent way, promoting the first slave
is easy:
-1) At some point a client will notice there are problems accessing a given node. It will try to refresh the config, but will notice that the config is already up to date.
-2) In order to make sure the problem is not about the client connectivity itself, it will try to reach other nodes as well. If more than M-1 nodes appear to be down, it's either a client networking problem or alternatively the cluster can't be fixed as too many nodes are down anyway. So no action is taken, but an error is reported.
-3) If instead only 1 or at max M-1 nodes appear to be down, the client promotes a slave as master and writes the new configuration to all the data nodes.
+1. At some point a client will notice there are problems accessing a given node. It will try to refresh the config, but will notice that the config is already up to date.
+2. In order to make sure the problem is not about the client connectivity itself, it will try to reach other nodes as well. If more than M-1 nodes appear to be down, it's either a client networking problem or alternatively the cluster can't be fixed as too many nodes are down anyway. So no action is taken, but an error is reported.
+3. If instead only 1 or at max M-1 nodes appear to be down, the client promotes a slave as master and writes the new configuration to all the data nodes.
All the other clients will see the data node not working, and as a first step will try to refresh the configuration. They will successful refresh the configuration and the cluster will work again.
@@ -241,12 +238,12 @@ cluster and update if needed).
One way to fix this problem is to delegate the fail over mechanism to a
failover agent. When clients notice problems will not take any active action
-but will just log the problem into a redis list in all the reachable nodes,
+but will just log the problem into a `redis` list in all the reachable nodes,
wait, check for configuration change, and retry.
The failover agent constantly monitor this logs: if some client is reporting
a failing node, it can take appropriate actions, checking if the failure is
-permanent or not. If it's not he can send a SHUTDOWN command to the failing
+permanent or not. If it's not he can send a `SHUTDOWN` command to the failing
master if possible. The failover agent can also consider better the problem
checking if the failing mode is advertised by all the clients or just a single
one, and can check itself if there is a real problem before to proceed with
@@ -261,7 +258,7 @@ usual Redis client lib protocol (where a minimal lib can be as small as
100 lines of code), a proxy will be provided to implement the cluster protocol
as a proxy.
-Every client will talk to a redis-proxy node that is responsible of using
+Every client will talk to a `redis-proxy` node that is responsible of using
the new protocol and forwarding back the replies.
In the long run the aim is to switch all the major client libraries to the
@@ -307,7 +304,7 @@ For instance all the nodes may take a list of errors detected by clients.
If Client-1 detects some failure accessing Node-3, for instance a connection
refused error or a timeout, it logs what happened with LPUSH commands against
-all the other nodes. This "error messages" will have a timestamp and the Node
+all the other nodes. This "error message" will have a timestamp and the Node
id. Something like:
LPUSH __cluster__:errors 3:1272545939
@@ -328,7 +325,7 @@ refresh the configuration before a new access.
The config hint may be something like:
-"we are switching to a new master, that is x.y.z.k:port, in a few seconds"
+ we are switching to a new master, that is x.y.z.k:port, in a few seconds
When a client updates the config and finds such a flag set, it starts to
continuously refresh the config until a change is noticed (this will take
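The client behavior this design document describes on a `-ERR wrong hash slot` reply (refresh the slot configuration, then reissue the query) can be sketched in Python. This is a minimal illustration, not part of Redis: `ClusterClient` and the `config_loader`/`executor` callables are hypothetical names standing in for a real configuration fetch and a real Redis connection.

```python
# Hypothetical sketch of the client-side redirection handling described
# in the REDIS-CLUSTER-2 draft. On "-ERR wrong hash slot" the client
# refreshes its slot -> node map and retries the query once.

class ClusterClient:
    def __init__(self, config_loader, executor):
        self.load_config = config_loader   # returns a {slot: node} mapping
        self.execute = executor            # (node, command) -> reply string
        self.slots = self.load_config()    # initial cluster configuration

    def query(self, slot, command):
        reply = self.execute(self.slots[slot], command)
        if reply == "-ERR wrong hash slot":
            # Hash slots were reassigned: refresh the config and retry.
            self.slots = self.load_config()
            reply = self.execute(self.slots[slot], command)
        return reply
```

A real client would also bound the number of retries and handle the `-MIGRATED host:port` reply the draft describes during rehashing.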
72 design-documents/REDIS-CLUSTER → design-documents/REDIS-CLUSTER.md
@@ -12,11 +12,11 @@ sub-dictionaries (hashes) and so forth.
While Redis is very fast, currently it lacks scalability in the form of ability
to transparently run across different nodes. This is desirable mainly for the
-following three rasons:
+following three reasons:
-A) Fault tolerance. Some node may go off line without affecting the operations.
-B) Holding bigger datasets without using a single box with a lot of RAM.
-C) Scaling writes.
+1. Fault tolerance. Some node may go off line without affecting the operations.
+2. Holding bigger datasets without using a single box with a lot of RAM.
+3. Scaling writes.
Since a single Redis instance supports 140,000 operations per second in a good
Linux box costing less than $1000, the need for Redis Cluster arises more
@@ -33,8 +33,8 @@ Still a Dynamo alike DHT may not be the best fit for Redis.
Redis is very simple and fast at its core, so Redis cluster should try to
follow the same guidelines. The first problem with a Dynamo-alike DHT is that
-Redis supports complex data types. Merging complex values like lsits, where
-in the case of a netsplit may diverge in very complex ways, is not going to
+Redis supports complex data types. Merging complex values like lists, where
+in the case of a `netsplit` may diverge in very complex ways, is not going to
be easy. The "most recent data" wins is not applicable and all the resolution
business should be in the application.
@@ -43,7 +43,7 @@ values. Writing code in order to resolve conflicts is not going to be
programmer friendly.
So the author of this document claims that Redis does not need to resist to
-netsplits, but it is enough to resist to M-1 nodes going offline, where
+`netsplits`, but it is enough to resist to M-1 nodes going offline, where
M is the number of nodes storing every key-value pair.
For instance in a three nodes cluster I may configure the cluster in order to
@@ -64,24 +64,24 @@ Instead a more decoupled approach can be used, in the form of a Redis Proxy
node (or multiple Proxy nodes) that is contacted by clients, and
is responsible of forwarding queries and replies back and forth from data nodes.
-Data nodes can be just vanilla redis-server instances.
+Data nodes can be just vanilla `redis-server` instances.
Network layout
==============
- - One ore more Data Nodes. Every node is identified by ip:port.
- - A single Configuration Node.
- - One more more Proxy Nodes (redis-cluster nodes).
- - A single Handling Node.
+- One ore more Data Nodes. Every node is identified by ip:port.
+- A single Configuration Node.
+- One more more Proxy Nodes (redis-cluster nodes).
+- A single Handling Node.
-Data Nodes and the Configuration Node are just vanilla redis-server instances.
+Data Nodes and the Configuration Node are just vanilla `redis-server` instances.
Configuration Node
==================
- - Contains information about all the Data nodes in the cluster.
- - Contains information about all the Proxy nodes in the cluster.
- - Contains information about what Data Node holds a given sub-space of keys.
+- Contains information about all the Data nodes in the cluster.
+- Contains information about all the Proxy nodes in the cluster.
+- Contains information about what Data Node holds a given sub-space of keys.
The keyspace is divided into 1024 different "hashing slots".
(1024 is just an example, this value should be configurable)
@@ -90,7 +90,7 @@ Given a key perform SHA1(key) and use the last 10 bits of the result to get a 10
The Configuration node maps every slot of the keyspace to M different Data Nodes (every key is stored into M nodes, configurable).
-The Configuration node can be modified by a single client at a time. Locking is performed using SETNX.
+The Configuration node can be modified by a single client at a time. Locking is performed using `SETNX`.
The Configuration node should be replicated as there is a single configuration node for the whole network. It is the only single point of failure of the system.
When a Configuration node fails the cluster does not stop operating, but is not
@@ -114,7 +114,7 @@ Configuration Node. This connections are keep alive with PING requests from time
to time if there is no traffic. This way Proxy Nodes can understand asap if
there is a problem in some Data Node or in the Configuration Node.
-When a Proxy Node is started it needs to know the Configuration node address in order to load the infomration about the Data nodes and the mapping between the key space and the nodes.
+When a Proxy Node is started it needs to know the Configuration node address in order to load the information about the Data nodes and the mapping between the key space and the nodes.
On startup a Proxy Node will also register itself in the Configuration node, and will make sure to refresh it's configuration every N seconds (via an EXPIREing key) so that it's possible to detect when a Proxy node fails.
@@ -126,35 +126,35 @@ The Proxy Node is also in charge of signaling failing Data nodes to the Configur
When a new Data node joins or leaves the cluster, and in general when the cluster configuration changes, all the Proxy nodes will receive a notification and will reload the configuration from the Configuration node.
-Proxy Nodes - how queries are submited
-======================================
+Proxy Nodes - how queries are submitted
+=======================================
This is how a query is processed:
-1) A client sends a query to a Proxy Node, using the Redis protocol like if it was a plain Redis Node.
-2) The Proxy Node inspects the command arguments to detect the key. The key is hashed. The Proxy Node has the table mapping a given key to M nodes, and persistent connections to all the nodes.
+1. A client sends a query to a Proxy Node, using the Redis protocol like if it was a plain Redis Node.
+2. The Proxy Node inspects the command arguments to detect the key. The key is hashed. The Proxy Node has the table mapping a given key to M nodes, and persistent connections to all the nodes.
At this point the process is different in case of read or write queries:
-WRITE QUERY:
+### WRITE QUERY
-3a) The Proxy Node forwards the query to M Data Nodes at the same time, waiting for replies.
-3b) Once all the replies are received the Proxy Node checks that the replies are consistent. For instance all the M nodes need to reply with OK and so forth. If the query fails in a subset of nodes but succeeds in other nodes, the failing nodes are considered unreliable and are put off line notifying the configuration node.
-3c) The reply is transfered back to the client.
+3a. The Proxy Node forwards the query to M Data Nodes at the same time, waiting for replies.
+3b. Once all the replies are received the Proxy Node checks that the replies are consistent. For instance all the M nodes need to reply with OK and so forth. If the query fails in a subset of nodes but succeeds in other nodes, the failing nodes are considered unreliable and are put off line notifying the configuration node.
+3c. The reply is transfered back to the client.
-READ QUERY:
+### READ QUERY
-3d) The Proxy Node forwards the query to a single random client, passing the reply back to the client.
+3d. The Proxy Node forwards the query to a single random client, passing the reply back to the client.
Handling Node
=============
The handling node is a special Redis client with the following role:
- - Handles the cluster configuration stored in the Config node.
- - Is in charge for adding and removing nodes dynamically from the net.
- - Relocates keys on nodes additions / removal.
- - Signal a configuration change to Proxy nodes.
+- Handles the cluster configuration stored in the Config node.
+- Is in charge for adding and removing nodes dynamically from the net.
+- Relocates keys on nodes additions / removal.
+- Signal a configuration change to Proxy nodes.
More details on hashing slots
============================
@@ -170,7 +170,7 @@ Every hashing slot is actually a Redis list, containing a single or more ip:port
hashingslot:10 => 192.168.1.19:6379, 192.168.1.200:6379
-This mean that keys hashing to slot 10 will be saved in the two Data nodes 192.168.1.19:6379 and 192.168.1.200:6379.
+This mean that keys hashing to slot 10 will be saved in the two Data nodes `192.168.1.19:6379` and `192.168.1.200:6379`.
When a client performs a read operation (via a proxy node), the proxy will contact a random Data node among the data nodes in charge for the given slot.
@@ -178,7 +178,7 @@ For instance a client can ask for the following operation to a given Proxy node:
GET mykey
-"mykey" hashes to (for instance) slot 10, so the Proxy will forward the request to either Data node 192.168.1.19:6379 or 192.168.1.200:6379, and then forward back the reply to the client.
+`mykey` hashes to (for instance) slot 10, so the Proxy will forward the request to either Data node `192.168.1.19:6379` or `192.168.1.200:6379`, and then forward back the reply to the client.
When a write operation is performed, it is forwarded to both the Data nodes in the example (and in general to all the data nodes).
@@ -196,9 +196,9 @@ For instance let's assume there are already two Data nodes in the cluster:
192.168.1.1:6379
192.168.1.2:6379
-We add a new node 192.168.1.3:6379 via the LPUSH operation.
+We add a new node `192.168.1.3:6379` via the `LPUSH` operation.
-We can imagine that the 1024 hash slots are assigned equally among the two inital nodes. In order to add the new (third) node what we have to do is to move incrementally 341 slots form the two old servers to the new one.
+We can imagine that the 1024 hash slots are assigned equally among the two initial nodes. In order to add the new (third) node what we have to do is to move incrementally 341 slots form the two old servers to the new one.
For now we can think that every hash slot is only stored in a single server, to generalize the idea later.
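The keyspace mapping the draft describes (SHA1 of the key, last 10 bits selecting one of 1024 hashing slots) can be sketched as follows. The function name is illustrative, and 1024 follows the draft's example value, which it says should be configurable:

```python
import hashlib

NUM_SLOTS = 1024  # the draft's example value; meant to be configurable

def hash_slot(key: bytes) -> int:
    # SHA1 the key and keep the last 10 bits of the digest,
    # yielding a slot number in the range 0..1023.
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest, "big") & (NUM_SLOTS - 1)
```

The resulting slot number then indexes the slot-to-nodes table (e.g. the `hashingslot:10 => 192.168.1.19:6379, 192.168.1.200:6379` example earlier in the document).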