
Merge branch 'unstable'

2 parents b8513c9 + 6b52ad8 commit bfe85f7ca97259256e8089349e1a462b6c7dbd00 @antirez antirez committed May 10, 2011
Showing with 6,856 additions and 1,428 deletions.
  1. +156 −0 CLUSTER
  2. +35 −10 TODO
  3. +7 −4 deps/hiredis/Makefile
  4. +2 −2 deps/hiredis/README.md
  5. +279 −55 deps/hiredis/async.c
  6. +24 −10 deps/hiredis/async.h
  7. +338 −0 deps/hiredis/dict.c
  8. +126 −0 deps/hiredis/dict.h
  9. +2 −1 deps/hiredis/example.c
  10. +4 −5 deps/hiredis/fmacros.h
  11. +74 −49 deps/hiredis/hiredis.c
  12. +16 −2 deps/hiredis/hiredis.h
  13. +100 −18 deps/hiredis/net.c
  14. +5 −2 deps/hiredis/net.h
  15. +130 −15 deps/hiredis/sds.c
  16. +12 −1 deps/hiredis/sds.h
  17. +68 −15 deps/hiredis/test.c
  18. +6 −0 redis.conf
  19. +49 −16 src/Makefile
  20. +10 −0 src/anet.c
  21. +1 −0 src/anet.h
  22. +54 −18 src/aof.c
  23. +18 −0 src/asciilogo.h
  24. +1,766 −0 src/cluster.c
  25. +27 −0 src/config.c
  26. +14 −1 src/config.h
  27. +74 −0 src/crc16.c
  28. +125 −1 src/db.c
  29. +49 −14 src/debug.c
  30. +15 −5 src/dict.c
  31. +6 −2 src/dict.h
  32. +32 −56 src/dscache.c
  33. +147 −46 src/networking.c
  34. +76 −7 src/object.c
  35. +50 −19 src/rdb.c
  36. +19 −5 src/redis-cli.c
  37. +258 −0 src/redis-trib.rb
  38. +119 −30 src/redis.c
  39. +207 −43 src/redis.h
  40. +22 −21 src/sds.c
  41. +11 −0 src/sds.h
  42. +3 −0 src/sort.c
  43. +14 −0 src/syncio.c
  44. +11 −6 src/t_hash.c
  45. +25 −20 src/t_list.c
  46. +18 −14 src/t_set.c
  47. +1,325 −360 src/t_zset.c
  48. +231 −34 src/util.c
  49. +12 −0 src/util.h
  50. +1 −1 src/version.h
  51. +44 −53 src/ziplist.c
  52. +47 −25 tests/integration/aof.tcl
  53. +1 −1 tests/support/server.tcl
  54. +1 −0 tests/test_helper.tcl
  55. +9 −0 tests/unit/type/hash.tcl
  56. +16 −0 tests/unit/type/list.tcl
  57. +15 −0 tests/unit/type/set.tcl
  58. +550 −441 tests/unit/type/zset.tcl
156 CLUSTER
@@ -0,0 +1,156 @@
+CLUSTER README
+==============
+
+Redis Cluster is currently a work in progress, however there are a few things
+that you can already do with it to see how it works.
+
+The following guide shows you how to set up a three-node cluster and issue
+some basic commands against it.
+
+... WORK IN PROGRESS ...
+
+1) Show MIGRATE
+2) Show CLUSTER MEET
+3) Show link status detection with CLUSTER NODES
+4) Show how to add slots with CLUSTER ADDSLOTS
+5) Show redirection
+6) Show cluster down
+
+... WORK IN PROGRESS ...
+
+TODO
+====
+
+*** WARNING: all the following probably has some meaning only for
+*** me (antirez), most of this info is not updated, so please consider this
+*** file as a private TODO list / brainstorming.
+
+- disconnect FAIL clients after some pong idle time.
+
+---------------------------------
+
+* Majority rule: the cluster can continue when all the hash slots are covered AND the majority of masters is reachable.
+* Shutdown on request rule: when a node sees many connections closed, or even a timeout longer than usual on almost all the other nodes, it will usually wait for the normal timeout before changing the state, unless it receives a query from a client: in such a case it will put itself into error status.
+
+--------------------------------
+
+* When a node is asked for a key that it is not responsible for, it will reply:
+
+ -ASK 1.2.3.4:6379 (in case we want the client to ask just one time)
+ -MOVED <slotid> 1.2.3.4:6379 (in case the hash slot is permanently moved)
+
+So with -ASK a client should just retry the query against this new node, a single time.
+
+With -MOVED the client should update its hash slots table to reflect the fact that now the specified node is the one to contact for the specified hash slot.
+
+* Nodes communicate using a binary protocol.
+
+* Node failure detection.
+
+ 1) Every node contains information about all the other nodes:
+ - If this node is believed to work ok or not
+ - The hash slots for which this node is responsible
+ - If the node is a master or a slave
+      - If it is a slave, of which node it is a slave
+      - If it is a master, the list of its slave nodes
+      - The slaves are ordered by their "<ip>:<port>" string, from lower to
+        higher, lexicographically. When a master is down, the cluster will
+        try to elect the first slave in the list.
+
+ 2) Every node also stores the unix time at which every other node was
+    last reported to work properly (that is, it replied to a ping or any
+    other protocol request correctly). For every node we also store the timestamp
+ at which we sent the latest ping, so we can easily compute the current
+ lag.
+
+ 3) From time to time a node pings a random node, selected among the nodes
+    with the least recent "alive" timestamp: three random nodes are sampled
+    and the one with the lowest alive timestamp is pinged.
+
+ 4) The ping packet also contains information about a few random nodes'
+    alive timestamps, so that the receiver of the ping can update its
+    alive table if a received alive timestamp is more recent than the
+    one present in its local table.
+
+    In the ping packet every node's "gossip" information is something like
+    this:
+
+ <ip>:<port>:<status>:<pingsent_timestamp>:<pongreceived_timestamp>
+
+ status is OK, POSSIBLE_FAILURE, FAILURE.
+
+ 5) The node replies to a ping with a pong packet, which also contains a
+    random selection of node timestamps.
+
+A given node thinks another node may be in a failure state once a ping has
+been pending for more than 30 seconds (configurable).
+
+When a possible failure is detected the node performs the following actions:
+
+ 1) Is the average lag toward all the other nodes big? For instance, bigger
+    than 30 seconds / 2 = 15 seconds? Then probably *we* are disconnected.
+    In such a case we don't trust our lag data, and reset all the
+    timestamps of sent pings to zero. This way, when we reconnect, there
+    is no risk that we'll claim many nodes are down, taking inappropriate
+    actions.
+
+ 2) Messages from nodes marked as failed are *always* ignored by the other
+ nodes. A new node needs to be "introduced" by a good online node.
+
+ 3) If we are well connected (that is, condition "1" is not true) and a
+    node timeout is > 30 seconds, we mark the node as POSSIBLE_FAILURE
+    (a flag in the cluster node structure). Every time we send a ping
+    to another node we inform it that we detected this condition, as
+    already stated.
+
+ 4) Once a node receives a POSSIBLE_FAILURE status for a node that is
+ already marked as POSSIBLE_FAILURE locally, it sends a message
+ to all the other nodes of type NODE_FAILURE_DETECTED, communicating the
+ ip/port of the specified node.
+
+ All the nodes need to update the status of this node setting it into
+ FAILURE.
+
+ 5) If the node in FAILURE state is a master node, a Slave Election
+    needs to be performed.
+
+SLAVE ELECTION
+
+ 1) The slave election is performed by the first slave (with slaves ordered
+ lexicographically). Actually it is the first functioning slave, so if
+ the first slave is marked as failing the next slave will perform the
+ election and so forth. Such a slave is called the "Successor".
+
+ 2) The Successor starts by checking that all the nodes in the cluster
+    have already marked the master as being in FAILURE state. If at least
+    one node does not agree, no action is performed.
+
+ 3) If all the nodes agree that the master is failing, the Successor does
+ the following:
+
+    a) It will send a SUCCESSION message to all the other nodes, which will
+       update their hash slot tables accordingly. It will make sure that all
+       the nodes are updated, and if some node did not receive the message
+       it will keep trying.
+    b) Once all nodes not marked as FAILURE have accepted the SUCCESSION
+       message, it will update its own table and will start acting as a
+       master, accepting write queries.
+    c) Every node receiving the SUCCESSION message, if not already informed
+       of the change, will broadcast the same message to three other random
+       nodes. No action is performed if the specified host was already
+       marked as the master node.
+ d) A node that was a slave of the original master that failed will
+ switch master to the new one once the SUCCESSION message is received.
+
+RANDOM
+
+ 1) When selecting a slave, the system will try to pick one with an IP different from that of the master and of the other slaves, if possible.
+
+ 2) The PING packet also contains information about the local configuration checksum. This is the SHA1 of the current configuration, without the bits that normally change from one node to another (like latest ping reply, failure status of nodes, and so forth). From time to time the local config SHA1 is checked against the list of the other nodes, and if there is a mismatch between our configuration and the most common one that lasts for more than N seconds, the most common configuration is asked and retrieved from another node. The event is logged.
+
+ 3) Every time a node updates its internal cluster configuration, it dumps such a config in the cluster.conf file. On startup the configuration is reloaded.
+ Nodes can share the cluster configuration when needed (for instance if SHA1 does not match) using this exact same format.
+
+CLIENTS
+
+ - Clients may be configured to use slaves to perform reads, when read-after-write consistency is not required.
45 TODO
@@ -9,18 +9,24 @@ WARNING: are you a possible Redis contributor?
us, and *how* exactly this can be implemented to have good chances
of a merge. Otherwise it is probably wasted work! Thank you
-DISKSTORE TODO
-==============
-* Check that 00/00 and ff/ff exist at startup, otherwise exit with error.
-* Implement sync flush option, where data is written synchronously on disk when a command is executed.
-* Implement MULTI/EXEC as transaction abstract API to diskstore.c, with transaction_start, transaction_end, and a journal to recover.
-* Stop BGSAVE thread on shutdown and any other condition where the child is killed during normal bgsave.
-* Fix RANDOMKEY to really do something interesting
-* Fix DBSIZE to really do something interesting
-* Add a DEBUG command to check if an entry is or not in memory currently
+API CHANGES
+===========
-* dscache.c near 236, kobj = createStringObject... we could use static obj.
+* Turn commands into variadic versions when it makes sense, that is, when
+ the variable number of arguments represent values, and there is no conflict
+ with the return value of the command.
+
+CLUSTER
+=======
+
+* Implement rehashing and cluster check in redis-trib.
+* Reimplement MIGRATE / RESTORE to use just in-memory buffers (no disk at
+  all). This will require touching a lot of the RDB stuff around, but we
+  may end up with faster persistence for RDB.
+* Implement the slave nodes semantics and election.
+* Allow redis-trib to create a cluster-wide snapshot (using SYNC).
+* Allow redis-trib to restore a cluster-wide snapshot (implement UPLOAD?).
APPEND ONLY FILE
================
@@ -35,6 +41,8 @@ OPTIMIZATIONS
* SORT: Don't copy the list into a vector when BY argument is constant.
* Write the hash table size of every db in the dump, so that Redis can resize the hash table just one time when loading a big DB.
* Read-only mode for slaves.
+* Redis big lists as linked lists of small ziplists?
+  Possibly a simple heuristic that joins nearby nodes when some node gets smaller than low_level, and splits a node into two if it gets bigger than high_level.
REPORTING
=========
@@ -57,4 +65,21 @@ KNOWN BUGS
What happens if between 1 and 2 for some reason (system under huge load
or alike) too much time passes? We should prevent expires while the
AOF is loading.
+* #519: Slave may have expired keys that were never read in the master (so a DEL
+  is not sent in the replication channel) but have already been expired for
+  a long time. Maybe after a given delay that is undoubtedly greater than
+  the replication link latency we should expire such keys on the slave on
+  access?
+DISKSTORE TODO
+==============
+
+* Fix FLUSHALL/FLUSHDB: the queue of pending reads/writes should be handled.
+* Check that 00/00 and ff/ff exist at startup, otherwise exit with error.
+* Implement sync flush option, where data is written synchronously on disk when a command is executed.
+* Implement MULTI/EXEC as transaction abstract API to diskstore.c, with transaction_start, transaction_end, and a journal to recover.
+* Stop BGSAVE thread on shutdown and any other condition where the child is killed during normal bgsave.
+* Fix RANDOMKEY to really do something interesting
+* Fix DBSIZE to really do something interesting
+* Add a DEBUG command to check if an entry is or not in memory currently
+* dscache.c near 236, kobj = createStringObject... we could use static obj.
@@ -15,8 +15,9 @@ ifeq ($(uname_S),SunOS)
DYLIB_MAKE_CMD?=$(CC) -G -o ${DYLIBNAME} ${OBJ}
STLIBNAME?=libhiredis.a
STLIB_MAKE_CMD?=ar rcs ${STLIBNAME} ${OBJ}
-else ifeq ($(uname_S),Darwin)
- CFLAGS?=-std=c99 -pedantic $(OPTIMIZATION) -fPIC -Wall -W -Wwrite-strings $(ARCH) $(PROF)
+else
+ifeq ($(uname_S),Darwin)
+ CFLAGS?=-std=c99 -pedantic $(OPTIMIZATION) -fPIC -Wall -W -Wstrict-prototypes -Wwrite-strings $(ARCH) $(PROF)
CCLINK?=-lm -pthread
LDFLAGS?=-L. -Wl,-rpath,.
OBJARCH?=-arch i386 -arch x86_64
@@ -25,14 +26,16 @@ else ifeq ($(uname_S),Darwin)
STLIBNAME?=libhiredis.a
STLIB_MAKE_CMD?=libtool -static -o ${STLIBNAME} - ${OBJ}
else
- CFLAGS?=-std=c99 -pedantic $(OPTIMIZATION) -fPIC -Wall -W -Wwrite-strings $(ARCH) $(PROF)
+ CFLAGS?=-std=c99 -pedantic $(OPTIMIZATION) -fPIC -Wall -W -Wstrict-prototypes -Wwrite-strings $(ARCH) $(PROF)
CCLINK?=-lm -pthread
LDFLAGS?=-L. -Wl,-rpath,.
DYLIBNAME?=libhiredis.so
DYLIB_MAKE_CMD?=gcc -shared -Wl,-soname,${DYLIBNAME} -o ${DYLIBNAME} ${OBJ}
STLIBNAME?=libhiredis.a
STLIB_MAKE_CMD?=ar rcs ${STLIBNAME} ${OBJ}
endif
+endif
+
CCOPT= $(CFLAGS) $(CCLINK)
DEBUG?= -g -ggdb
@@ -45,7 +48,7 @@ all: ${DYLIBNAME} ${BINS}
# Deps (use make dep to generate this)
net.o: net.c fmacros.h net.h
-async.o: async.c async.h hiredis.h sds.h util.h
+async.o: async.c async.h hiredis.h sds.h util.h dict.c dict.h
example.o: example.c hiredis.h
hiredis.o: hiredis.c hiredis.h net.h sds.h util.h
sds.o: sds.c sds.h
@@ -108,7 +108,7 @@ was received:
* **`REDIS_REPLY_ARRAY`**:
* A multi bulk reply. The number of elements in the multi bulk reply is stored in
`reply->elements`. Every element in the multi bulk reply is a `redisReply` object as well
- and can be accessed via `reply->elements[..index..]`.
+ and can be accessed via `reply->element[..index..]`.
Redis may reply with nested arrays but this is fully supported.
Replies should be freed using the `freeReplyObject()` function.
@@ -171,7 +171,7 @@ the latter means an error occurred while reading a reply. Just as with the other
the `err` field in the context can be used to find out what the cause of this error is.
The following examples shows a simple pipeline (resulting in only a single call to `write(2)` and
-a single call to `write(2)`):
+a single call to `read(2)`):
redisReply *reply;
redisAppendCommand(context,"SET foo bar");
