erlang-redis is an Erlang binding for Redis.
To get started, run the tests as follows:
- Start a local redis server
- Run tests with a test DB (0, 1, etc.) - the tests write to the database, so
use a non-production database!
$ make TESTDB=5 test
Project goals:

- Bug free
- Complete API implementation
- Easy to maintain
- Performant
- Idiomatic Erlang
Modules:

- redis_client provides an Erlang interface to the Redis protocol spec
- redis provides a complete user facing Redis API
- redis_cmd responsible for sending outbound commands to Redis
- redis_reply responsible for handling replies from Redis
The API assumes success where reasonable, so most functions return results directly rather than tagging them in 'ok' tuples. Errors are communicated as exceptions rather than return values.
Several Redis commands accept variable length arguments (e.g. SADD, DEL, etc.). This is not idiomatic (or even typical) in Erlang. In these cases, we use additional "m" (multiple) variants to accommodate them (e.g. smadd, mdel, etc.).
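For illustration (a sketch only - the exact arities and return values below are assumptions, apart from connect returning {ok, C} and smadd being the "m" variant named above), usage might look like:

%% Sketch: direct results on success, exceptions on errors.
{ok, C} = redis:connect(),
ok = redis:set(C, "color", "blue"),
{ok, <<"blue">>} = redis:get(C, "color"),
undefined = redis:get(C, "missing"),
3 = redis:smadd(C, "colors", ["red", "green", "blue"])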
Complete Redis 2.4 API implementation.
Library is not used in production anywhere, but is stable in test environments.
Commands issued inside a MULTI block get a "QUEUED" reply from Redis. Rather than return "QUEUED", {ok, "QUEUED"}, etc., we should bake that state into the API, either by returning the atom 'queued' or by generating an error.
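As a sketch of the 'queued' variant (multi/1 and exec/1 are assumed names here, not an existing API):

%% Hypothetical: commands inside a MULTI block return the atom 'queued';
%% EXEC returns the collected replies.
ok = redis:multi(C),
queued = redis:incr(C, "counter"),
queued = redis:incr(C, "counter"),
[1, 2] = redis:exec(C)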
Redis docs use nil for missing values. We use undefined.
I’m not sure undefined is the right thing here – nil explicitly says “this is the value that Redis calls nil”.
Though arguably it’s simpler just to use the Erlang standard undefined.
Note that Erlang also uses 'error' in these cases (e.g. dict:find/2) - this might be a better way to clarify the difference between "there's a value here, but it's not specified - it's nil, undefined, etc." and "the key you requested doesn't exist".
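For comparison, dict:find/2 distinguishes the two cases like this:

%% Standard library convention: {ok, Value} when the key exists, 'error' when it doesn't.
D = dict:store(color, <<"blue">>, dict:new()),
{ok, <<"blue">>} = dict:find(color, D),
error = dict:find(missing, D)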
A value from a `get' operation is {ok, Value} whereas a missing value is undefined. Values in list results, however, are just Value, with undefined also being used for missing values.
It's possible to distinguish missing values in a list using a simple is_binary(V) vs undefined comparison, e.g.:

case get_values() of
    [H|_] when is_binary(H) -> "I have a value!";
    [undefined|_] -> "I don't have a value :("
end
but in a get operation:
case get_value() of
    {ok, Val} -> "Have!";
    undefined -> "Don't have :("
end
I think this is okay: an 'undefined' in a list is a value - it's just undefined (arguably this should be 'nil' - see above).
In the get case, undefined means "value doesn't exist". Idiomatic Erlang might even use 'error' for this (e.g. dict:find/2).
Still, I’m not sure using ‘undefined’ everywhere for all cases is the right thing.
Background: http://redis.io/topics/pubsub
It's unnerving that the pubsub facility of Redis overlays the more typically used passive request/response API. You can see the paradigms conflicting in Redis's refusal to perform any non-pubsub operation once a client has subscribed to messages.
Even the response to subscribe requests is different:
    The reply of the SUBSCRIBE and UNSUBSCRIBE operations are sent in the form of messages, so that the client can just read a coherent stream of messages where the first element indicates the type of message.
The current API implementation mirrors the behavior that Redis expects, but the conflicting world views bother me. I’m wondering if a separate subscriber facility might be a better design for this API.
This would be an offshoot client (or redis_client operating in a different mode) that you’d use like this:
{ok, S} = redis_subscribe:start_link(ConnectOptions, SubOptions),
redis_subscribe:subscribe(S, "foo"),
redis_subscribe:psubscribe(S, "bar_*")
My guess is that the subscriber use case would be fine with this - it's unlikely that a client would need to switch contexts over time. E.g. rather than receive messages, unsubscribe, do some Redis work, and then subscribe again, a subscriber would use another client to do the work while still receiving messages.
Through the `redis’ API module, it would look like this:
HandleMsg = fun(Msg) -> io:format("Got a msg: ~p~n", [Msg]) end,
{ok, S} = redis:connect_subscriber(ConnectOptions, HandleMsg),
redis:subscribe(S, "foo"),
redis:psubscribe(S, "bar_*")
The `subscribe’ and `unsubscribe’ functions would work the same, but they would refuse to work on a redis_client process, thereby keeping subscriber functions separate from the operations.
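The shape of the messages handed to HandleMsg is an open question; purely as an illustration (the tuple formats below are assumptions, modeled on Redis's message/pmessage replies), a handler might pattern match like this:

%% Assumed message shapes - not part of any current API.
HandleMsg = fun({message, Channel, Payload}) ->
                    io:format("~s: ~s~n", [Channel, Payload]);
               ({pmessage, Pattern, Channel, Payload}) ->
                    io:format("~s (matched ~s): ~s~n", [Channel, Pattern, Payload])
            end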
Background: http://redis.io/topics/pipelining
A pipelining API might look something like this:
{ok, C} = redis:connect(),
Ops = [{incr, ["foo"]},
       {incr, ["bar"]},
       {incr, ["baz"]}],
Results = redis:pipeline(C, Ops)
This would be quite simple to implement – the results would be raw responses from the server (i.e. each operation returns a value - errors do not generate exceptions).
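Under that model the caller interprets each raw reply itself; e.g. (the {ok, _} / {error, _} shapes below are a guess at how raw replies would be represented):

%% Hypothetical handling of raw pipeline replies, including error replies.
lists:foreach(
  fun({ok, N}) when is_integer(N) -> io:format("new value: ~b~n", [N]);
     ({error, Reason}) -> io:format("server error: ~s~n", [Reason])
  end, Results)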
A sample "listfever" application is in progress, which will demonstrate how Redis can be used to build a full-featured application. This app is an analog to Retwis - a Twitter clone - but clones Amazon's Listmania app instead.
Most applications using Redis will want to maintain long running connections to a Redis server. In the case where it makes sense to use multiple long running connections, the app needs a pool.
Pool characteristics:
- Actively or lazily create a maximum of N connections
- Use lock semantics to provide access to a connection from the pool (see the sketch after this list)
- Pool monitors each connection, restarting them on failure
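As a rough sketch of those lock semantics from the caller's side (redis_pool and with_connection/2 are hypothetical names):

%% Hypothetical pool API: check a connection out, run work, check it back in.
{ok, Pool} = redis_pool:start_link([{size, 10} | ConnectOptions]),
Hits = redis_pool:with_connection(Pool, fun(C) -> redis:incr(C, "hits") end)
%% with_connection/2 would block until a connection is free and release it
%% (or replace it, if the connection died) when the fun returns.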
I’m inclined to only support active connections, at least initially:
- Deterministic behavior (at least we have a shot at it)
- Lazy connections introduce higher latency on the initial operation
One argument against active connections is a potential stampede on network or server recovery. But a stampede is also in play for most applications under "lazy" as well - any amount of concurrent traffic will implicitly trigger the stampede.
I think we fix this by (optionally) introducing a simple rate limiter when establishing new connections on startup.
Each pool provides homogeneous connections - i.e. they're all connections to the same Redis server using the same credentials.
I think this allows us to simplify the connection strategy – the pool can be filled by connecting sequentially to the server, blocking any subsequent connection attempts until the current attempt succeeds.
This rule applies when connections exit - reconnect attempts should occur in series, filling in the dropped connections as needed.
The tests in redis_tests are a good start, but there are ~ 10,321 functions to test!