Request/response framework for Heroku worker procs backed by Redis.

# Redisent

> *ret·i·cent* — not revealing one's thoughts or feelings readily.

Redisent is for communicating between backend services. It is intended as a simpler alternative to web servers and HTTP, with the extra requirement of a Redis server. There are no RESTful resources or request objects, no headers or cookies, no DNS or SSL to set up, and no need for an open HTTP port. It is scalable insofar as Redis is scalable and stable insofar as Redis is stable (so, very). All it requires from you is a Redis URL. It's pretty handy.

## How it Works

Redisent allows you to instantiate two types: Client and Service. Clients send messages on a topic and services reply to them. Every send returns a reply and an error; the client blocks until it receives one or the other, or until the specified timeout is reached. Services register message handlers by topic and wait for new messages. When a service receives a message, it dispatches it to the appropriate handler and processes it in its own goroutine. Message bodies and replies are plain byte slices and can represent any kind of data; it's up to you to implement the logic that parses them.
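Conceptually, a service's receive loop dispatches each message to the handler registered for its topic and runs it in a fresh goroutine. The sketch below illustrates that shape only; the type and method names are illustrative, not Redisent's actual internals:

```go
package main

import (
	"fmt"
	"sync"
)

// Handler processes a message body and returns a reply or an error.
type Handler func(body []byte) ([]byte, error)

type service struct {
	mu       sync.RWMutex
	handlers map[string]Handler
}

// Handle registers a handler for a topic.
func (s *service) Handle(topic string, h Handler) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.handlers[topic] = h
}

// dispatch runs the topic's handler in its own goroutine so a slow
// message never blocks the receive loop; the reply (or error text)
// is delivered on the replies channel.
func (s *service) dispatch(topic string, body []byte, replies chan<- []byte) {
	s.mu.RLock()
	h, ok := s.handlers[topic]
	s.mu.RUnlock()
	if !ok {
		replies <- []byte("unknown topic")
		return
	}
	go func() {
		reply, err := h(body)
		if err != nil {
			replies <- []byte("error: " + err.Error())
			return
		}
		replies <- reply
	}()
}

func main() {
	s := &service{handlers: map[string]Handler{}}
	s.Handle("print", func(b []byte) ([]byte, error) {
		return []byte(fmt.Sprintf("got %s", b)), nil
	})
	replies := make(chan []byte)
	s.dispatch("print", []byte("black lab"), replies)
	fmt.Printf("%s\n", <-replies) // got black lab
}
```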

## Redis Internals

Redisent implements simple FIFO queues for messages and replies. Each message is assigned a unique ID, a topic, and a body. The ID is auto-generated; the topic and body are specified by the client on send. A Redisent client will LPUSH a serialized version of the message onto a list keyed by `redisent:messages:<service_name>`, then BRPOP from a new list keyed by `redisent:replies:<message_id>`. The corresponding service will endlessly BRPOP messages off of `redisent:messages:<service_name>` and push each reply onto `redisent:replies:<message_id>`. The reply key always expires after 1 minute to avoid generating orphaned keys if the reply is ignored (this may be changed in the future to simply respect the timeout specified by the client).
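The key layout described above can be sketched with two small helpers (these helper names are illustrative, not part of Redisent's API):

```go
package main

import "fmt"

// messagesKey returns the list a client LPUSHes messages onto and the
// service BRPOPs them from. (Illustrative helper, not Redisent's API.)
func messagesKey(serviceName string) string {
	return fmt.Sprintf("redisent:messages:%s", serviceName)
}

// repliesKey returns the per-message list the service pushes its reply
// onto and the client blocks on with BRPOP until it expires.
func repliesKey(messageID string) string {
	return fmt.Sprintf("redisent:replies:%s", messageID)
}

func main() {
	fmt.Println(messagesKey("puppies")) // redisent:messages:puppies
	fmt.Println(repliesKey("abc123"))   // redisent:replies:abc123
}
```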

**Note:** It is highly recommended, although not necessary, that you use a dedicated Redis server for Redisent to avoid impacting your app's usual Redis performance.

## Reliability

If a service falls over while processing a message, the message will be lost and the client will time out. If Redis itself fails, both the client and service will receive errors, and any messages in flight will be lost. If a service is brought offline while clients continue to send messages to it, those messages will queue up until the service comes back online to process them; sends may begin to time out depending on how long it takes to bring your service back up.

## Why Choose Redis Instead of a Proper Message System Like RabbitMQ?

The primary goal of this project is to be able to build small services quickly and painlessly. Many applications these days already use Redis, and building on it keeps the architecture stack to a minimum. Redis is known for being stable and perfectly capable as a no-frills message system, so it's not completely crazy to use it as one. Not to mention it's also available as a standard service on many popular platforms such as Heroku and AWS. The result is a fairly small package to import that provides a clean and simple API for sending arbitrary data back and forth between processes.

## Usage

Service code:

```go
package main

import (
	"fmt"

	"github.com/timehop/redisent"
)

func main() {
	// Create a new service named "puppies"
	puppies := redisent.NewService("puppies")

	// Define the topic "print" and a handler to process puppy messages
	puppies.Handle("print", func(puppy []byte) ([]byte, error) {
		fmt.Printf("Look! A %s!\n", puppy)
		reply := []byte("A puppy was printed!")
		return reply, nil
	})

	// Start the service and wait for messages
	puppies.Start()
}
```

Client code:

```go
package main

import (
	"fmt"
	"time"

	"github.com/timehop/redisent"
)

func main() {
	// Create a client by passing in the service name
	puppies := redisent.NewClient("puppies")

	// Specify the topic, body, and timeout.
	// A timeout of 0 blocks forever until a reply or error is received.
	reply, err := puppies.Send("print", []byte("black lab"), 500*time.Millisecond)
	if err != nil {
		panic(err)
	}

	// Prints "A puppy was printed!"
	fmt.Printf("%s\n", reply)
}
```

## Benefits

- **Heroku workers as services** — At Timehop we use Redisent to reliably build microservices to support any given app. One of the limitations of Heroku worker procs is that they don't expose any HTTP ports, which means that any service you wish to build must be an entirely new web app with its own DNS, SSL layer, configuration, and so on. We began to wish for a quick way to spin one up without all the cognitive overhead and pain of setup. Hence Redisent was born.
- **Security** — Not needing to build a security layer is really, really nice. No need for special API tokens or SSL at all. Redisent simply piggybacks on the security inherent to Redis (which can also be run with no authentication at all, so you should still be careful).
- **Omnidirectional communication** — There is no restriction on who can be a client and who can be a service. Your web app can act as a service and "listen" to updates from backend services just as easily as it can send messages to other services.
- **Perfect request routing** — Web servers serve requests, and those requests must be routed; done poorly, routing can produce uneven response times at scale. Redisent's request routing is as close to perfect as possible: each message is processed by the next service that becomes available. And since each message is processed and replied to in a goroutine, a single service can handle as many requests as memory allows. The bottleneck here is, of course, Redis. Redis itself is very fast, but single-threaded; in practice, the variability in send-reply times we have seen has been on the order of tens of milliseconds.