flexible cacher for async functions with switchable backends. redis/memory


flexcache

Flexible cache for async function calls and event emitters. It is designed to prevent dirty caches rather than for raw speed. Different backends allow you to cover different use cases.



Redis

Best used for preventing long and slow operations on the filesystem. Can easily be shared across a cluster and is very performant. The Redis database's TTL support keeps memory usage down.

Memory (soon)

Caches are local only. They should only be used in a very narrow scope and be destroyed after every request. They are very fast, however.


npm install flexcache

What can be cached

You can cache only data that can be serialized into a BSON blob, which is more complete than JSON. Flexcache tries to prevent false positives and invalid cache state. Depending on the backend, caches can be shared across multiple machines.
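To make the serialization constraint concrete, here is a quick sketch: plain data survives a serialize/deserialize round trip, but functions (and other live objects) do not, so they cannot be cached. JSON is used here as a stand-in for BSON; this is an illustration, not flexcache code.

```javascript
// Plain data survives the round trip; the function is silently dropped,
// which is why only serializable data can be cached.
var original = { n: 42, list: [1, 2, 3], fn: function () { return 1; } };
var restored = JSON.parse(JSON.stringify(original));
// restored.n and restored.list survive; restored.fn is gone
```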

Cache Identifiers

Flexcache uses a two-level cache. The first level is called the group, the second level is the hash. By using an easily derived value as the group key, you can clear all caches that depend strongly on the state of your data.

For example, if you want to cache data computed from a file or directory, you can choose that file or directory as the cache group. When your data changes, simply call clear_group() with your identifier.

You can also invalidate a hash without touching other hashes.
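The two-level structure can be sketched in a few lines of plain JavaScript. This is an illustration of the group/hash idea, not flexcache's implementation; the class name and methods here are hypothetical.

```javascript
// Minimal sketch of a two-level cache (group -> hash -> value).
// Clearing a group drops every hash stored under it, while other
// groups are untouched.
var TwoLevelCache = function () {
  this.groups = new Map();
};
TwoLevelCache.prototype.set = function (group, hash, value) {
  if (!this.groups.has(group)) this.groups.set(group, new Map());
  this.groups.get(group).set(hash, value);
};
TwoLevelCache.prototype.get = function (group, hash) {
  var g = this.groups.get(group);
  return g ? g.get(hash) : undefined;
};
TwoLevelCache.prototype.clear_group = function (group) {
  this.groups.delete(group); // every hash under the group is gone
};
```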

Default behaviour:

group: safe_hasher_one hash: safe_hasher_all


Hashers play a very important part in flexcache. They may determine the group, but more importantly they determine the hash to use.

hasher_one: (x, ...) -> JSON.stringify(x)

hasher_all: (args...) -> JSON.stringify(args)

safe_hasher_one: (x) -> sha256(bson.serialize([x]))

safe_hasher_all: (args...) -> sha256(bson.serialize(args))

It is very important that you normalize the arguments somehow, so that the same arguments result in the same hash and you get a cache hit. Your hash function should also prevent collisions.

The keys used in the database are prefixed. The group key is prefixed with the Flexcache's group_prefix. The hash is prefixed with the cache name: by default this is the name of the wrapped function, but you have to make sure each name is used only once; if it is not, you need to provide one. Anonymous functions always need a name.


Each Flexcache instance uses a backend for storage. Many Flexcache instances can share a backend, but may have different options.

RedisBackend = require('flexcache/backend/redis').RedisBackend
Flexcache = require('flexcache').Flexcache

backend = new RedisBackend()
fc = new Flexcache(backend, { ttl: 400 }) // 400 second timeout

slow = function(a, b, callback) { /* do something slow */ callback(null, a * b); }

cached = fc.cache(slow)

cached(2, 3, function(err, rv1) { /* computed */ });
// next call with the same arguments will return the cached result
cached(2, 3, function(err, rv2) { /* served from cache */ });

// edit some data
// the cache is now stale for all cached results in cache group 2

// wipe everything. usually not a good idea :-)

Whatever arguments are passed to cached, they are used to compute the subkey and should therefore never hit a wrong cache entry.


The cache function can also generate EventEmitters. You need to specify emitter, in which case you will get the EventEmitter returned. If you specify a function, all parameters are passed to it. It is important that the EventEmitter constructed in the emitter function does no work at all. The events forwarded and reacted to are data and end.

The cache is replayed the same way the data is generated by the original function, but every chunk is sent one tick after another.

Advanced Usage

backend = new RedisBackend({ port: 1234 })
fc = new Flexcache(backend, {
    group: function() { return arguments[1] },
    hash: function() { return "X" + arguments[0] },
    ttl: 300, // 5 minutes
    group_name: "grp1",
    max_object_size: 1 * 1024 // 1 kbyte
})

// use a special key function for this function
rcached = fc.cache(slow, {
    group: function() { return self.somevalue },
    name: "somethingunique",
    emitter: function() { return new MyEventEmitter() }
})

rcached.clear(fc.get_group("my", "arguments", 2, 4, {1:3}))

Flexcache Options

  • group function to generate the group key, or one of the strings 'all', 'one', 'safe_one', 'safe_all'. default: hash_one
  • hash same as group. default: safe_all
  • ttl timeout in seconds. -1 = no timeout, -2 = no saving
  • group_prefix prefix added before the group hash
  • debug integer debug level
  • debug_serializer try to decode data right after serializing it and print an error in case of failure
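The ttl semantics above (-1 = no timeout, -2 = no saving) can be illustrated with a small standalone sketch; the class and method names here are hypothetical, not flexcache's API.

```javascript
// Sketch of ttl semantics: -1 means entries never expire,
// -2 means nothing is stored at all. Time is passed in explicitly
// (milliseconds) so the behaviour is easy to follow.
function TtlCache(ttl) {
  this.ttl = ttl;        // seconds; -1 = no timeout, -2 = no saving
  this.store = new Map();
}
TtlCache.prototype.set = function (key, value, now) {
  if (this.ttl === -2) return;  // caching disabled
  var expires = this.ttl === -1 ? Infinity : now + this.ttl * 1000;
  this.store.set(key, { value: value, expires: expires });
};
TtlCache.prototype.get = function (key, now) {
  var entry = this.store.get(key);
  if (!entry || entry.expires <= now) return undefined; // miss or expired
  return entry.value;
};
```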


Returns the group key as it is computed when entries are saved.


Clears all caches associated with one group key.

typical use:


Usually you are better off using the clear(...) function of the cached function, as it uses the correct hasher when the cached function was created with a different one.

cache(fnc, [options])

Creates a cache wrapper for an async function. Options override the Flexcache options. The returned function has special members that help you deal with cache consistency:

cache Options

  • group function to generate the group key, or one of 'all', 'one', 'safe_one', 'safe_all'. default: safe_all
  • hash same as group. default: one
  • name identifier for the hash
  • multi if set, don't complain about multiple caches sharing the same name
  • emitter if set, the function will return an event emitter and not run the callback. Can be true (EventEmitter) or a custom EventEmitter class.


Clears a group. Arguments are the same as those passed to the cached function itself, or at least enough for the group function to determine the group to clear. The default is the first argument.


Clears a specific subkey under key. If key and subkey are strings, they are used directly. You can also pass the same arguments as the normal function and let the key and subkey be calculated by the key/hash functions.




  • Very fast, light memory usage, preferred backend
  • TTL only works with Redis 2.1.3+


  • host Redis server hostname
  • port Redis server port number
  • db Database index to use
  • pass Password for Redis authentication
  • ... Remaining options passed to the redis createClient() method.



  • No TTL support yet. This backend should be used for one request only or cleared periodically.
  • More for tests than real usage. Consider yourself warned ;-)