Hüseyin Tuğrul BÜYÜKIŞIK edited this page Sep 29, 2021 · 9 revisions

Welcome to the LruJS wiki!

The cache in this project is a passive, asynchronous, read-write cache. It loads values lazily, only when they are needed by a get or getMultiple call. Eviction follows LRU (an approximation, actually: the "second-chance", two-handed CLOCK variant). When the cache is no longer needed, the last unwritten data should be persisted to the backing store by calling the flush(callback) method.
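To see why flush(callback) matters, here is a minimal model of write-behind caching (an illustration only, not LruJS internals; `backingStore`, `cacheRam`, `cacheSet`, and `flush` are our own names): writes land in RAM first and are marked dirty, and flush pushes the remaining dirty entries to the backing store before firing the callback.

```javascript
// Minimal write-behind model (assumption: not LruJS's actual code).
const backingStore = new Map();   // stands in for Redis / disk
const cacheRam = new Map();       // key -> { value, dirty }

function cacheSet(key, value) {
  // Write goes to RAM only; the entry is marked dirty (unwritten).
  cacheRam.set(key, { value: value, dirty: true });
}

function flush(callback) {
  // Persist every dirty entry, then signal completion.
  for (const [key, entry] of cacheRam) {
    if (entry.dirty) {
      backingStore.set(key, entry.value);
      entry.dirty = false;
    }
  }
  callback();
}
```

Until flush (or an eviction) runs, the backing store does not see the newest values — which is why the last bits of data must be flushed explicitly before shutdown.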

The "load data from backing store" algorithm is supplied by the user when the cache is declared, like this:

let cache = new Lru(100, async function(key, callback){
        // cache-miss: retrieve data from the backing store
        redis.get(key, function(err, data){ callback(data); });

}, 100000000 /* cache element lifetime in milliseconds */, async function(key, value, callback){
        // cache-write-miss: save data to the backing store
        redis.set(key, value, function(err){ callback(); });
});
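For experimenting without a Redis server, the two miss handlers can be backed by a plain in-memory Map. This is a hypothetical stand-in (the `store`, `readMiss`, and `writeMiss` names are ours), but the callback contract is the same as above:

```javascript
// Hypothetical in-memory backing store standing in for Redis.
const store = new Map();

// cache-read-miss handler: fetch from the backing store,
// hand the value to the callback when done.
function readMiss(key, callback) {
  callback(store.get(key));
}

// cache-write-miss handler: persist to the backing store,
// then call the callback with no arguments.
function writeMiss(key, value, callback) {
  store.set(key, value);
  callback();
}

// Wiring it into the cache would look like (assuming LruJS is loaded):
// let cache = new Lru(100, readMiss, 100000000, writeMiss);
```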

With these two user-supplied cache-miss functions, the cache can drive a backing store efficiently while keeping the most frequently accessed items in RAM. The cache-read-miss function's callback must be given a value; the cache-write-miss function's callback takes no parameter. Both callbacks must be called only after the backing-store operation completes.

Getting values is as easy as this:

cache.get(5, function(result){ console.log(result); });
cache.getMultiple(function(results){ console.log(results); }, 5, 6, 7, 8, 9, 10);

The result(s) come from the backing store the first time a key is requested. Any later get/getMultiple call for the same key(s) fetches RAM-backed data, which is much faster. A key can be an integer or a string.

In both get and getMultiple, the inner operations are handled asynchronously, which hides the latencies of all in-flight cache misses. Since the weak spot of caching algorithms is the cache-miss path, this project aims to optimize that part while keeping cache hits as scalable as possible via the (second-chance) CLOCK algorithm.
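The second-chance (CLOCK) approximation of LRU mentioned above can be sketched as follows. This is a simplified single-hand illustration, not LruJS's actual implementation: each slot carries a reference bit that a hit sets, and the clock hand clears (rather than evicts) referenced slots once before evicting an unreferenced one.

```javascript
// Hedged sketch of second-chance (CLOCK) eviction, not LruJS's real code.
class ClockCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.keys = new Array(capacity).fill(null); // slot -> key
    this.values = new Map();                    // key -> value
    this.referenced = new Map();                // key -> reference bit
    this.hand = 0;                              // clock hand position
  }

  get(key) {
    if (!this.values.has(key)) return undefined; // cache miss
    this.referenced.set(key, true);              // hit: grant a second chance
    return this.values.get(key);
  }

  set(key, value) {
    if (this.values.has(key)) {                  // update in place
      this.values.set(key, value);
      this.referenced.set(key, true);
      return;
    }
    // Advance the hand until an empty or evictable slot is found.
    while (true) {
      const victim = this.keys[this.hand];
      if (victim === null) break;                // empty slot: use it
      if (!this.referenced.get(victim)) {        // second chance used up: evict
        this.values.delete(victim);
        this.referenced.delete(victim);
        break;
      }
      this.referenced.set(victim, false);        // consume the second chance
      this.hand = (this.hand + 1) % this.capacity;
    }
    this.keys[this.hand] = key;
    this.values.set(key, value);
    this.referenced.set(key, false);
    this.hand = (this.hand + 1) % this.capacity;
  }
}
```

Because a hit only flips a per-slot bit instead of reordering a linked list, this scheme keeps the hit path cheap and contention-free, which is what makes the approximation attractive for an asynchronous cache.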
