🐇 Fastest possible memoization library

In computing, memoization is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again. — Wikipedia

This library is an attempt to make the fastest possible memoization library in JavaScript that supports N arguments.
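
The core idea is small enough to sketch by hand. The following is a simplified illustration only, not this library's implementation (fast-memoize picks a faster strategy per function signature):

```javascript
// A minimal, hand-rolled memoizer for single-argument functions.
// It stores each result in a Map keyed by the argument, and replays
// the stored result on repeated calls.
function naiveMemoize (fn) {
  const cache = new Map()
  return function (arg) {
    if (!cache.has(arg)) cache.set(arg, fn(arg))
    return cache.get(arg)
  }
}

let calls = 0
const square = naiveMemoize(function (n) {
  calls += 1
  return n * n
})

square(4) // computed => 16
square(4) // cache hit => 16; the underlying function ran only once
```

fast-memoize generalizes this to any number of arguments while choosing the fastest cache and serializer for the environment.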


npm install fast-memoize --save


const memoize = require('fast-memoize')

const fn = function (one, two, three) { /* ... */ }

const memoized = memoize(fn)

memoized('foo', 3, 'bar')
memoized('foo', 3, 'bar') // Cache hit

Custom cache

By default, the fastest cache available for the running environment is used, but you can pass a custom cache instead.

const memoized = memoize(fn, {
  cache: {
    create() {
      var store = {};
      return {
        has(key) { return (key in store); },
        get(key) { return store[key]; },
        set(key, value) { store[key] = value; }
      };
    }
  }
})

The custom cache should be an object containing a create method that returns an object implementing the following methods:

  • get
  • set
  • has
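
Any object satisfying that contract works. For example, a sketch of a cache adapter backed by a native Map (assuming, per the contract above, that fast-memoize only needs has, get, and set on the object returned by create):

```javascript
// A custom cache whose create() returns a Map-backed store
// exposing the three required methods.
const mapCache = {
  create () {
    const store = new Map()
    return {
      has (key) { return store.has(key) },
      get (key) { return store.get(key) },
      set (key, value) { store.set(key, value) }
    }
  }
}

// Hypothetical usage:
// const memoized = memoize(fn, { cache: mapCache })
```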

Custom serializer

To use a custom serializer:

const memoized = memoize(fn, {
  serializer: customSerializer
})

The serializer is a function that receives one argument and outputs a string that uniquely represents it. It has to be deterministic: given the same input, it must always return the same output.
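
For example, a sketch of a serializer that sorts object keys recursively, so that argument objects with the same contents always produce the same key regardless of property insertion order (plain JSON.stringify, which the default serializer uses, is order-sensitive):

```javascript
// Recursively rebuild objects with sorted keys so that
// { a: 1, b: 2 } and { b: 2, a: 1 } serialize identically.
function sortKeys (value) {
  if (Array.isArray(value)) return value.map(sortKeys)
  if (value === null || typeof value !== 'object') return value
  return Object.keys(value).sort().reduce(function (acc, key) {
    acc[key] = sortKeys(value[key])
    return acc
  }, {})
}

const customSerializer = function (args) {
  return JSON.stringify(Array.from(args).map(sortKeys))
}

customSerializer([{ b: 2, a: 1 }]) // => '[{"a":1,"b":2}]'
```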


Benchmark

For an in-depth explanation of how this library was created, read this post on RisingStack.

Below is a performance benchmark against some of the most popular memoization libraries.

To run the benchmark, clone the repo, install the dependencies and run npm run benchmark.

git clone
cd fast-memoize
npm install
npm run benchmark

Against another git hash

To benchmark the current code against a specific git hash, branch, or tag:

npm run benchmark:compare 53fa9a62214e816cf8b5b4fa291c38f1d63677b9


Gotchas

Spread arguments

We check function.length to determine the expected number of arguments up front, in order to use the fastest strategy. But with rest parameters (spread arguments), function.length does not report the actual number of arguments the function can receive.

function multiply (multiplier, ...theArgs) {
  return theArgs.map(function (element) {
    return multiplier * element
  })
}

multiply.length // => 1
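
To make the problem concrete, here is a contrived sketch (not fast-memoize's internals) of a memoizer that keys only on the first function.length arguments, which is roughly what a non-variadic strategy would see:

```javascript
// A broken-by-design memoizer: it builds the cache key from only the
// first fn.length arguments, so rest arguments are ignored.
function memoizeFirstN (fn) {
  const cache = new Map()
  return function (...args) {
    const key = JSON.stringify(args.slice(0, fn.length)) // fn.length is 1 here
    if (!cache.has(key)) cache.set(key, fn.apply(this, args))
    return cache.get(key)
  }
}

function multiply (multiplier, ...theArgs) {
  return theArgs.map(element => multiplier * element)
}

const bad = memoizeFirstN(multiply)
bad(2, 1, 2, 3) // => [2, 4, 6], cached under the key "[2]"
bad(2, 9, 9, 9) // => [2, 4, 6] again — stale, because the rest arguments were ignored
```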

So if you use spread arguments, explicitly set the strategy to variadic.

const memoizedMultiply = memoize(multiply, {
  strategy: memoize.strategies.variadic
})

Function Arguments

The default serializer uses JSON.stringify, which serializes functions as null. This means that if you pass functions as arguments, the generated cache key will be the same whether you pass different functions or no function at all. To get around this, give each function a unique ID and use that when building the key.

let id = 0
function memoizedId (x) {
  if (!x.__memoizedId) x.__memoizedId = ++id
  return { __memoizedId: x.__memoizedId }
}

memoize((aFunction, foo) => {
  return aFunction.bind(foo)
}, {
  serializer: args => {
    const argumentsWithFuncIds = Array.from(args).map(x => {
      if (typeof x === 'function') return memoizedId(x)
      return x
    })
    return JSON.stringify(argumentsWithFuncIds)
  }
})

Credits  ·  GitHub @caiogondim  ·  Twitter @caio_gondim
