Commits on Mar 22, 2016
  1. v4.0.1

    committed Mar 22, 2016
  2. @Kikobeats

    Fix .length documentation

    Kikobeats committed Dec 21, 2015
Commits on Jan 24, 2016
Commits on Dec 21, 2015
  1. v4.0.0

    committed Dec 20, 2015
  2. @mbroadst

    doc: get() can return undefined

    committed Dec 20, 2015
    close #56
  3. set vim nowrap on inspect test

    committed Dec 20, 2015
  4. update travis

    committed Dec 20, 2015
  5. upgrade to newer tap version

    committed Dec 20, 2015
  6. Add custom inspect method

    committed Dec 20, 2015
    Using truly private vars behind Symbols makes it really annoying to
    work with an LRU cache in the repl or with console.log() and see
    what's going on.
  7. Use Symbols for private members

    committed Dec 20, 2015
    Fall back to _props if Symbol isn't available
Commits on Dec 20, 2015
  1. standard style

    committed Dec 20, 2015
  2. Use yallist linked list instead of map for lruList

    committed Dec 20, 2015
    This avoids creating an array just to be able to walk in reverse,
    which makes insertions take abusively long on full caches.
  3. serialize test: pad the aging times a bit

    committed Dec 20, 2015
    Prevents some spurious failures
  4. test with coverage

    committed Dec 20, 2015
  5. ignore nyc output

    committed Dec 20, 2015
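The yallist change above is the technically interesting one in this batch: trimming wants to visit entries from least- to most-recently-used, and a doubly linked list supports that walk directly, with O(1) insertion at the head and no temporary array. A minimal sketch of the idea (hypothetical code, not yallist itself):

```javascript
// Minimal doubly linked list showing why a list beats building an array:
// walk from the tail (least recently used) directly, and unshift new
// entries at the head (most recently used) in O(1).
class Node {
  constructor (value) {
    this.value = value
    this.prev = null
    this.next = null
  }
}

class List {
  constructor () {
    this.head = null
    this.tail = null
  }

  // O(1) insertion at the head.
  unshift (value) {
    const node = new Node(value)
    node.next = this.head
    if (this.head) this.head.prev = node
    this.head = node
    if (!this.tail) this.tail = node
    return node
  }

  // Walk from the tail toward the head -- the order trimming needs.
  * walkReverse () {
    for (let n = this.tail; n !== null; n = n.prev) {
      yield n.value
    }
  }
}

const list = new List()
list.unshift('a') // oldest
list.unshift('b')
list.unshift('c') // newest
console.log([...list.walkReverse()]) // [ 'a', 'b', 'c' ]
```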
Commits on Nov 28, 2015
  1. v3.2.0

    committed Nov 28, 2015
  2. Add cache.rforEach

    committed Nov 28, 2015
    Close #38
  3. v3.1.2

    committed Nov 27, 2015
  4. v3.1.1

    committed Nov 27, 2015
  5. Use Map's insertion order, fix counter overflow

    committed Nov 27, 2015
    There is no need to count up repeatedly from the `lru` to the `mru`
    value.  Because the lruList is implemented as a Map (or PseudoMap, which
    behaves the same on all supported platforms), we can very easily get a
    list of all of the keys in the lruList, and then iterate over those,
    bypassing any gaps in lru indexes.

    This means that the specific fact that the mru index is greater than the
    lru index is no longer important.  Since the numbers are now just more
    or less opaque tokens, this replaces the `this._mru++` approach with an
    incrementer that will increase the value by one, unless it's
    Number.MAX_SAFE_INTEGER, in which case it'll return to 0, and keep doing
    this as long as it collides with any lru value in use.

    As a side bonus, while doing this, I think I tracked down a CPU-spike
    "bug" that had perplexed me a few years back!  (In "scare quotes"
    because it was behaving properly, just in a way that was unpleasantly
    surprising.)

    If you have a cache where some items are used much more frequently than
    others (pretty much the ideal use case for an LRU like this one), and
    the size of the cache is large enough that some relatively rarely used
    items stick around at the bottom of the barrel, then the distance
    between the index of the item you just used and the oldest item in the
    cache could get very large.  As the top items get used again and again,
    the mru counter keeps going up, but the older items never get bumped up,
    and the lru value stays low.

    This presents a problem when a new item is added and the values have to
    be trimmed to get below the max size.  Previously, having no simple way
    to iterate by the numeric value of the keys, the only thing to do was to
    start at the lru value and increment up to either the mru value or the
    point where the total cache size fell below the maximum.  However, if
    there is a large hole in that index range, it can take a considerable
    amount of time to crawl up looking for values, doing bazillions of
    object lookups and spinning on the CPU, never responding to requests.

    Fix #50, and hopefully improve performance in at least a few cases that
    were otherwise very difficult to debug.
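The wrap-around incrementer described above can be sketched roughly as follows. This is a hypothetical standalone helper, not the module's exact code: bump the index by one, wrap to 0 at Number.MAX_SAFE_INTEGER, and keep going past any index still in use.

```javascript
// Treat indexes as opaque tokens: increment, wrap at the safe-integer
// limit, and skip any value that collides with an index still in use.
function nextIndex (current, inUse) {
  let next = current
  do {
    next = next === Number.MAX_SAFE_INTEGER ? 0 : next + 1
  } while (inUse.has(next))
  return next
}

const inUse = new Set([0, 1])
// Wraps past the colliding 0 and 1 to land on 2:
console.log(nextIndex(Number.MAX_SAFE_INTEGER, inUse)) // 2
console.log(nextIndex(5, new Set())) // 6
```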
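The cache.rforEach addition earlier in this list iterates in the reverse of forEach's order, i.e. from least- to most-recently-used. A sketch using a hypothetical standalone helper rather than the cache's real internals:

```javascript
// Visit entries from oldest to newest, the reverse of a forEach that
// runs from most- to least-recently-used (hypothetical helper).
function rforEach (entries, fn) {
  for (let i = entries.length - 1; i >= 0; i--) {
    fn(entries[i].value, entries[i].key)
  }
}

// Entries ordered most-recently-used first:
const entries = [
  { key: 'c', value: 3 },
  { key: 'b', value: 2 },
  { key: 'a', value: 1 }
]
const seen = []
rforEach(entries, (value, key) => seen.push(key))
console.log(seen) // [ 'a', 'b', 'c' ]
```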
Commits on Nov 27, 2015
  1. v3.1.0

    committed Nov 27, 2015
  2. Call lengthCalculator as (value, key)

    committed Nov 27, 2015
    Close #58
  3. v3.0.0

    committed Nov 27, 2015
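For the (value, key) calling-convention change above, here is a sketch of what the call site might look like. The helper and calculator names are illustrative, not the module's own:

```javascript
// The cache invokes a user-supplied length calculator as (value, key),
// so an entry's size can depend on both (hypothetical helper).
function entrySize (lengthCalculator, key, value) {
  return lengthCalculator(value, key)
}

// e.g. a calculator that charges for both the value and the key:
const byValueAndKey = (value, key) => value.length + key.length

console.log(entrySize(byValueAndKey, 'id', 'abcd')) // 6
```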