feat(cache): enable lru cache #158

Closed
SukkaW wants to merge 3 commits into hexojs:master from SukkaW:enable-lfu-cache

Conversation

@SukkaW (Member) commented Dec 16, 2019

I set up a counter by adding a few lines of code to node_modules/hexo in my local dummy site:

let num = 0;

// .....
num++;
console.log(num);

It turns out that even with as few as 17 posts in hexo-theme-unit-test, the full_url_for() helper is called 537 times and isExternalLink() is called 2599 times in a single generation!

So I am bringing up this PR. node-lfu-cache is enabled for these functions (a sketch of the wrapping pattern follows the list):

  • full_url_for()
  • gravatar()
  • is_external_link()
  • relative_url()
  • url_for()
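
The wrapping pattern is roughly as follows (a minimal sketch; isExternalLinkUncached and the cache size are illustrative stand-ins, not the actual hexo-util code):

const LFU = require('node-lfu-cache');
const cache = new LFU(100); // illustrative size limit

function isExternalLink(input) {
  // Return the memoized result when this input has been seen before
  if (cache.has(input)) return cache.get(input);
  const result = isExternalLinkUncached(input); // stand-in for the original logic
  cache.set(input, result);
  return result;
}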

@SukkaW requested a review from a team December 16, 2019 03:11
@coveralls commented Dec 16, 2019

Coverage Status

Coverage increased (+0.2%) to 96.631% when pulling b4e0d88 on SukkaW:enable-lfu-cache into 621c9e3 on hexojs:master.

@SukkaW (Member, Author) commented Dec 16, 2019

Benchmark result:

Node.js Version | Branch                                      | Cold Process Time | Memory
8               | Current master branch of Hexo               | 33.0s             | 590MB
8               | Hexo with sukkaw/hexo-util#enable-lfu-cache | 22.7s             | 603MB
10              | Current master branch of Hexo               | 25.2s             | 601MB
10              | Hexo with sukkaw/hexo-util#enable-lfu-cache | 19.9s             | 593MB
12              | Current master branch of Hexo               | 24.9s             | 573.4MB
12              | Hexo with sukkaw/hexo-util#enable-lfu-cache | 20.3s             | 692MB
13              | Current master branch of Hexo               | 27.3s             | 567MB
13              | Hexo with sukkaw/hexo-util#enable-lfu-cache | 24.2s             | 686MB

@SukkaW requested a review from curbengh December 16, 2019 12:43
@SukkaW (Member, Author) commented Dec 16, 2019

By updating the counter code to:

let num = 0;
const stat = new Set();

// ...

num++;
stat.add(input);
console.log(num, stat.size);

Here is the result of one generation of a hexo dummy site (hexo-theme-unit-test):

  • During the generation, 51 files were generated.
  • full_url_for() was called 537 times, but only 59 unique paths were processed (59 seems to be related to the permalink setting).
  • isExternalLink() was called 2599 times, but only 115 unique URLs were processed.

And for my own blog:

  • During the generation, 266 files were generated.
  • full_url_for() was called 4275 times, but only 457 unique paths were processed.
  • isExternalLink() was called 87890 times, but only 944 unique URLs were processed.

This shows why we definitely need a cache: if every unique value were cached, 87890 - 944 = 86946 of those isExternalLink() calls (about 98.9%) would be cache hits.

@dailyrandomphoto (Member) commented

@SukkaW
First, I agree with adding caching for these helpers.

I have some questions.

  1. Are you sure the module you want to use is 'node-lfu-cache', not 'lru-cache'?
  • node-lfu-cache was ported from lru-cache, and its latest version is still its first and only release to date. (history)
  • I found a fatal bug in node-lfu-cache:
const LFU = require('node-lfu-cache');
const cache = new LFU(2);

cache.set(1, 1);
cache.set(2, 2);
cache.get(1); // returns 1
cache.set(3, 3); // evicts key 2
cache.get(2); // returns undefined (not found)
cache.get(3); // returns 3.
cache.set(4, 4); // evicts key 1.
cache.get(1); // returns undefined (not found)
cache.get(3); // returns 3
cache.get(4); // returns 4

The expected results are as in the comments above (see https://leetcode.com/problems/lfu-cache/).
BUT this module returns (for the last 3 gets):

cache.get(1); // returns 1
cache.get(3); // returns 3
cache.get(4); // returns undefined (not found)

This means that when the cache is full, no more items can be added to it.
Full test code and results.

  2. How about storing the cache in a plain object ({})?
    The return values of the helpers are short strings, so I don't think it would use much memory.
    I worry that using a small LFU or LRU cache will reduce the cache hit rate, especially on a huge site,
    because the usage pattern of the cache in these helpers is similar to the following (see the plain-object sketch after the pattern):
cache.set(1, v);
cache.get(1);
cache.set(2, v);
cache.get(2);
...
cache.set(n, v);
cache.get(n);
// cache evicts key 1, 2 ...
...
cache.set(1, v);
cache.get(1);
cache.set(2, v);
cache.get(2);
...
cache.set(n, v);
cache.get(n);
// cache evicts key 1, 2 ...
...
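
A minimal sketch of that plain-object alternative (illustrative names, not actual hexo-util code):

const cache = {};

function cachedHelper(input, compute) {
  // Plain-object cache: nothing is ever evicted, so every repeated
  // call after the first one is guaranteed to be a hit
  if (Object.prototype.hasOwnProperty.call(cache, input)) return cache[input];
  return (cache[input] = compute(input));
}

The trade-off is that the cache grows without bound, though as noted above the cached values are only short strings.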

@SukkaW (Member, Author) commented Dec 16, 2019

@dailyrandomphoto

  1. LFU means the more frequently an entry has been used, the more likely it is to stay in the cache. LRU means the more recently an entry has been used, the more likely it is to stay in the cache.

We all want to utilize the cache better and not fill it up with key-value pairs that are only used once, right? LRU should only be used when a cache is accessed continuously, not merely frequently.

Also, the case you gave is exactly the expected LFU behavior: LFU removes the oldest entry when the size limit is exceeded.
As you can see, I always use if (cache.has(cacheId)) return cache.get(cacheId); to make sure a cached value is returned only when it exists. The stale option provided by node-lfu-cache controls maxAge expiration; it does not deal with the size limit.

  2. LFU is specifically designed to avoid storing too many once-used values in the cache, which is exactly what we need here.

I have provided a stat in my last comment:

isExternalLink() has been called for 87890 times, but only 944 unique url is processed.

Although 944 unique URLs are processed by isExternalLink(), how many of them are used only once, and how many are used across different pages? LFU is the best option for increasing the cache hit rate while retaining low memory usage.

@dailyrandomphoto (Member) commented

OK, I understand your point.
I'm talking about the node-lfu-cache module not working, not the LFU cache data structure.
The node-lfu-cache module has released just one version, with only a few commits (history: Feb 28, 2018 ~ Mar 1, 2018).

LFU removes the oldest entry when the size limit is exceeded.

This is not working. See my test code above, or you can run this test.

I have tested using your branch of hexo-util and confirmed it.

@SukkaW (Member, Author) commented Dec 17, 2019

@dailyrandomphoto Maybe not the oldest entry, but the coldest one that was added to the cache earliest.

I have set up a PoC of the LFU cache:

const LFU = require('node-lfu-cache');
const cache = new LFU(50);

const random = (min, max) => Math.round(Math.random() * (max - min)) + min;

const cacheFunc = (key, value) => {
  if (cache.has(key)) return cache.get(key);
  cache.set(key, value);
  return value;
}

// Will be called only once
cacheFunc('cold', '123');
// Will be called for 10 times
for (let i = 1; i <= 10; i++) {
  cacheFunc('hot', '456');
}

// Let the cache size limit be exceeded
for (let i = 1; i <= 60; i++) {
  cacheFunc(String(100 + i), random(100, 900));
}

console.log('Cold: ' + cacheFunc('cold', '789'));
// Should be 789 not 123, because the cold cache will be removed from the cache.
console.log('Hot: ' + cacheFunc('hot', '789'));
// Should be 456 not 789, because it is hot and remains in the cache.

// > "Cold: 789"
// > "Hot: 456"

https://runkit.com/sukkaw/5df83a87187f4b001b9a18e9

@dailyrandomphoto (Member) commented

This is a good example.
In real situations, cached values are saved and read many times.
I modified your code a little; then you will find that when the cache is full, no more items can be added to it.

const LFU = require('node-lfu-cache');
const cache = new LFU(50);

const random = (min, max) => Math.round(Math.random() * (max - min)) + min;

const cacheFunc = (key, value) => {
  if (cache.has(key)) return cache.get(key);
  cache.set(key, value);
  return value;
}

// Will be called only once
cacheFunc('cold', '123');
// Will be called for 10 times
for (let i = 1; i <= 10; i++) {
  cacheFunc('hot', '456');
}

// Let the cache size limit be exceeded
for (let i = 1; i <= 60; i++) {
  // save to cache
  cacheFunc(String(100 + i), random(100, 900));
  // read from cache
  cacheFunc(String(100 + i), random(100, 900));
}

cacheFunc('new', '123');

console.log('Cold: ' + cacheFunc('cold', '789'));
// Should be 789 not 123, because the cold cache will be removed from the cache.
console.log('Hot: ' + cacheFunc('hot', '789'));
// Should be 456 not 789, because it is hot and remains in the cache.
console.log('New: ' + cacheFunc('new', '789'));
// Should be 123 not 789, because it is a new item.

// You will find that, when the cache is full, no more items can be added to the cache.
console.log(cache.dump());

https://runkit.com/dailyrandomphoto/5df84e2da91f66001d9620c5

console.log('New: ' + cacheFunc('new', '789'));
// Should be 123 not 789, because it is a new item.

BUT it actually shows 789.

@seaoak (Member) commented Dec 17, 2019

I tested this patch with a large blog (which has 3000 dummy posts).

The results show:

  1. This patch is about 3% slower than master.
  2. Using a plain JavaScript object as a cache is about 10% faster than master.

Could you consider using a plain object?


Details:

I have a dummy blog which has 3000 posts.
(I made a dummy post generator: https://github.com/seaoak/ipsum-blog-generator)

I ran hexo generate and measured the elapsed time.
For each condition, I measured 10 times and calculated the average.
CPU affinity 0x00010000 and priority HIGH were set for the hexo command (using the START command).
Memory usage was not measured.
Before each measurement, I ran hexo clean and rm debug.log.
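
Roughly, each elapsed-time measurement could be scripted like this (a sketch only, not seaoak's actual harness; the CPU affinity and priority are assumed to be set externally via START):

const { execSync } = require('child_process');

const times = [];
for (let i = 0; i < 10; i++) {
  execSync('hexo clean'); // reset generated files before each run
  const start = Date.now();
  execSync('hexo generate');
  times.push((Date.now() - start) / 1000);
}
console.log(`average: ${(times.reduce((a, b) => a + b, 0) / times.length).toFixed(2)} sec`);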

Also, following @dailyrandomphoto's comment #158 (comment),
I examined using a plain JavaScript object as a cache:
https://github.com/seaoak/hexo-util/tree/trial/enable-lfu-cache

Result:

Elapsed time:

Branch | Average           | Spread (Worst - Best)
master | 73.65sec          | 0.64sec
SukkaW | 76.36sec (+3.7%)  | 0.67sec
seaoak | 66.18sec (-10.1%) | 0.56sec

Counter:

Function         | calling count | unique argument count
full_url_for     | 13249         | 7249 (54.7%)
gravatar         | 0             | 0
is_external_link | 1118521       | 60417 (5.4%)
relative_url     | 0             | 0
url_for          | 72346         | 4267 (5.8%)

Environment:

  • OS: Windows 10 64bit
  • CPU: Ryzen Threadripper 2950X (16core/32thread, 3.5GHz)
  • MEM: 32GB (8GB x4) DDR4 (3600MHz)
  • SSD: Samsung EVO970 (1TB)
  • Node v12.12.0

@SukkaW (Member, Author) commented Dec 17, 2019

@seaoak What about memory usage? I am still wondering whether we need an LFU cache.

@SukkaW (Member, Author) commented Dec 17, 2019

@seaoak The performance regression in your benchmark should be caused by the node-lfu-cache issue.

@SukkaW (Member, Author) commented Dec 17, 2019

@dailyrandomphoto @seaoak I have changed LFU to LRU; please feel free to run a benchmark to see if any performance is gained.
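
The swap is mechanical. A minimal sketch, assuming the lru-cache package's constructor-accepting-a-max API of that era (the wrapper is the same illustrative pattern as in the PoCs above):

const LRU = require('lru-cache');
const cache = new LRU(50); // max 50 entries; the least recently used is evicted first

const cacheFunc = (key, value) => {
  if (cache.has(key)) return cache.get(key);
  cache.set(key, value);
  return value;
};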

@SukkaW changed the title from "feat(cache): enable lfu cache" to "feat(cache): enable lru cache" Dec 17, 2019
@SukkaW (Member, Author) commented Dec 18, 2019

Superseded by #162.

@SukkaW closed this Dec 18, 2019
@seaoak (Member) commented Dec 19, 2019

I'm sorry I'm late.
I was trying to find a way to measure memory usage.

I found a tool, memwatch:
https://github.com/marcominetti/node-memwatch
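
A sketch of how the maximum heap size could be tracked with it (hedged: the package name and the stats fields follow the node-memwatch README; I am assuming the marcominetti fork is the one published as memwatch-next):

const memwatch = require('memwatch-next'); // assumed npm name of the fork

let maxHeap = 0;

// 'stats' fires after each full GC; current_base is the post-GC heap size in bytes
memwatch.on('stats', stats => {
  maxHeap = Math.max(maxHeap, stats.current_base);
});

process.on('exit', () => {
  console.log(`max heap: ${(maxHeap / 1048576).toFixed(1)} MB`);
});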

The results show that using a cache has little effect on memory usage.


Details:

For each condition, I measured 10 times and calculated the average.
The source code is the same as in my comment above.

Condition changes:

  1. The Node version was changed from v12.12.0 to v10.18.0
    (memwatch could not be installed on v12).

  2. CPU affinity was changed from 0x00010000 to 0x55550000
    (to avoid a strange fatal error in the Node GC).

Result:

Branch | Average of maximum heap size | Spread (Worst - Best)
master | 1079.9 MB                    | 161.4 MB
SukkaW | 1078.0 MB                    | 182.3 MB
seaoak | 1090.6 MB                    | 261.6 MB

(Edited: the previous data for "seaoak" was mistaken.)

@curbengh (Contributor) commented

The results show that using a cache has little effect on memory usage.

I'm guessing the cache only stores a pointer to the value, rather than the actual value.
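
A quick illustration of that guess: JavaScript strings are immutable and held by reference, so a cache entry stores a reference rather than a copy (illustrative only):

const bigString = 'https://example.com/a/long/path'.repeat(10000);
const cache = new Map();

// Both entries reference the same underlying string; no copy is made,
// so the cache adds only the overhead of its keys and entry records
cache.set('a', bigString);
cache.set('b', bigString);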
