Use LRU cache in `useragent.lookup` so rarely used user agents will not stay in memory forever. #22

Closed
tellnes opened this Issue Jan 10, 2013 · 6 comments


tellnes commented Jan 10, 2013

Great library.

You can easily replace the dictionary hash used by `useragent.lookup` with e.g. lru-cache to resolve this. It would be great if the lru-cache options were exposed as configuration in useragent.
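
Something like the following, as a sketch of the suggestion (assuming lru-cache's usual `get`/`set` API; the `max` value is arbitrary and the `lookup` wrapper is illustrative, not the module's actual code):

```js
var LRU = require('lru-cache');
var useragent = require('useragent');

// Bounded cache: once full, the least recently used
// user agent string gets evicted instead of living forever.
var cache = LRU({ max: 10000 });

function lookup(ua) {
  var agent = cache.get(ua);
  if (!agent) {
    agent = useragent.parse(ua);
    cache.set(ua, agent);
  }
  return agent;
}
```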

Owner

3rd-Eden commented Jan 10, 2013

Ha,

The same request was made against my hashring lib in 3rd-Eden/node-hashring#8 today. I'll see what I can do. What would be a reasonable cache limit? 10K (as these strings can differ a lot)?


tellnes commented Jan 10, 2013

I do not know; you should probably run some tests. The important thing is that the limit is not infinite.

Owner

3rd-Eden commented Jan 14, 2013

I have been thinking about it, and I think an LFU might be more suitable for this due to the wide range of different user agent strings that could invalidate the cache if the limit is set too low. It comes with a somewhat higher performance penalty than an LRU, but it might be worth it.
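
For comparison, a minimal LFU sketch (purely illustrative, not the library's implementation): keep a hit counter per entry and evict the least frequently used one when the cache is full.

```js
function LFU(max) {
  this.max = max;
  this.store = Object.create(null); // key -> { value, hits }
}

LFU.prototype.get = function (key) {
  var entry = this.store[key];
  if (!entry) return undefined;
  entry.hits++;
  return entry.value;
};

LFU.prototype.set = function (key, value) {
  var store = this.store;
  var keys = Object.keys(store);
  if (!store[key] && keys.length >= this.max) {
    // Linear scan for the coldest key; fine for a sketch,
    // a real LFU would maintain a frequency structure instead.
    var coldest = keys.reduce(function (a, b) {
      return store[a].hits <= store[b].hits ? a : b;
    });
    delete store[coldest];
  }
  store[key] = { value: value, hits: 0 };
};
```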

This would be a very good feature. I would use it :)

Owner

3rd-Eden commented Feb 24, 2013

LRU caching greatly reduces the performance of the lookup:

```
Executed benchmark against node module: "useragent.lookup"
Count (11266), Cycles (3), Elapsed (5.405), Hz (221424.00861640708)
```

vs. with LRU:

```
Executed benchmark against node module: "useragent.lookup"
Count (1064), Cycles (2), Elapsed (5.421), Hz (16679.68311019655)
```

Which is quite a big overhead, but it's still faster than a regular parse.

@3rd-Eden 3rd-Eden closed this in f37e46c Feb 24, 2013

Owner

3rd-Eden commented Feb 24, 2013

Fixed in 2.0.2. I'm assuming that people would rather have memory-leak prevention than increased performance.
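
For reference, a short usage sketch of the cached lookup discussed in this thread (the sample user agent string is arbitrary):

```js
var useragent = require('useragent');

// lookup() serves repeated user agent strings from the
// bounded cache instead of re-parsing them, so rarely
// seen strings no longer accumulate in memory forever.
var agent = useragent.lookup('Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36');
console.log(agent.toString());
```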
