
fix(cache): replace in-memory table with an lru-cache to limit memory consumption #2246

Merged: 1 commit into master, Mar 24, 2017

Conversation

@Tieske (Member) commented Mar 23, 2017

As the intent is to replace the caching altogether, I did not make its size configurable.

See the code comments for some rough calculations. Configurability can be added if required; feedback welcome.

@p0pr0ck5 (Contributor)

How hard would it be to replace our own TTL implementation with lua-resty-lrucache's TTL mechanism?

@@ -2,12 +2,19 @@ local utils = require "kong.tools.utils"
local resty_lock = require "resty.lock"
local json_encode = require("cjson.safe").encode
local json_decode = require("cjson.safe").decode
local lrucache = require "resty.lrucache"
@p0pr0ck5 (Contributor) commented on the diff:

What do we expect the average cache hit ratio to look like? If we expect a lot of cache misses and purges, should we consider resty.lrucache.pureffi (or making it configurable)?
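For reference, a minimal sketch of what selecting the pureffi variant could look like; the expect_high_churn toggle is purely illustrative and not part of this PR:

```lua
-- Illustrative only: resty.lrucache.pureffi is documented as the better
-- fit when there are frequent deletions (misses/purges), while the default
-- resty.lrucache favors read-mostly workloads.
local expect_high_churn = false  -- hypothetical toggle
local lrucache = require(expect_high_churn and "resty.lrucache.pureffi"
                                            or "resty.lrucache")
local cache = lrucache.new(1000)  -- illustrative item count
```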

@Tieske (Member, Author) replied:

Depends on customer use. I'd expect it to be relatively high, based on "sessions": you'll most likely get multiple requests for a specific user, and during that session the APIs, consumers, credentials, etc. will be the same. Only when that session is done does the next one come up.

But, as said, this is very speculative.

@Tieske (Member, Author) commented Mar 23, 2017

How hard would it be to replace our own TTL implementation with lua-resty-lrucache's TTL mechanism?

Impossible without modifying the lru library. We need to share the TTL between the LRU cache and the shm cache, but when we fetch an existing entry from the shm we cannot retrieve its remaining TTL. Hence we must insert our own ttl field.
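For illustration, a rough sketch of the approach described above (not the actual Kong implementation; the shared dict name, sizes, and helper names are assumptions):

```lua
-- Keep an explicit expires_at field next to the value, so the remaining
-- TTL can be recomputed when an entry found in the shm is promoted into
-- the worker-local LRU cache.
local lrucache    = require "resty.lrucache"
local json_encode = require("cjson.safe").encode
local json_decode = require("cjson.safe").decode

local lru = lrucache.new(100000)   -- illustrative item count
local shm = ngx.shared.cache       -- assumed shared dict name

local function cache_set(key, value, ttl)
  local entry = { value = value, expires_at = ngx.now() + ttl }
  shm:set(key, json_encode(entry), ttl)
  lru:set(key, entry, ttl)
end

local function cache_get(key)
  local entry = lru:get(key)
  if not entry then
    local raw = shm:get(key)
    if not raw then return nil end
    entry = json_decode(raw)
    -- the shm cannot tell us the remaining TTL, but our own field can
    local remaining = entry.expires_at - ngx.now()
    if remaining <= 0 then return nil end
    lru:set(key, entry, remaining)
  end
  return entry.value
end
```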

@subnetmarco (Member)

The size should still be configurable. We currently support mem_cache_size in the configuration file, so it should be configurable from there (or another property)?

@thibaultcha (Member)

How about using the same value as mem_cache_size? There should, however, be an upper limit, as we know the LuaJIT VM memory limit is quite low; but as long as we are in a 1...500 MB range, I think it's reasonable for the shm and the Lua-land LRU cache to be the same size. Thoughts?

@p0pr0ck5 (Contributor) commented Mar 23, 2017

The value of mem_cache_size doesn't quite make sense, because lrucache wants a number of items, not a size limit, so the limits of the shdict and the lrucache will not be the same. And given that the memory usage of the lrucache will vary based on entity size, which isn't something that can easily be documented (remember, people want hard numbers, which isn't something we can give here), I think making this configurable will only confuse users. I'd vote to keep it small (maybe a few thousand elements) and leave it alone: it's a transparent cache tier, and doesn't need to be exposed to users at such a granular level.

@subnetmarco (Member)

Also, mem_cache_size was being used once in the master process, while the LRU cache has a size limit for each worker. Which means that the actual total memory used would be:

(mem_cache_size * num_workers) + master_shd_size
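As a purely illustrative example (hypothetical numbers, not defaults from this PR): with mem_cache_size = 128m and four nginx workers, that would be 4 × 128 MB of per-worker Lua-land LRU caches on top of the shared dict itself, so well over 512 MB in total.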

@thibaultcha (Member)

The same way we compute the number of items in the current patch with a hard-coded estimated item size, we could just as well do that by replacing this PR's MEM_SIZE with mem_cache_size.

In any case, I'm even more in favor of simply hard-coding this value.
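For illustration, a minimal sketch of how that derivation could look; the MEM_SIZE constant and the 1024-byte average item estimate come from this PR's discussion, while wiring it to mem_cache_size (as suggested above) is hypothetical:

```lua
-- Sketch only, not the PR's actual code: derive the LRU slot count
-- from a memory budget and a rough per-item size estimate.
local lrucache = require "resty.lrucache"

local MEM_SIZE  = 128 * 1024 * 1024  -- could be fed from mem_cache_size (bytes)
local ITEM_SIZE = 1024               -- estimated average size of a cached item

local cache, err = lrucache.new(math.floor(MEM_SIZE / ITEM_SIZE))
if not cache then
  error("failed to create the LRU cache: " .. (err or "unknown"))
end
```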

@Tieske (Member, Author) commented Mar 24, 2017

There should, however, be an upper limit, as we know the LuaJIT VM memory limit is quite low; but as long as we are in a 1...500 MB range

That's exactly why I picked the current size. The item size of 1024 bytes is probably a lot bigger than what we actually use. The growth we suspect is mostly driven by db-cache misses (standard data does not grow: at its maximum it consists of the entire datastore contents, and never more).
The db-miss sentinels are small, so my guess is that the maximum memory consumption per worker will be less than 250 MB anyway.
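(For a rough sense of scale, with illustrative numbers only: at the estimated 1024 bytes per item, a 250 MB per-worker budget corresponds to roughly 250 * 1024 * 1024 / 1024 ≈ 256,000 LRU entries; the PR's actual constant may differ.)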

In any case, I'm even more in favor of simply hard-coding this value.

So am I.

@thibaultcha added the pr/ready (but hold merge) label ("No more concerns, but do not merge yet; probably a conflict of interest with another PR or release") on Mar 24, 2017
@Tieske merged commit 2fe0137 into master on Mar 24, 2017
@Tieske deleted the fix/lrucache branch on March 24, 2017 at 21:22
Labels: pr/ready (but hold merge)
4 participants