Some questions on cachelib allocator, pool and pool size #26
This will throw if there are bad configs (you will see `std::invalid_argument`). The cache is created if the constructor didn't throw. Typically we recommend you create the cache as a `std::unique_ptr`.
Can you clarify this question more? What overheads are you referring to?
We currently do not support this. Because we share the index across all memory pools, the keys must be unique. On insertion, we could in theory accept two keys (prefix + actual key) and simply memcpy both into the item memory. On lookup, however, we have to concatenate, because the full key must be (prefix + actual key). And I suspect the lookup is the most expensive part here (which we don't have a good way to solve). Have you measured how much perf overhead this is? (If keys are small, <15 bytes, this shouldn't incur a heap allocation when using std::string.) If the overhead is too much, you should consider using multiple CacheLib instances instead of cache pools.
Thank you for the quick reply!
I think we can't directly add pools in the following way, but I am not sure if it is good practice. How do you set the pool sizes when multiple pools are needed?
We haven't done the perf test yet but will do it soon. Does using multiple cachelib instances impact perf or have other disadvantages compared to using multiple cache pools in one CacheLib instance?
There are some experimental cachelib features that might be very useful for us in the future, such as "Automatic pool resizing" and "Memory Monitor". I believe we won't be able to leverage them well if we have multiple cachelib instances in a process. A typical key size we encounter is ~32 bytes.
I'd recommend not spinning up multiple instances of cachelib and instead using pools to partition the available memory. The motivation for this is in line with what @sjoshi6 brought up. Copying 32 bytes might not be a significant CPU impact compared to leaving fragmented memory that is harder to manage across multiple instances. If you'd like to avoid std::string (since it uses heap allocation past 20 bytes and incurs a malloc + copy), you can still allocate memory on the stack, copy the contents, and wrap the stack memory into an Item::Key to call the find APIs, as long as the calls happen within the same call stack. Lots of performance-critical applications use this trick.
@sathyaphoenix that's a great suggestion, thank you! I have another question on persistent cache. If we want to enable persistent cache, we have to set something as follows.
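The setup being referred to presumably looks something like the sketch below, assuming CacheLib's `enableCachePersistence` config method and the shared-memory attach/new constructor tags; treat the exact calls as illustrative rather than authoritative.

```cpp
// Sketch (assumed API): point persistence at a local directory, then try to
// attach to a previous instance before creating a fresh one.
facebook::cachelib::LruAllocator::Config config;
config.enableCachePersistence("/path/to/cache/dir");
```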
Does this local path store metadata only, or the whole old cache instance? If it only stores metadata, could I expect the metadata to be very small?
It only stores some metadata, which should be less than a KB. This can live on any file system and is not performance-critical. All the data and any heap metadata are persisted either through shared memory or on-device. The metadata stored in files under the cache directory is just the limited information needed to recover all the other pieces.
@tangliisu I'll close this ticket since the original questions are answered. Please feel free to open a new one if you have any additional questions :)
```cpp
facebook::cachelib::LruAllocator cache(config);
```
Does this throw any errors/exceptions? How do we know the cache is created? Do we only need to check `cache == nullptr`?

```cpp
cache->addPool(name1, cache_->getCacheMemoryStats().cacheSize * 30 / (30 + 15));
cache->addPool(name2, cache_->getCacheMemoryStats().cacheSize * 15 / (30 + 15));
```
Is there any easier way to accommodate the overhead?