[POC] Initial support for NVM cache in LRUCache #8113
anand1976 merged 8 commits into facebook:nvm_cache_proto
Conversation
Summary: Only synchronous lookup is currently supported.
| return s;
| }
For EraseUnRefEntries() and Erase, do we need to also consider adding the entry to NVM cache?
Or if we call Erase(), we remove the entry from both block cache and NVM cache?
I think Erase is mostly called for entries that are no longer valid (when a file is deleted for example), so we need to also erase from the NVM cache. I'll defer the implementation to a follow-on PR.
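To make the intended follow-on behavior concrete, here is a minimal sketch of erasing an entry from both tiers. The type and member names (`TwoTierCache`, `volatile_tier`, `nvm_tier`) are hypothetical, not RocksDB's API:

```cpp
// Hypothetical sketch: Erase() drops an invalidated entry from both
// the volatile block cache and the NVM tier. Names are illustrative.
#include <cassert>
#include <string>
#include <unordered_map>

struct TwoTierCache {
  std::unordered_map<std::string, std::string> volatile_tier;
  std::unordered_map<std::string, std::string> nvm_tier;

  // Erase is mostly called for entries that are no longer valid
  // (e.g. the backing file was deleted), so both tiers must drop it.
  void Erase(const std::string& key) {
    volatile_tier.erase(key);
    nvm_tier.erase(key);
  }
};
```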
anand1976 left a comment
Thanks for the review @zhichao-cao
Thanks for addressing the previous comments. Also, the PR description may need to be modified to reflect the tiered cache design.
| // ready, and call Wait() in order to block until it becomes ready.
| // The caller must call value() after it becomes ready to determine if the
| // handle successfully read the item.
| class TieredCacheHandle {
So in the regular case, does the user need to have a loop to check isReady() and, if not ready, sleep the thread?
Not necessarily a loop, but yes, check isReady() and then call Wait().
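The check-then-block protocol can be sketched as below. `TieredCacheHandle` is the class name from the quoted code; this mock implementation (a synchronous `Wait()` that marks the handle ready) is purely illustrative:

```cpp
// Illustrative mock of the handle-readiness protocol: no polling loop,
// just a single isReady() check followed by a blocking Wait().
#include <cassert>

class TieredCacheHandle {
 public:
  bool isReady() const { return ready_; }
  // In a real implementation this would block until the NVM read
  // completes; here it just flips the flag.
  void Wait() { ready_ = true; }
  // value() is only meaningful after the handle is ready.
  void* value() { return ready_ ? static_cast<void*>(&item_) : nullptr; }

 private:
  bool ready_ = false;
  int item_ = 42;  // stand-in for the cached object
};

void* GetValue(TieredCacheHandle& h) {
  if (!h.isReady()) {
    h.Wait();  // block until the lookup completes
  }
  return h.value();  // check for success only after it becomes ready
}
```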
| virtual std::string Name() = 0;
|
| // Insert the given value into the NVM cache. The value is not written
The comments may need to be updated accordingly to "tiered cache" (instead of "NVM cache") in multiple places.
Thanks for addressing the comments, the PR looks good to me.
| }
|
| // If handle table lookup failed, then allocate a handle outside the
| // mutex if we're going to lookup in the NVM cache
NVM cache -> tiered cache
| enum class Priority { HIGH, LOW };
|
| // A set of callbacks to allow objects in the volatile block cache to be
| // persisted in a NVM cache tier. Since the volatile cache holds C++
NVM cache tier -> tiered cache (e.g., NVM cache tier)
| // object itself.
| //
| // The SizeCallback takes a void* pointer to the object and returns the size
| // of the persistable data. It can be used by the NVM cache to allocate
Also replace "NVM cache" -> "tiered cache" in multiple places here.
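For readers following along, here is a minimal sketch of the callback style the quoted comment describes: a `SizeCallback` takes a `void*` to the cached object and returns the size of its persistable data, which the cache tier can use to size an allocation before a save callback serializes into it. All type names and signatures here are assumptions for illustration, not RocksDB's actual declarations:

```cpp
// Illustrative callbacks for persisting a cached C++ object.
// SizeCallback sizes the buffer; SaveCallback serializes into it.
#include <cassert>
#include <cstring>
#include <functional>
#include <string>
#include <vector>

using SizeCallback = std::function<size_t(void* obj)>;
using SaveCallback = std::function<void(void* obj, char* out)>;

// Toy cached object whose persistable form is its string payload.
struct BlockContents {
  std::string data;
};

// The cache tier allocates using SizeCallback, then saves the bytes.
std::vector<char> Persist(void* obj, const SizeCallback& size_cb,
                          const SaveCallback& save_cb) {
  std::vector<char> buf(size_cb(obj));
  save_cb(obj, buf.data());
  return buf;
}
```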
| // function.
| virtual Handle* Lookup(const Slice& key, Statistics* stats = nullptr) = 0;
|
| // Lookup the key in the volatile and NVM tiers (if one is configured).
NVM tiers -> tiered cache layers
Defined the abstract interface for an NVM/persistent cache in include/rocksdb/nvm_cache.h, and updated LRUCacheOptions to take a std::shared_ptr<NvmCache>. An item is initially inserted into the LRU cache. When it ages out and is evicted from memory, it's inserted into the NVM cache. On an LRU cache miss and a successful lookup in NVM, the item is promoted to the in-memory cache. Only synchronous lookup is currently supported.
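The insert / evict-to-NVM / promote-on-miss flow in the description can be sketched as a toy two-tier LRU. The class and member names below are hypothetical and the containers deliberately simple; this only illustrates the data flow, not RocksDB's implementation:

```cpp
// Toy two-tier cache: evictions from the volatile LRU spill into the
// NVM tier; a volatile miss that hits NVM promotes the item back.
#include <cassert>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

class TieredLRU {
 public:
  explicit TieredLRU(size_t capacity) : capacity_(capacity) {}

  void Insert(const std::string& k, int v) {
    lru_.push_front({k, v});
    index_[k] = lru_.begin();
    if (lru_.size() > capacity_) {         // evict the LRU entry...
      auto& victim = lru_.back();
      nvm_[victim.first] = victim.second;  // ...into the NVM tier
      index_.erase(victim.first);
      lru_.pop_back();
    }
  }

  // Returns true on a hit in either tier. An NVM hit promotes the
  // item back into the volatile (in-memory) tier.
  bool Lookup(const std::string& k, int* out) {
    auto it = index_.find(k);
    if (it != index_.end()) {
      *out = it->second->second;  // volatile-tier hit
      return true;
    }
    auto nit = nvm_.find(k);
    if (nit == nvm_.end()) return false;  // miss in both tiers
    int v = nit->second;
    nvm_.erase(nit);
    Insert(k, v);  // promote to the volatile tier
    *out = v;
    return true;
  }

  bool InNvm(const std::string& k) const { return nvm_.count(k) > 0; }

 private:
  size_t capacity_;
  std::list<std::pair<std::string, int>> lru_;
  std::unordered_map<std::string,
                     std::list<std::pair<std::string, int>>::iterator>
      index_;
  std::unordered_map<std::string, int> nvm_;
};
```

Note the promotion path reuses `Insert`, so promoting one item may in turn spill the current LRU victim to the NVM tier, mirroring the aging-out behavior described above.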