Use new Insert and Lookup APIs in table reader to support tiered cache #8191
base: nvm_cache_proto
Conversation
@@ -1182,11 +1187,61 @@ Status BlockBasedTable::GetDataBlockFromCache(
  Status s;
  BlockContents* compressed_block = nullptr;
  Cache::Handle* block_cache_compressed_handle = nullptr;
  Cache::CreateCallback create_cb;
  Cache::CacheItemHelperCallback helper_cb;
  if (std::is_same<TBlocklike, BlockContents>::value) {
An alternative approach could be to add `CacheItemHelperCallback` to `BlocklikeTraits`, and have a common templatized implementation of `CreateCallback` that calls `BlocklikeTraits<TBlocklike>::Create`. It seems cleaner to have all cache-related functionality for a block type in one place like `BlocklikeTraits`. What do you think?
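A minimal sketch of this suggestion, for illustration only (the parameter list and the exact `Create` signature are assumptions, not the code that finally landed):

```cpp
// Sketch only: assumes RocksDB-internal types (Cache, Status, Statistics,
// BlockContents, FilterPolicy, BlocklikeTraits) are in scope, as they are
// in block_based_table_reader.cc.
template <typename TBlocklike>
Cache::CreateCallback GetCreateCallback(size_t read_amp_bytes_per_bit,
                                        Statistics* statistics,
                                        bool using_zstd,
                                        const FilterPolicy* filter_policy) {
  return [=](const void* buf, size_t size, void** out_obj,
             size_t* charge) -> Status {
    assert(buf != nullptr);
    // Copy the raw bytes handed back by the (secondary) cache into a
    // BlockContents, then let the per-block-type trait build the object.
    std::unique_ptr<char[]> block_data(new char[size]);
    memcpy(block_data.get(), buf, size);
    BlockContents contents(std::move(block_data), size);
    TBlocklike* block = BlocklikeTraits<TBlocklike>::Create(
        std::move(contents), read_amp_bytes_per_bit, statistics, using_zstd,
        filter_policy);
    *out_obj = reinterpret_cast<void*>(block);
    *charge = size;
    return Status::OK();
  };
}
```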
Sure, it would be easier to extend in the future. Otherwise, people may forget to add the `std::is_same<TBlocklike, …>::value` branch in both places, or in even more places in the future.
Thanks for addressing the comments. I had one more comment, but overall LGTM
  Statistics* statistics = rep_->ioptions.statistics;
  block_cache_compressed->Lookup(compressed_block_cache_key, helper_cb,
                                 create_cb, Cache::Priority::LOW, true);
Should priority be HIGH or LOW based on the block type?
@anand1976 Thanks for the review and comments. I will fix the test failure and address the comment on priority. One more question: should I move the functions in tiered_cache_helper.h directly inside BlocklikeTraits::GetCacheItemHelperCallback(), or just leave them in the header file?
@zhichao-cao Maybe moving the functions to …
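On the priority question above, a sketch of what per-block-type priority selection could look like. `GetCachePriority` is a hypothetical helper; `cache_index_and_filter_blocks_with_high_priority` is an existing `BlockBasedTableOptions` flag, the rest is illustrative:

```cpp
// Sketch only: assumes RocksDB's BlockType enum and BlockBasedTableOptions
// are in scope; GetCachePriority itself is a hypothetical helper.
Cache::Priority GetCachePriority(BlockType block_type,
                                 const BlockBasedTableOptions& table_opts) {
  switch (block_type) {
    case BlockType::kFilter:
    case BlockType::kIndex:
      // Index and filter blocks are the usual candidates for HIGH priority,
      // gated by cache_index_and_filter_blocks_with_high_priority.
      return table_opts.cache_index_and_filter_blocks_with_high_priority
                 ? Cache::Priority::HIGH
                 : Cache::Priority::LOW;
    default:
      // Data and other block types stay at LOW priority.
      return Cache::Priority::LOW;
  }
}
```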
Force-pushed from 0a889da to ae33082
Force-pushed from 2ea2d07 to 99ad555
Force-pushed from ae33082 to 3d4be15
Force-pushed from 3d4be15 to 4561a37
Can the title be more specific?
Sure
  block_cache_compressed->Lookup(compressed_block_cache_key);
  Statistics* statistics = rep_->ioptions.statistics;
  block_cache_compressed_handle = block_cache_compressed->Lookup(
We might need to explicitly use the `BlockContents` helper here too.
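A sketch of what explicitly using the `BlockContents` helper could look like at this call site (the `GetCacheItemHelperCallback` name and the `Lookup` parameter order are assumptions inferred from the diff snippets above):

```cpp
// Sketch only: explicitly request the BlockContents helper for the
// compressed block cache, instead of relying on the templated TBlocklike.
Cache::CacheItemHelperCallback compressed_helper_cb =
    BlocklikeTraits<BlockContents>::GetCacheItemHelperCallback();
block_cache_compressed_handle = block_cache_compressed->Lookup(
    compressed_block_cache_key, compressed_helper_cb, create_cb,
    Cache::Priority::LOW, /*wait=*/true);
```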
Force-pushed from 4561a37 to 2a81516
Summary: Defined the abstract interface for a secondary cache in include/rocksdb/secondary_cache.h, and updated LRUCacheOptions to take a std::shared_ptr<SecondaryCache>. An item is initially inserted into the LRU (primary) cache. When it ages out and is evicted from memory, it is inserted into the secondary cache. On an LRU cache miss followed by a successful lookup in the secondary cache, the item is promoted back to the LRU cache. Only synchronous lookup is supported currently. The secondary cache would be used to implement a persistent (flash) cache or a compressed cache.

Tests: Results from cache_bench and db_bench don't show any regression due to these changes.

cache_bench results before and after this change. Command:
```
./cache_bench -ops_per_thread=10000000 -threads=1
```
Before:
```
Complete in 40.688 s; QPS = 245774
Complete in 40.486 s; QPS = 246996
Complete in 42.019 s; QPS = 237989
```
After:
```
Complete in 40.672 s; QPS = 245869
Complete in 44.622 s; QPS = 224107
Complete in 42.445 s; QPS = 235599
```

db_bench results before this change, and with this change + #8213 and #8191. Commands:
```
./db_bench --benchmarks="fillseq,compact" -num=30000000 -key_size=32 -value_size=256 -use_direct_io_for_flush_and_compaction=true -db=/home/anand76/nvm_cache/db -partition_index_and_filters=true
./db_bench -db=/home/anand76/nvm_cache/db -use_existing_db=true -benchmarks=readrandom -num=30000000 -key_size=32 -value_size=256 -use_direct_reads=true -cache_size=1073741824 -cache_numshardbits=6 -cache_index_and_filter_blocks=true -read_random_exp_range=17 -statistics -partition_index_and_filters=true -threads=16 -duration=300
```
Before:
```
DB path: [/home/anand76/nvm_cache/db]
readrandom : 80.702 micros/op 198104 ops/sec; 54.4 MB/s (3708999 of 3708999 found)

DB path: [/home/anand76/nvm_cache/db]
readrandom : 87.124 micros/op 183625 ops/sec; 50.4 MB/s (3439999 of 3439999 found)
```
After:
```
DB path: [/home/anand76/nvm_cache/db]
readrandom : 77.653 micros/op 206025 ops/sec; 56.6 MB/s (3866999 of 3866999 found)

DB path: [/home/anand76/nvm_cache/db]
readrandom : 84.962 micros/op 188299 ops/sec; 51.7 MB/s (3535999 of 3535999 found)
```

Pull Request resolved: #8271
Reviewed By: zhichao-cao
Differential Revision: D28357511
Pulled By: anand1976
fbshipit-source-id: d1cfa236f00e649a18c53328be10a8062a4b6da2
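A minimal configuration sketch based on the summary above. The caller supplies its own `SecondaryCache` implementation; the public-API calls (`NewLRUCache`, `NewBlockBasedTableFactory`) and option fields are used as described in the summary and RocksDB's public headers:

```cpp
// Sketch of wiring a secondary cache tier into the block cache.
#include <memory>

#include "rocksdb/cache.h"
#include "rocksdb/options.h"
#include "rocksdb/secondary_cache.h"
#include "rocksdb/table.h"

rocksdb::Options MakeTieredCacheOptions(
    std::shared_ptr<rocksdb::SecondaryCache> secondary_cache) {
  rocksdb::LRUCacheOptions cache_opts;
  cache_opts.capacity = 1 << 30;      // 1 GB primary (LRU) tier
  cache_opts.num_shard_bits = 6;
  cache_opts.secondary_cache = std::move(secondary_cache);

  rocksdb::BlockBasedTableOptions table_opts;
  table_opts.block_cache = rocksdb::NewLRUCache(cache_opts);

  rocksdb::Options options;
  options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_opts));
  return options;
}
```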
Tiered cache support is implemented as a secondary cache tier for the block cache. New Insert and Lookup APIs are introduced in #8113. To support and use the tiered cache in the table reader, this PR introduces the corresponding callback functions that will be used by the tiered cache, and updates the Insert and Lookup call sites accordingly.
Test plan: added test cases.
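A sketch of the intended call pattern in the table reader. The `Insert` parameter order and the helper names are assumptions based on the diff snippets above, not the exact API from #8113:

```cpp
// Sketch only: assumes RocksDB-internal types and the (hypothetical) helpers
// sketched earlier in this conversation are available.
template <typename TBlocklike>
Status LookupOrInsertBlock(Cache* block_cache, const Slice& key,
                           std::unique_ptr<TBlocklike> block, size_t charge,
                           const Cache::CreateCallback& create_cb,
                           Cache::Priority priority, Cache::Handle** handle) {
  Cache::CacheItemHelperCallback helper_cb =
      BlocklikeTraits<TBlocklike>::GetCacheItemHelperCallback();

  // Lookup first: the create callback lets the cache rebuild the block if it
  // is only present in the secondary tier. The trailing `true` mirrors the
  // boolean in the diff above (per the summary, only synchronous lookup is
  // supported at this point).
  *handle = block_cache->Lookup(key, helper_cb, create_cb, priority, true);
  if (*handle != nullptr) {
    return Status::OK();
  }

  // Miss in both tiers: insert into the primary cache. The helper callback
  // lets the cache serialize the block later if it is demoted to the
  // secondary tier.
  return block_cache->Insert(key, block.release(), helper_cb, charge, handle,
                             priority);
}
```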