
feat(storage): replace lru-cache so that we can control memory usage precisely #1994

Merged · 20 commits · Apr 25, 2022

Conversation

@Little-Wallace (Contributor) commented Apr 20, 2022

What's changed and what's your intention?

  • Replace BlockCache and MetaCache with cache::LruCache.
  • Add TableHolder and BlockHolder for iterators, so that memory held by the cache is not released while an iterator still references it (a sketch of the idea follows below).

Following #1884
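
For readers outside the PR, here is a minimal sketch of the pinning semantics the holder types provide. It uses a plain Arc in place of the PR's custom reference-counted LruCache handles, and the Block/BlockHolder definitions below are illustrative, not the PR's actual code:

    use std::sync::Arc;

    struct Block {
        data: Vec<u8>,
    }

    struct BlockHolder {
        // Holding the Arc keeps the block alive even after the cache
        // evicts its entry; the PR achieves the same effect with an
        // internal reference count on the LruCache handle.
        block: Arc<Block>,
    }

    impl BlockHolder {
        fn data(&self) -> &[u8] {
            &self.block.data
        }
    }

    fn main() {
        let cached = Arc::new(Block { data: vec![1, 2, 3] });
        // An iterator pins the block before reading it.
        let holder = BlockHolder { block: Arc::clone(&cached) };
        drop(cached); // simulate eviction from the cache map
        assert_eq!(holder.data(), &[1, 2, 3]); // memory is still valid
    }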

Checklist

  • I have added necessary unit tests and integration tests

Refer to a related PR or issue link (optional)

part of #1773

@codecov (bot) commented Apr 20, 2022

Codecov Report

Merging #1994 (96cff1b) into main (b986bfa) will increase coverage by 0.04%.
The diff coverage is 88.91%.

@@            Coverage Diff             @@
##             main    #1994      +/-   ##
==========================================
+ Coverage   70.82%   70.87%   +0.04%     
==========================================
  Files         639      639              
  Lines       81173    81361     +188     
==========================================
+ Hits        57494    57664     +170     
- Misses      23679    23697      +18     
Flag | Coverage | Δ
rust | 70.87% <88.91%> | +0.04% ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files | Coverage | Δ
src/storage/src/hummock/sstable_store.rs | 64.17% <41.17%> | -9.31% ⬇️
src/storage/src/hummock/compactor_tests.rs | 91.86% <66.66%> | -0.14% ⬇️
src/storage/src/hummock/block_cache.rs | 77.27% <73.58%> | -10.97% ⬇️
src/storage/src/hummock/state_store.rs | 70.74% <80.00%> | +0.23% ⬆️
src/storage/src/hummock/compactor.rs | 70.33% <83.33%> | -1.81% ⬇️
src/storage/src/hummock/cache.rs | 95.89% <88.52%> | -0.63% ⬇️
src/meta/src/hummock/compaction.rs | 81.09% <89.28%> | +3.94% ⬆️
...rc/storage/src/hummock/sstable/sstable_iterator.rs | 93.50% <94.44%> | +0.45% ⬆️
src/meta/src/hummock/level_handler.rs | 100.00% <100.00%> | +2.04% ⬆️
src/storage/hummock_sdk/src/compact.rs | 100.00% <100.00%> | ø
... and 24 more


    for sender in que {
        (*ptr).add_ref();
        let _ = sender.send(CachableEntry {
            cache: self.clone(),

Contributor:

Can we implement Clone for CachableEntry?

Contributor:

Seems we have implemented it. We can use entry.clone() instead.

Contributor (Author):

No, it will take the lock again.

Contributor:

Same as the above comments. If we use AtomicUsize for reference counting, a clone won't involve a lock.
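
To make the suggestion concrete, here is a minimal sketch, assuming a simplified handle type (the field names and the reference-holding entry below are illustrative, not the PR's actual definitions). With the count in an AtomicUsize, cloning an entry is a single atomic increment, just like Arc::clone:

    use std::sync::atomic::{AtomicUsize, Ordering};

    // Simplified: the PR's LruHandle also carries LRU links, the key
    // hash, and state flags, and is managed through raw pointers.
    struct LruHandle<T> {
        refs: AtomicUsize,
        value: T,
    }

    struct CachableEntry<'a, T> {
        handle: &'a LruHandle<T>,
    }

    impl<'a, T> Clone for CachableEntry<'a, T> {
        fn clone(&self) -> Self {
            // Lock-free clone: bump the atomic count; no shard lock.
            self.handle.refs.fetch_add(1, Ordering::Relaxed);
            CachableEntry { handle: self.handle }
        }
    }

    impl<'a, T> Drop for CachableEntry<'a, T> {
        fn drop(&mut self) {
            // The decrement is also atomic, but when the count reaches
            // zero the real cache still has to take the shard lock to
            // unlink and free the handle, which is the author's point in
            // the add_ref thread below.
            self.handle.refs.fetch_sub(1, Ordering::Release);
        }
    }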

        } else {
            0
        };
        hash as usize % self.shards.len()

Contributor:

Why use mod instead of right shift?

Contributor (Author):

They are similar.

Contributor:

It seems that a mod is more expensive than a right shift.

Also, the original right shift picks the shard from the high bits of the hash, which are separate from the low bits used by the hash table inside each shard, so the load can probably be dispersed more evenly.
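
A sketch of the two shard-selection schemes under discussion, assuming a power-of-two shard count (the constant and function names are illustrative):

    const SHARD_BITS: u32 = 6; // 2^6 = 64 shards, for illustration

    // PR version: modulo over the shard count. It works for any shard
    // count, but costs an integer division and keys the shard off the
    // same low bits that the per-shard hash table consumes.
    fn shard_by_mod(hash: u64, num_shards: usize) -> usize {
        hash as usize % num_shards
    }

    // Suggested version: take the high bits of the hash, which avoids
    // the division and leaves the low bits to the hash table inside
    // the shard.
    fn shard_by_shift(hash: u64) -> usize {
        (hash >> (64 - SHARD_BITS)) as usize
    }

    fn main() {
        let h: u64 = 0x9e37_79b9_7f4a_7c15;
        assert!(shard_by_mod(h, 1 << SHARD_BITS) < 64);
        assert!(shard_by_shift(h) < 64);
    }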

@@ -573,6 +597,11 @@ impl<K: LruKey, T: LruValue> LruCache<K, T> {
        drop(data);
    }

    unsafe fn add_ref(&self, handle: *mut LruHandle<K, T>) {

Contributor:

We may want to use an AtomicUsize for the reference count, like Arc does, to avoid locking here.

Contributor (Author):

No. Whether we use AtomicUsize or not, we must lock the shard because of the other operations on it.

Contributor:

If we use AtomicUsize, cloning a cache entry doesn't need to acquire the lock.

@wenym1 (Contributor) left a comment:

LGTM
