In-Memory Engine: WriteBatch with Skiplist Engine #16433
Conversation
[REVIEW NOTIFICATION] This pull request has been approved by:
To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
Force-pushed from 4d5b30e to a83dadd
Should be ready to publish soon.
Force-pushed from e129e3a to 35fe837
PR published.
        .collect::<Vec<_>>(),
    )
};
filtered_keys
what if the range is just evicted before this line?
In this case, those keys will still be written to memory, but the range_manager will prevent them from being read. This is less efficient, but still correct. I could hold the lock on core while writing, but that would reduce write throughput. Another approach is to move from a mutex to a read-write lock (or sharded lock) for core; perhaps I'll do that.
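A minimal sketch of the "written but unreadable" behavior described above, using a `BTreeMap` as a stand-in for the skiplist. The names here (`RangeManager`, `contains`, `Engine::get`) are illustrative, not TiKV's actual API:

```rust
use std::collections::BTreeMap;

// Hypothetical range manager: tracks which key ranges are cached.
struct RangeManager {
    // cached ranges as half-open (start, end) key pairs
    ranges: Vec<(Vec<u8>, Vec<u8>)>,
}

impl RangeManager {
    fn contains(&self, key: &[u8]) -> bool {
        self.ranges
            .iter()
            .any(|(s, e)| key >= s.as_slice() && key < e.as_slice())
    }
}

struct Engine {
    range_manager: RangeManager,
    // stand-in for the skiplist
    data: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl Engine {
    fn get(&self, key: &[u8]) -> Option<&Vec<u8>> {
        // Reads are gated on the range manager: keys written after an
        // eviction may remain in memory but are never served.
        if !self.range_manager.contains(key) {
            return None;
        }
        self.data.get(key)
    }
}
```

The design trades a little memory (stale keys linger until cleanup) for not holding the core lock during writes.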
Addressed this: switched to a sharded read-write lock from crossbeam, using the (relatively low-cost) read() lock during WriteBatch::write.
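A rough sketch of that locking scheme, with `std::sync::RwLock` standing in for crossbeam's `ShardedLock` (which exposes the same `read()`/`write()` surface but shards the lock internally to reduce reader contention). The `Core` struct and method bodies are hypothetical:

```rust
use std::sync::RwLock;

// Hypothetical core state guarded by the lock.
struct Core {
    evicted: bool,
}

struct EngineCore {
    core: RwLock<Core>,
}

impl EngineCore {
    // WriteBatch::write takes only the shared (read) lock, so many
    // writers proceed concurrently; returns whether the write actually
    // went to memory.
    fn write(&self) -> bool {
        let core = self.core.read().unwrap();
        !core.evicted
    }

    // Eviction takes the exclusive (write) lock, briefly blocking writers.
    fn evict(&self) {
        self.core.write().unwrap().evicted = true;
    }
}
```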
Holding a lock (even a read-write lock) is too heavy for the write path, and it blocks snapshot acquisition.
Although we will not read it due to the filter of range_manager, we don't have the logic to prevent a range load. That is to say, we cannot avoid the following situation for a cached range k1..k10:
- write_impl: should_write_to_memory for k1..k10
- evict k1..k10
- load k1..k10
- write_impl: write_to_memory
We can have these steps to handle it:
- get the ranges to write
- record the ranges
- write_to_memory
- for ranges that are still valid, clear the range; for ranges that were evicted, schedule a task to delete the range and clear the range after that.
So, loading a range should wait if the range is being written.
What do you mean by 'clear' the range?
struct RangeManager {
    ...
    ranges_being_written: BTreeSet<CacheRange>,
}
So, 'clear' just means deleting the range from ranges_being_written.
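Under this design, the four-step write protocol could look roughly like the following. Everything here (the method names, the pending-delete queue, and the simplified `CacheRange`) is a hypothetical sketch, not the actual TiKV implementation:

```rust
use std::collections::BTreeSet;

// Simplified stand-in for a key range.
type CacheRange = (u8, u8);

struct RangeManager {
    valid: BTreeSet<CacheRange>,
    ranges_being_written: BTreeSet<CacheRange>,
    pending_delete: Vec<CacheRange>,
}

impl RangeManager {
    // Steps 1-2: decide whether the range should be written and record it.
    fn prepare_write(&mut self, r: CacheRange) -> bool {
        if self.valid.contains(&r) {
            self.ranges_being_written.insert(r);
            true
        } else {
            false
        }
    }

    // Step 4 (after write_to_memory): a still-valid range is simply
    // cleared; a range evicted mid-write is queued for deletion, and the
    // delete task clears it afterwards (here we clear eagerly for brevity).
    fn finish_write(&mut self, r: CacheRange) {
        if !self.valid.contains(&r) {
            self.pending_delete.push(r);
        }
        self.ranges_being_written.remove(&r);
    }

    // Loading a range must wait while that range is being written.
    fn can_load(&self, r: &CacheRange) -> bool {
        !self.ranges_being_written.contains(r)
    }
}
```

The point of `ranges_being_written` is that it closes the evict-then-load race without holding a lock across the whole write: a concurrent load observes the range in the set and defers.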
@tonyxuqqi thoughts on this approach? we can also re-compute this during garbage collection.
> We can have these steps to handle it:
> - get the ranges to write
> - record the ranges
> - write_to_memory
> - for ranges that are still valid, clear the range; for ranges that were evicted, schedule a task to delete the range and clear the range after that.
> So, loading a range should wait if the range is being written.

I worry that it's not more efficient than a simple read-write lock. A read lock is actually pretty fast.
/test
/cc @v01dstar
Signed-off-by: Alex Feinberg <alex@strlen.net>
…s to avoid writing keys for evicted range. Signed-off-by: Alex Feinberg <alex@strlen.net>
Force-pushed from 15e2e25 to 7e38ce2
Signed-off-by: Alex Feinberg <alex@strlen.net>
/merge
@SpadeA-Tang: It seems you want to merge this PR, I will help you trigger all the tests: /run-all-tests
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
This pull request has been accepted and is ready to merge. Commit hash: a6621af
/run-test retry=7
@afeinberg: Your PR was out of date, I have automatically updated it for you. If the CI test fails, just re-trigger the test that failed and the bot will merge the PR for you after the CI passes. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
ref tikv#16323 Update WriteBatch to assume a single skiplist and use RangeManager::contains. Implement and test `get_value_cf_opt` for `HybridEngineSnapshot`. Integrate single WriteBatch with HybridEngine. Signed-off-by: Alex Feinberg <alex@strlen.net> Co-authored-by: ti-chi-bot[bot] <108142056+ti-chi-bot[bot]@users.noreply.github.com> Signed-off-by: dbsid <chenhuansheng@pingcap.com>
What is changed and how it works?
Issue Number: ref #16323 #16141
What's Changed:
Related changes
- pingcap/docs
- pingcap/docs-cn
Check List:
Tests
Side effects
Release note