
In-Memory Engine: WriteBatch with Skiplist Engine #16433

Conversation

@afeinberg (Contributor) commented Jan 23, 2024

What is changed and how it works?

Issue Number: ref #16323 #16141

What's Changed:

  • Update `WriteBatch` to assume a single skiplist and use `RangeManager::contains` (see the sketch after this list).
  • Implement and test `get_value_cf_opt` for `HybridEngineSnapshot`.
  • Integrate the single `WriteBatch` with `HybridEngine`.
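
For illustration only, here is a minimal sketch of the filtering idea, using hypothetical stand-in types (`CacheRange`, `WriteBatchEntry`, and `filter_entries` are not the real TiKV names): an entry is kept in the in-memory write batch only if `RangeManager::contains` says its key falls in a cached range.

```rust
use std::collections::BTreeSet;

// Hypothetical stand-ins for the real TiKV types; only the shape matters here.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct CacheRange {
    start: Vec<u8>,
    end: Vec<u8>,
}

struct RangeManager {
    cached: BTreeSet<CacheRange>,
}

impl RangeManager {
    // True if `key` falls inside some cached range.
    fn contains(&self, key: &[u8]) -> bool {
        self.cached
            .iter()
            .any(|r| r.start.as_slice() <= key && key < r.end.as_slice())
    }
}

struct WriteBatchEntry {
    key: Vec<u8>,
    value: Vec<u8>,
}

// Keep only the entries whose keys are currently cached, so the single
// skiplist never receives writes for ranges the engine does not cache.
fn filter_entries(rm: &RangeManager, entries: Vec<WriteBatchEntry>) -> Vec<WriteBatchEntry> {
    entries
        .into_iter()
        .filter(|e| rm.contains(&e.key))
        .collect::<Vec<_>>()
}
```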

Related changes

  • PR to update pingcap/docs/pingcap/docs-cn:
  • Need to cherry-pick to the release branch

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Release note

None


ti-chi-bot bot commented Jan 23, 2024

[REVIEW NOTIFICATION]

This pull request has been approved by:

  • SpadeA-Tang
  • tonyxuqqi

To complete the pull request process, please ask the reviewers in the list to review by filling /cc @reviewer in the comment.
After your PR has acquired the required number of LGTMs, you can assign this pull request to the committer in the list by filling /assign @committer in the comment to help you merge this pull request.

The full list of commands accepted by this bot can be found here.

Reviewer can indicate their review by submitting an approval review.
Reviewer can cancel approval by submitting a request changes review.

@afeinberg afeinberg mentioned this pull request Jan 23, 2024
@afeinberg afeinberg force-pushed the afeinberg/memory_engine/hybrid_engine_write_batch_p2 branch from 4d5b30e to a83dadd Compare January 24, 2024 18:53
@afeinberg (Contributor Author):

/cc @SpadeA-Tang @tonyxuqqi

@ti-chi-bot ti-chi-bot bot added size/XL and removed size/L labels Jan 25, 2024
@afeinberg (Contributor Author) left a comment:

Should be ready to publish soon.

@afeinberg afeinberg force-pushed the afeinberg/memory_engine/hybrid_engine_write_batch_p2 branch 2 times, most recently from e129e3a to 35fe837 Compare January 26, 2024 00:14
@afeinberg afeinberg marked this pull request as ready for review January 26, 2024 00:15
@afeinberg (Contributor Author):

PR published.
/assign @tonyxuqqi

Review thread on the following write-batch filtering code:

.collect::<Vec<_>>(),
)
};
filtered_keys
Member:

what if the range is just evicted before this line?

@afeinberg (Contributor Author):

In this case those keys will still be written to memory, but range_manager will prevent them from being read. This is less efficient, but still correct. I could hold the lock on core while writing, but that would reduce write throughput. Another approach is to move from a mutex to a read-write lock (or sharded lock) for core; perhaps I'll do that.
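
For illustration, a rough sketch of the read-side guard described above, with placeholder names (`MemStore` and `range_is_readable` are hypothetical, not the actual `HybridEngineSnapshot` code): even if a stale key slips into the skiplist after an eviction, a lookup only returns it while the range manager still considers its range readable.

```rust
use std::collections::BTreeMap;

// Placeholder for the in-memory skiplist engine.
struct MemStore {
    data: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl MemStore {
    // Return a cached value only if `range_is_readable` (standing in for the
    // RangeManager check) still considers the key's range cached. Keys written
    // after an eviction are therefore never surfaced: the extra writes cost
    // some memory, not correctness, and unreadable keys fall back to disk.
    fn get<'a>(
        &'a self,
        key: &[u8],
        range_is_readable: impl Fn(&[u8]) -> bool,
    ) -> Option<&'a [u8]> {
        if !range_is_readable(key) {
            return None; // caller falls back to the disk engine (RocksDB)
        }
        self.data.get(key).map(|v| v.as_slice())
    }
}
```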

@afeinberg (Contributor Author):

Addressed this: switched to a sharded read-write lock from crossbeam, taking the (relatively low-cost) read() lock during WriteBatch::write.
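
For context, crossbeam's `ShardedLock` has the same read()/write() API as `std::sync::RwLock` but splits the read side across shards, so concurrent readers rarely contend. A rough sketch of the pattern described above (the struct and method names are placeholders, not the real engine types):

```rust
use crossbeam_utils::sync::ShardedLock; // crossbeam-utils = "0.8"

// Placeholder for the engine core protected by the lock.
struct EngineCore {
    // skiplist, range manager, ...
}

struct MemoryEngine {
    core: ShardedLock<EngineCore>,
}

impl MemoryEngine {
    // Write batches take the cheap, sharded read lock, so many of them can
    // proceed in parallel; only operations that restructure the core
    // (evicting or loading a range) need the exclusive write lock.
    fn write_batch(&self, _entries: &[(Vec<u8>, Vec<u8>)]) {
        let _core = self.core.read().unwrap();
        // ... append entries to the skiplist under the shared guard ...
    }

    fn evict_range(&self) {
        let _core = self.core.write().unwrap();
        // ... update range metadata under the exclusive guard ...
    }
}
```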

Member:

Holding a lock (even a read/write lock) is too heavy for writes, since it blocks snapshot acquisition.

Member:

Although we will not read it due to the range_manager filter, we don't have logic to prevent a range load. That is to say, we cannot avoid the following situation when we have cached range k1..k10:

  1. write_impl: should_write_to_memory for k1..k10
  2. evict k1..k10
  3. load k1..k10
  4. write_impl: write_to_memory

Member:

We can handle it with these steps:

  1. get the ranges to write
  2. record the ranges
  3. write_to_memory
  4. for ranges that are still valid, clear the range; for ranges that were evicted, schedule a task to delete the range and clear it after that

So, loading a range should not be allowed while the range is being written.

@afeinberg (Contributor Author):

What do you mean by 'clear' the range?

Member:

struct RangeManager {
    ...
    ranges_being_written: BTreeSet<CacheRange>,
}

So, "clear" just means deleting the range from ranges_being_written.
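
For illustration, a rough sketch of how such a set could gate range loads and be "cleared" after a write finishes (hypothetical method names; not the implementation that eventually landed):

```rust
use std::collections::BTreeSet;

#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct CacheRange {
    start: Vec<u8>,
    end: Vec<u8>,
}

struct RangeManager {
    ranges_being_written: BTreeSet<CacheRange>,
}

impl RangeManager {
    // Step 2 of the proposal: remember which ranges an in-flight
    // write batch is about to touch.
    fn record_write(&mut self, range: CacheRange) {
        self.ranges_being_written.insert(range);
    }

    // "Clear" in the discussion above: forget the range once the write has
    // finished (or once the deferred delete task for an evicted range ran).
    fn clear_write(&mut self, range: &CacheRange) {
        self.ranges_being_written.remove(range);
    }

    // A range load is refused while a write batch is still in flight for it,
    // closing the evict-then-load window described earlier in the thread.
    fn can_load(&self, range: &CacheRange) -> bool {
        !self.ranges_being_written.contains(range)
    }
}
```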

@afeinberg (Contributor Author):

@tonyxuqqi, thoughts on this approach? We can also re-compute this during garbage collection.

Contributor:

We can handle it with these steps:

  1. get the ranges to write
  2. record the ranges
  3. write_to_memory
  4. for ranges that are still valid, clear the range; for ranges that were evicted, schedule a task to delete the range and clear it after that

So, loading a range should not be allowed while the range is being written.

I worry that this is not more efficient than a simple read-write lock. The read lock is actually pretty fast.

@afeinberg (Contributor Author):

/test

@afeinberg (Contributor Author):

/cc @v01dstar

@ti-chi-bot ti-chi-bot bot requested a review from v01dstar January 26, 2024 20:40
Signed-off-by: Alex Feinberg <alex@strlen.net>
…s to avoid writing keys for evicted range.

Signed-off-by: Alex Feinberg <alex@strlen.net>
@afeinberg afeinberg force-pushed the afeinberg/memory_engine/hybrid_engine_write_batch_p2 branch from 15e2e25 to 7e38ce2 Compare January 30, 2024 19:57
Signed-off-by: Alex Feinberg <alex@strlen.net>
Signed-off-by: Alex Feinberg <alex@strlen.net>
@ti-chi-bot ti-chi-bot bot added the status/LGT1 Status: PR - There is already 1 approval label Jan 30, 2024
@ti-chi-bot ti-chi-bot bot added status/LGT2 Status: PR - There are already 2 approvals and removed status/LGT1 Status: PR - There is already 1 approval labels Jan 31, 2024
@SpadeA-Tang (Member):

/merge


ti-chi-bot bot commented Jan 31, 2024

@SpadeA-Tang: It seems you want to merge this PR, I will help you trigger all the tests:

/run-all-tests

You only need to trigger /merge once, and if the CI test fails, you just re-trigger the test that failed and the bot will merge the PR for you after the CI passes.

If you have any questions about the PR merge process, please refer to pr process.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.


ti-chi-bot bot commented Jan 31, 2024

This pull request has been accepted and is ready to merge.

Commit hash: a6621af

@ti-chi-bot ti-chi-bot bot added the status/can-merge Status: Can merge to base branch label Jan 31, 2024
@SpadeA-Tang (Member):

/run-test retry=7


ti-chi-bot bot commented Jan 31, 2024

@afeinberg: Your PR was out of date, I have automatically updated it for you.

If the CI test fails, you just re-trigger the test that failed and the bot will merge the PR for you after the CI passes.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.

@ti-chi-bot ti-chi-bot bot merged commit 87d9a97 into tikv:master Jan 31, 2024
7 checks passed
@ti-chi-bot ti-chi-bot bot added this to the Pool milestone Jan 31, 2024
dbsid pushed a commit to dbsid/tikv that referenced this pull request Mar 24, 2024
ref tikv#16323

Update WriteBatch to assume a single skiplist and use RangeManager::contains.
Implement and test `get_value_cf_opt` for `HybridEngineSnapshot`.
Integrate single WriteBatch with HybridEngine.

Signed-off-by: Alex Feinberg <alex@strlen.net>

Co-authored-by: ti-chi-bot[bot] <108142056+ti-chi-bot[bot]@users.noreply.github.com>
Signed-off-by: dbsid <chenhuansheng@pingcap.com>
Labels
contribution (Type: PR - From contributors), release-note-none, size/XL, status/can-merge (Status: Can merge to base branch), status/LGT2 (Status: PR - There are already 2 approvals)