
tikv_util: introduce future channel #13407

Merged
merged 7 commits into tikv:master from introduce-future-channel on Sep 15, 2022

Conversation

BusyJay
Member

@BusyJay BusyJay commented Sep 6, 2022

What is changed and how it works?

Issue Number: Ref #12842

What's Changed:

It's a superset of the batch channel: it can be used as a batch channel or as a plain future channel.

This is the first PR to use the unified thread pool in the apply system for v2.

I benchmarked it against the old implementation. The results are highly unstable: if the receiver goes into the sleep state, it takes more time to be woken up. Anyway, I post the results below; there is no big difference between the implementations.

running 7 tests
test bench_channel::bench_receiver_stream                                       ... bench:     820,119 ns/iter (+/- 2,278,412)
test bench_channel::bench_receiver_stream_batch                                 ... bench:     770,672 ns/iter (+/- 509,937)
test bench_channel::bench_receiver_stream_unbounded_batch_future                ... bench:     845,217 ns/iter (+/- 608,816)
test bench_channel::bench_receiver_stream_unbounded_nobatch_future              ... bench:     968,996 ns/iter (+/- 1,435,816)
test bench_channel::bench_receiver_stream_unbounded_batch_old                   ... bench:     774,375 ns/iter (+/- 1,822,096)
test bench_channel::bench_receiver_stream_unbounded_nobatch_old                 ... bench:     761,047 ns/iter (+/- 3,013,100)
test bench_channel::bench_receiver_stream_unbounded_nobatch_async_std_channel ... bench:   2,253,294 ns/iter (+/- 2,903,571)

Check List

Tests

  • Unit test
  • Integration test

Release note

None

It's a super set of batch channel. It can be used as a batch channel
or just a future channel.

Signed-off-by: Jay Lee <BusyJayLee@gmail.com>
@ti-chi-bot
Member

ti-chi-bot commented Sep 6, 2022

[REVIEW NOTIFICATION]

This pull request has been approved by:

  • Connor1996
  • sticnarf

To complete the pull request process, please ask the reviewers in the list to review by filling /cc @reviewer in the comment.
After your PR has acquired the required number of LGTMs, you can assign this pull request to the committer in the list by filling /assign @committer in the comment to help you merge this pull request.

The full list of commands accepted by this bot can be found here.

Reviewer can indicate their review by submitting an approval review.
Reviewer can cancel approval by submitting a request changes review.

@BusyJay
Member Author

BusyJay commented Sep 6, 2022

/release

    return Poll::Ready(Some(t));
}
queue.waker.register(cx.waker());
if let Some(t) = queue.queue.pop() {
Member

add a comment for why try pop twice

if let Some(t) = queue.queue.pop() {
    return Poll::Ready(Some(t));
}
queue.waker.register(cx.waker());
Member

@Connor1996 Connor1996 Sep 6, 2022

What if the pop below consumes the element? Then the next wake would be unnecessary.

Member Author

I don't get it. What do you mean by "consume the element"? The user is expected to keep calling poll_next until None is returned.

Member

I mean, should it be

        if let Some(t) = queue.queue.pop() {
            return Poll::Ready(Some(t));
        }
        if let Some(t) = queue.queue.pop() {
            return Poll::Ready(Some(t));
        }
        queue.waker.register(cx.waker());

Member Author

It's for edge cases only, which should hardly ever happen. If it does happen, clearing the waker may cause unnecessary contention.
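The pattern under discussion can be sketched outside the futures machinery. This is an illustrative stand-in only (the names and the flag are hypothetical; the real code registers a futures Waker, not a boolean): check the queue, register interest, then check again, because an item enqueued between the first check and the registration would otherwise never trigger a wakeup.

```rust
use std::collections::VecDeque;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Mutex;

// Hypothetical stand-in for the channel's shared state.
struct Shared<T> {
    queue: Mutex<VecDeque<T>>,
    receiver_waiting: AtomicBool,
}

impl<T> Shared<T> {
    // Mirrors the poll_next structure: pop, register, pop again.
    fn poll_pop(&self) -> Option<T> {
        // Fast path: an item may already be queued.
        if let Some(t) = self.queue.lock().unwrap().pop_front() {
            return Some(t);
        }
        // Register interest. A sender that enqueued *before* this store
        // saw no registered receiver and will not issue a wakeup.
        self.receiver_waiting.store(true, Ordering::SeqCst);
        // The second pop closes that window: it catches an item enqueued
        // between the first pop and the registration.
        self.queue.lock().unwrap().pop_front()
    }
}
```

As the reply above notes, the second check only matters in a rare race window; unconditionally clearing and re-registering the waker on every poll would add contention instead.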

Signed-off-by: Jay Lee <BusyJayLee@gmail.com>
@BusyJay
Member Author

BusyJay commented Sep 8, 2022

Benchmarked with sysbench read_write, insert, update index, and point select; the differences in throughput and latency are within 3%.

PTAL

let elem = ctx.elem.take();
if let Some(m) = received {
    let collection = ctx.elem.get_or_insert_with(&ctx.initializer);
    let _received = ctx.collector.collect(collection, m);
Member

Why collect again?

Member Author

Otherwise received would be dropped and the message lost.
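The contract being discussed can be illustrated with a toy collector (the trait shape and names here are a hypothetical sketch, not the tikv_util definition): when collect returns Some, the item was not absorbed into the batch, and the caller must re-collect it rather than drop it.

```rust
// Sketch of a collector contract: Some(e) means "e was not absorbed".
trait BatchCollector<C, E> {
    fn collect(&mut self, v: &mut C, e: E) -> Option<E>;
}

// Toy collector that caps the batch at `cap` items.
struct Limited {
    cap: usize,
}

impl BatchCollector<Vec<u64>, u64> for Limited {
    fn collect(&mut self, v: &mut Vec<u64>, e: u64) -> Option<u64> {
        if v.len() < self.cap {
            v.push(e);
            None // absorbed
        } else {
            Some(e) // handed back; dropping this return value loses the message
        }
    }
}
```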

Member Author

@BusyJay BusyJay Sep 14, 2022

The logic has been simplified; it no longer collects again.

Member

@Connor1996 Connor1996 left a comment

rest LGTM

}
}

unsafe impl<T: Send> Send for Sender<T> {}
Contributor

What about also implementing Sync? Non-Sync types can cause difficulties when Sync bounds are required.
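A minimal sketch of the reviewer's point (the Sender type below is a hypothetical stand-in, not the PR's actual type): a raw-pointer field suppresses the auto traits, so if every access goes through internal synchronization, Sync can be asserted alongside Send, letting the sender be shared by reference where Sync bounds appear.

```rust
use std::marker::PhantomData;
use std::sync::Mutex;

// Hypothetical stand-in: a sender that only pushes into an internally
// synchronized queue. The raw pointer removes the automatic Send/Sync.
struct Sender<T> {
    inner: *const Mutex<Vec<T>>,
    _marker: PhantomData<T>,
}

// Safe to assert here because every access to `inner` would go through the
// Mutex. Without the Sync impl, `&Sender<T>` cannot cross threads, which is
// the friction the reviewer describes.
unsafe impl<T: Send> Send for Sender<T> {}
unsafe impl<T: Send> Sync for Sender<T> {}

// A function with the kind of bound that trips up non-Sync types.
fn requires_sync<S: Send + Sync>(_s: &S) {}
```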

    break false;
}
count += 1;
if count >= ctx.max_batch_size {
Contributor

If max_batch_size belongs to BatchReceiver, in which case do we expect BatchCollector::collect to return Some and split?

Member Author

max_batch_size limits the number of messages; BatchCollector can add other limits, like data size.

Member Author

The limit existed for the raft client in the past. Now the raft client uses its own batching algorithm, so the API is unnecessary. I removed it for simplicity.
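The two kinds of limits discussed above can be sketched as a plain drain loop (illustrative only; the real BatchReceiver drives this from poll_next, and the byte cap stands in for whatever extra limit a collector imposes): max_batch_size caps the message count per batch, while the collector may additionally refuse an item, handing it back instead of dropping it.

```rust
// Drain up to `max_batch_size` items, stopping early when a hypothetical
// byte-size cap (the collector's extra limit) would be exceeded.
fn drain_batch(src: &mut Vec<String>, max_batch_size: usize, max_bytes: usize) -> Vec<String> {
    let mut batch = Vec::new();
    let mut bytes = 0;
    while batch.len() < max_batch_size {
        let item = match src.pop() {
            Some(i) => i,
            None => break, // source drained
        };
        if bytes + item.len() > max_bytes {
            // Give the item back, as collect returning Some would.
            src.push(item);
            break;
        }
        bytes += item.len();
        batch.push(item);
    }
    batch
}
```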

Signed-off-by: Jay Lee <BusyJayLee@gmail.com>
Signed-off-by: Jay Lee <BusyJayLee@gmail.com>
Make region size dynamic automation moved this from In progress to Reviewer approved Sep 15, 2022
Contributor

@sticnarf sticnarf left a comment

LGTM

@ti-chi-bot ti-chi-bot added the status/LGT1 Status: PR - There is already 1 approval label Sep 15, 2022
Member

@Connor1996 Connor1996 left a comment

LGTM anyway

/// `initializer` is used to generate an initial value, and `collector`
/// will collect every (at most `max_batch_size`) raw items into the
/// batched value.
pub fn new(rx: Receiver<T>, max_batch_size: usize, initializer: I, collector: C) -> Self {
Member

What's the use case for the collector? Isn't a Vec adequate?

Member Author

Check

tikv/src/server/service/kv.rs

Lines 2357 to 2369 in 8e311cd

struct BatchRespCollector;
impl BatchCollector<MeasuredBatchResponse, MeasuredSingleResponse> for BatchRespCollector {
    fn collect(
        &mut self,
        v: &mut MeasuredBatchResponse,
        mut e: MeasuredSingleResponse,
    ) -> Option<MeasuredSingleResponse> {
        v.batch_resp.mut_request_ids().push(e.id);
        v.batch_resp.mut_responses().push(e.resp.consume());
        v.measures.push(e.measure);
        None
    }
}

@ti-chi-bot ti-chi-bot added status/LGT2 Status: PR - There are already 2 approvals and removed status/LGT1 Status: PR - There is already 1 approval labels Sep 15, 2022
@BusyJay
Member Author

BusyJay commented Sep 15, 2022

/merge

@ti-chi-bot
Member

@BusyJay: It seems you want to merge this PR, I will help you trigger all the tests:

/run-all-tests

You only need to trigger /merge once, and if the CI test fails, you just re-trigger the test that failed and the bot will merge the PR for you after the CI passes.

If you have any questions about the PR merge process, please refer to pr process.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.

@ti-chi-bot
Member

This pull request has been accepted and is ready to merge.

Commit hash: 74d4d82

@ti-chi-bot ti-chi-bot added the status/can-merge Status: Can merge to base branch label Sep 15, 2022
@ti-chi-bot
Member

@BusyJay: Your PR was out of date, I have automatically updated it for you.

At the same time I will also trigger all tests for you:

/run-all-tests

If the CI test fails, you just re-trigger the test that failed and the bot will merge the PR for you after the CI passes.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.

@ti-chi-bot ti-chi-bot merged commit 7382d4e into tikv:master Sep 15, 2022
Make region size dynamic automation moved this from Reviewer approved to Done Sep 15, 2022
@ti-chi-bot ti-chi-bot added this to the Pool milestone Sep 15, 2022
@BusyJay BusyJay deleted the introduce-future-channel branch September 15, 2022 06:13
tabokie added a commit to tabokie/tikv that referenced this pull request Sep 21, 2022
This reverts commit 7382d4e.

Signed-off-by: tabokie <xy.tao@outlook.com>
sticnarf added a commit to sticnarf/tikv that referenced this pull request Sep 21, 2022
ti-chi-bot pushed a commit that referenced this pull request Sep 21, 2022
ref #13394, ref #13407

Signed-off-by: tabokie <xy.tao@outlook.com>
BusyJay pushed a commit to BusyJay/tikv that referenced this pull request Sep 26, 2022
sticnarf pushed a commit to sticnarf/tikv that referenced this pull request Sep 26, 2022
BusyJay added a commit to BusyJay/tikv that referenced this pull request Sep 28, 2022
…kv#13507)"

This reverts commit b219a82.

Signed-off-by: Jay Lee <BusyJayLee@gmail.com>
Labels
release-note-none size/XXL status/can-merge Status: Can merge to base branch status/LGT2 Status: PR - There are already 2 approvals

5 participants