
server: support dynamic config max-grpc-send-msg-len and raft-msg-max-batch-size #12335

Merged
18 commits merged into tikv:master on May 18, 2022

Conversation

@glorv (Contributor) commented Apr 7, 2022

Signed-off-by: glorv glorvs@163.com

What is changed and how it works?

Issue Number: Close #12334

What's Changed:

Related changes

  • PR to update pingcap/docs/pingcap/docs-cn:
  • Need to cherry-pick to the release branch

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Side effects

  • Performance regression
    • Consumes more CPU
    • Consumes more MEM
  • Breaking backward compatibility

Release note

None

@ti-chi-bot (Member) commented Apr 7, 2022

[REVIEW NOTIFICATION]

This pull request has been approved by:

  • 5kbpers
  • Connor1996

To complete the pull request process, please ask the reviewers in the list to review by commenting /cc @reviewer.
After your PR has acquired the required number of LGTMs, you can assign it to a committer in the list by commenting /assign @committer to help you merge this pull request.

The full list of commands accepted by this bot can be found here.

Reviewer can indicate their review by submitting an approval review.
Reviewer can cancel approval by submitting a request changes review.

@glorv (Contributor, Author) commented Apr 7, 2022

/test

Signed-off-by: glorv <glorvs@163.com>
@glorv (Contributor, Author) commented May 16, 2022

@5kbpers @Connor1996 PTAL

@ti-chi-bot ti-chi-bot added the status/LGT1 Status: PR - There is already 1 approval label May 16, 2022
@5kbpers (Member) commented May 16, 2022

CI failed

@glorv (Contributor, Author) commented May 16, 2022

/test

@glorv (Contributor, Author) commented May 16, 2022

> CI failed

All the failures are caused by GitHub connection timeouts. 😂

@glorv (Contributor, Author) commented May 16, 2022

/build

-        for entry in msg.get_message().get_entries() {
-            msg_size += entry.data.len();
+        let msg_size = Self::message_size(&msg);
+        // try refrech config before check
Review comment (Member):

refrech -> refresh

@@ -163,7 +163,7 @@ impl<T: RaftStoreRouter<E::Local> + Unpin, S: StoreAddrResolver + 'static, E: En

         let conn_builder = ConnectionBuilder::new(
             env.clone(),
-            Arc::new(cfg.value().clone()),
+            Arc::clone(cfg),
Review comment (Member):

cfg.clone()

Reply (Contributor, Author):

I'm not sure whether Arc::clone or .clone() is the one recommended in our code style, though both are fine with me. Both forms appear in the current codebase (searching tikv for Arc::clone returns 333 hits).

@Connor1996 (Member) left a comment:

rest LGTM

src/server/raft_client.rs (outdated; resolved)
Signed-off-by: glorv <glorvs@163.com>
glorv added 7 commits May 16, 2022 21:04, each Signed-off-by: glorv <glorvs@163.com>
@glorv (Contributor, Author) commented May 17, 2022

@Connor1996 PTAL again, thanks.

@Connor1996 (Member) left a comment:

LGTM

@ti-chi-bot ti-chi-bot added status/LGT2 Status: PR - There are already 2 approvals and removed status/LGT1 Status: PR - There is already 1 approval labels May 17, 2022
@5kbpers (Member) commented May 18, 2022

/merge

@ti-chi-bot (Member):

@5kbpers: It seems you want to merge this PR, I will help you trigger all the tests:

/run-all-tests

You only need to trigger /merge once. If a CI test fails, just re-trigger the failed test, and the bot will merge the PR for you after CI passes.

If you have any questions about the PR merge process, please refer to pr process.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.

@ti-chi-bot (Member):
This pull request has been accepted and is ready to merge.

Commit hash: a3be3ff

@ti-chi-bot ti-chi-bot added the status/can-merge Status: Can merge to base branch label May 18, 2022
@ti-chi-bot ti-chi-bot merged commit 490c93d into tikv:master May 18, 2022
fengou1 pushed a commit to fengou1/tikv that referenced this pull request May 26, 2022
…-batch-size (tikv#12335)

close tikv#12334

Signed-off-by: glorv <glorvs@163.com>
@@ -461,6 +460,12 @@ impl Config {
));
}

if self.raft_entry_max_size.0 == 0 || self.raft_entry_max_size.0 > ReadableSize::gb(3).0 {
    return Err(box_err!(
        "raft entry max size should be greater than 0 and less than or equal to 3GiB"
    ));
}
Review comment (Member):

I don't think we support 3GiB. Not even 2.01GiB.

Reply (Contributor, Author):

I set this to the same value as raft-max-size-per-msg; it is just an intuitive upper bound. Do you mean the hard limit should be 2GB? Sorry, I can't find the related code.

-            msg_size += entry.data.len();
+        let msg_size = Self::message_size(&msg);
+        // try refresh config before check
+        if let Some(new_cfg) = self.cfg_tracker.any_new() {
Review comment (Member):

Have you benchmarked the performance impact?

Reply (Contributor, Author):

In my benchmark, there is no notable impact after this change. In the most common case, it is only one extra atomic load.

Review comment (Member):

Actually, the atomic load is much heavier than the original code, which just sums up several numbers. A better place to check for a new version is when a message is sent.

Reply (Contributor, Author):

So do you mean we should move this check into the flush call, since flush happens much less frequently than push, especially when the load is heavy?

Review comment (Member):

Yes.

@@ -83,7 +83,6 @@ pub struct Config {
     #[online_config(skip)]
     pub status_thread_pool_size: usize,

-    #[online_config(skip)]
Review comment (Member):

I don't see the point of changing this value online. It's only for controlling the size of a batch.

Reply (Contributor, Author):

The use case comes from tidb-cloud, to minimize memory usage in a minimal test cluster. This kind of scenario is not common, but I think it is harmless to support it.

Review comment (Member):

The batch is about 10MiB, and it only consumes about 10MiB * grpc_concurrency = 40MiB.

Labels: release-note-none, size/L, status/can-merge (Status: Can merge to base branch), status/LGT2 (Status: PR - There are already 2 approvals)
Projects: None yet
Development

Successfully merging this pull request may close these issues.

online config: support max-grpc-send-msg-len
5 participants