
Add max-tasks-per-worker-xxx and remove end-point-max-tasks #3093

Merged: 11 commits merged into tikv:master from wenxuan/readpool-tasks-fix on Jun 7, 2018

Conversation

breezewish (Member) commented:

Note: most of our deployments mistakenly configured end-point-max-tasks, so we are not using end-point-max-tasks to override max-tasks-per-worker-xxx. Instead, end-point-max-tasks is simply ignored.

@BusyJay @hicqu @overvenus PTAL
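
For context, a minimal Rust sketch of the compatibility behavior described above, where a configured end-point-max-tasks is reported and ignored rather than mapped onto the new per-worker options. The struct and field names below are assumptions for illustration, not TiKV's real config types:

// Illustrative only: assumed types, not the actual TiKV config structs.
struct ServerConfig {
    end_point_max_tasks: Option<usize>, // deprecated option
}

struct CoprReadPoolConfig {
    max_tasks_per_worker_high: usize,
    max_tasks_per_worker_normal: usize,
    max_tasks_per_worker_low: usize,
}

fn compat_adjust(server: &ServerConfig, _readpool: &mut CoprReadPoolConfig) {
    // Deliberately do NOT override the per-worker settings with the old value:
    // many deployments misconfigured end-point-max-tasks, so it is only reported and ignored.
    if let Some(v) = server.end_point_max_tasks {
        eprintln!(
            "server.end-point-max-tasks = {} is deprecated and ignored; \
             use readpool.coprocessor.max-tasks-per-worker-* instead",
            v
        );
    }
}

fn main() {
    let server = ServerConfig { end_point_max_tasks: Some(10000) };
    let mut readpool = CoprReadPoolConfig {
        max_tasks_per_worker_high: 2000,
        max_tasks_per_worker_normal: 2000,
        max_tasks_per_worker_low: 2000,
    };
    compat_adjust(&server, &mut readpool);
}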

breezewish (Author) commented:

/run-integration-tests

AndreMouche (Member) left a comment:

I do not think it is a good idea to use xxx-per-worker-xxx, since we do not have a definition of "worker" in the read pool. Also, we can only determine the max tasks for the read pool as a whole; we can hardly define the max tasks for each thread in the read pool.

@@ -79,22 +81,27 @@ impl Host {
cfg: &Config,
pool: ReadPool<ReadPoolContext>,
) -> Host {
// Use read pool's max task config
let max_running_task_count = pool.get_max_tasks();
Member:

It confused me, what is L69 used for?

breezewish (Author):

It is a reminder to remove this in the future. The read pool itself manages a max_running_task_count, so once we fully switch to the read pool we will not need to maintain our own max_running_task_count any more.

max_tasks_low: config.max_tasks_low,
max_tasks_high: config.max_tasks_per_worker_high * config.high_concurrency,
max_tasks_normal: config.max_tasks_per_worker_normal * config.high_concurrency,
max_tasks_low: config.max_tasks_per_worker_low * config.high_concurrency,
Member:

why high_concurrency?

breezewish (Author):

Nice catch!!! Thanks a lot.
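
To illustrate the catch above, here is a minimal Rust sketch (field names assumed, not the actual Host/ReadPool code) where each priority level's limit is derived from its own concurrency rather than from high_concurrency for all three:

// Illustrative only: assumed config fields, mirroring the diff above.
struct ReadPoolConfig {
    high_concurrency: usize,
    normal_concurrency: usize,
    low_concurrency: usize,
    max_tasks_per_worker_high: usize,
    max_tasks_per_worker_normal: usize,
    max_tasks_per_worker_low: usize,
}

struct MaxTasks {
    high: usize,
    normal: usize,
    low: usize,
}

fn max_tasks(cfg: &ReadPoolConfig) -> MaxTasks {
    MaxTasks {
        // Each priority level is multiplied by its own concurrency.
        high: cfg.max_tasks_per_worker_high * cfg.high_concurrency,
        normal: cfg.max_tasks_per_worker_normal * cfg.normal_concurrency,
        low: cfg.max_tasks_per_worker_low * cfg.low_concurrency,
    }
}

fn main() {
    let cfg = ReadPoolConfig {
        high_concurrency: 8,
        normal_concurrency: 8,
        low_concurrency: 2,
        max_tasks_per_worker_high: 2000,
        max_tasks_per_worker_normal: 2000,
        max_tasks_per_worker_low: 2000,
    };
    let m = max_tasks(&cfg);
    println!("high = {}, normal = {}, low = {}", m.high, m.normal, m.low);
}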

breezewish (Author) commented:

@AndreMouche Yes, the name might not be very accurate. It is meant to solve the problem that max_tasks varies with different concurrency settings, which has caused us a lot of issues recently.
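
As an illustrative example (numbers made up): with max-tasks-per-worker-normal = 2000, an 8-thread pool would allow up to 16000 normal-priority tasks while a 4-thread pool would allow 8000, so the effective limit scales with the concurrency automatically instead of requiring operators to retune an absolute end-point-max-tasks value whenever the thread count changes.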

@@ -109,6 +109,18 @@ impl<T: futurepool::Context + 'static> ReadPool<T> {
}
}

/// TODO: Remove thus function. Only used to be compatible with endpoint.
Member:

s/thus/this/

/// TODO: Remove thus function. Only used to be compatible with endpoint.
#[inline]
pub fn get_max_tasks(&self) -> usize {
vec![
Member:

Why create a vector?

breezewish (Author):

To calculate the max. Do you have better alternatives?

Member:

It seems two ifs or two cmp::max calls can do the job perfectly.
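
For comparison, a standalone sketch of the two shapes under discussion, written as free functions with made-up names rather than the actual ReadPool method:

use std::cmp;

// Shape under review: build a Vec and take the max (allocates a small vector).
fn max_tasks_via_vec(high: usize, normal: usize, low: usize) -> usize {
    vec![high, normal, low].into_iter().max().unwrap()
}

// Suggested shape: two cmp::max calls, no allocation.
fn max_tasks_via_cmp(high: usize, normal: usize, low: usize) -> usize {
    cmp::max(high, cmp::max(normal, low))
}

fn main() {
    assert_eq!(max_tasks_via_vec(16000, 16000, 4000), 16000);
    assert_eq!(max_tasks_via_cmp(16000, 16000, 4000), 16000);
}

Both return the same result; the difference is only the heap allocation and the verbosity of nesting cmp::max.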

breezewish (Author):

Yeah, exactly. I was just trying to avoid cmp::max(cmp::max()) 😢 If you think it is acceptable, I can use this one.

Member:

Why is allocation preferred? Not to mention that it's longer.

breezewish (Author):

Ok

src/config.rs (outdated)
self.server.end_point_stack_size
);
self.readpool.coprocessor.stack_size = self.server.end_point_stack_size.unwrap();
}
if self.server.end_point_max_tasks != None {
Member:

Why not use is_some()?
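
A small sketch of the suggested idiom, using a plain Option rather than the real config field:

fn main() {
    let end_point_max_tasks: Option<usize> = Some(8);

    // Comparing against None works, but reads indirectly and needs PartialEq.
    if end_point_max_tasks != None {
        println!("set (comparison form)");
    }
    // is_some() states the intent directly.
    if end_point_max_tasks.is_some() {
        println!("set (is_some form)");
    }
    // When the value is needed afterwards, `if let` avoids a separate unwrap().
    if let Some(v) = end_point_max_tasks {
        println!("value = {}", v);
    }
}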

breezewish (Author) commented:

@BusyJay Addressed comments, PTAL again, thanks a lot!

BusyJay (Member) commented on Jun 4, 2018:

LGTM

breezewish (Author) commented:

/run-integration-tests

huachaohuang (Contributor) commented:

/run-integration-tests

breezewish merged commit 1a3f1ba into tikv:master on Jun 7, 2018.
breezewish deleted the wenxuan/readpool-tasks-fix branch on Jun 7, 2018 at 05:18.
breezewish added a commit to breezewish/tikv that referenced this pull request on Nov 13, 2018.
breezewish added a commit to breezewish/tikv that referenced this pull request on Nov 13, 2018 (…#3093). Signed-off-by: Breezewish <breezewish@pingcap.com>
breezewish added a commit that referenced this pull request on Nov 14, 2018 (… #3782). Signed-off-by: Breezewish <breezewish@pingcap.com>
sticnarf pushed a commit to sticnarf/tikv that referenced this pull request on Oct 27, 2019.