*: Optimize the underlying SegmentReader concurrency for TableScan under disagg arch #10522
Merged
ti-chi-bot[bot] merged 6 commits into pingcap:master on Nov 4, 2025
Conversation
Signed-off-by: JaySon-Huang <tshent@qq.com>
0586ebe to 59eae76
Contributor (Author):
/test pull-unit-test
JinheLin approved these changes on Nov 4, 2025
Contributor:
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: CalvinNeo, JinheLin.
Contributor (Author):
/cherry-pick release-nextgen-20251011
Member:
@JaySon-Huang: new pull request created to branch
ti-chi-bot[bot] pushed a commit that referenced this pull request on Nov 4, 2025
…der disagg arch (#10522) (#10523)

ref #10356

*: Optimize the underlying SegmentReader concurrency for TableScan under disagg arch

- Adjust the concurrency under disagg
  * `SegmentReaderPoolManager` inits the SegmentReaderPool with size = vcore * dt_read_thread_count_scale (2.0) * 10 for the disagg compute node
  * `StorageDisaggregated` creates SegmentReadTaskPool with max_active_segment = num_stream * 10 for the disagg read task
  * `initThreadPool` generates thread pools with at most 6 * vcore threads for `BuildReadTaskForWNPool`/`BuildReadTaskForWNTablePool`/`BuildReadTaskPool`/`RNWritePageCachePool`
- ScanDetails changes under disagg
  * Add rows_per_sec and bytes_per_sec for TableScan, summed over all concurrency
  * Fix num_columns and read_mode in scan_details
  * Fix the logging of `SegmentReadTaskPool` not showing mpp_task_id correctly
  * Add logging about finishing building tasks from the write node response
- Add an HTTP API /tiflash/remote/cache/evict for evicting the local cache on a compute node for testing

Signed-off-by: JaySon-Huang <tshent@qq.com>
Co-authored-by: JaySon-Huang <tshent@qq.com>
What problem does this PR solve?
Issue Number: ref #10356
Problem Summary: Query performance under the disagg arch is slow when the compute node's local cache misses.
The main reason is that `SegmentReaderPool`'s default size is vcore * dt_read_thread_count_scale, i.e. vcore * 2, and `StorageDisaggregated` creates SegmentReadTaskPool with max_active_segment = num_stream. On a cache miss, SegmentReader performs blocking IO by calling the S3 API, so the rate at which TableScan produces data (it reads from the `SegmentReaderPool`) is not sufficient to keep the Pipeline model's other computation busy.
The best fix would be to refine the StorageLayer reading logic so that the current SegmentReaderTask yields back to the SegmentReaderPool whenever it requires network IO to the remote storage service, giving another SegmentReaderTask a chance to read data from the local cache. But that requires a lot of effort.
**For now, we increase the underlying SegmentReader concurrency to speed up TableScan on cache miss under the disagg arch.**
What is changed and how it works?
Tested with chbenchmark 8000
Check List
Tests
Side effects
Documentation
Release note