[fix](set) fix coredump of set op if total data size exceeds 4G (#61471)#62203

Merged
yiguolei merged 1 commit into apache:branch-4.1 from jacktengg:branch-4.1
Apr 8, 2026

Conversation

@jacktengg
Contributor

What problem does this PR solve?

Issue Number: pick #61471

Problem Summary:
Root Cause Analysis

Root cause: in SetSinkOperatorX::sink(), build_block is overwritten multiple
times, which turns the older entries in the hash table into dangling references.

Problem chain

  1. build_block gets overwritten

In set_sink_operator.cpp:52-56:

if (eos || local_state._mutable_block.allocated_bytes() >= BUILD_BLOCK_MAX_SIZE) { // 4GB
    build_block = local_state._mutable_block.to_block(); // overwrites build_block!
    RETURN_IF_ERROR(_process_build_block(local_state, build_block, state));
    local_state._mutable_block.clear();
}

When the total data size exceeds BUILD_BLOCK_MAX_SIZE (4GB), this flush is triggered more than once:

  • First flush (when allocated_bytes >= 4GB): build_block = batch1 (say it holds rows 0..N1); the hash table stores row_num = 0, 1, ..., N1
  • Second flush (at eos): build_block = batch2 (new data, rows 0..N2); batch1's data is destroyed, and the hash table adds row_num = 0, 1, ..., N2
  2. The hash table stores only row_num, not a block reference

RowRefListWithFlags inherits from RowRef, which stores only a uint32_t row_num
(join_op.h:46); there is no block pointer or offset.

In hash_table_set_build.h:39, what gets stored at build time is Mapped {k}, i.e. the row number k.

  3. The output phase uses a single build_block

In set_source_operator.cpp:161-162:

auto& column = *build_block.get_by_position(idx->second).column;
local_state._mutable_cols[idx->first]->insert_from(column, it->row_num);

At this point build_block is batch2 from the last flush, but the row_num of a
hash-table entry that came from batch1 may exceed batch2's row count.

  4. The out-of-bounds access causes SIGSEGV

When a row_num = X from batch1 (with X greater than batch2's row count) is passed to insert_from(column, X):

// column_string.h:180-197
const size_t size_to_append = src.offsets[X] - src.offsets[X - 1]; // out-of-bounds read → garbage value
const size_t offset = src.offsets[X - 1]; // garbage value
// ...
memcpy(..., &src.chars[offset], size_to_append); // garbage offset → access to unmapped memory → SIGSEGV

Release note

None

Check List (For Author)

  • Test

    • Regression test
    • Unit Test
    • Manual test (add detailed scripts or steps below)
    • No need to test or manual test. Explain why:
      • This is a refactor/code format and no logic has been changed.
      • Previous test can cover this change.
      • No code files have been changed.
      • Other reason
  • Behavior changed:

    • No.
    • Yes.
  • Does this need documentation?

    • No.
    • Yes.

Check List (For Reviewer who merge this PR)

  • Confirm the release note
  • Confirm test cases
  • Confirm document
  • Add branch pick label

@hello-stephen
Contributor

Thank you for your contribution to Apache Doris.
Don't know what should be done next? See How to process your PR.

Please clearly describe your PR:

  1. What problem was fixed (it's best to include specific error reporting information). How it was fixed.
  2. Which behaviors were modified. What was the previous behavior, what is it now, why was it modified, and what possible impacts might there be.
  3. What features were added. Why was this function added?
  4. Which code was refactored and why was this part of the code refactored?
  5. Which functions were optimized and what is the difference before and after the optimization?

@jacktengg
Contributor Author

run buildall

@yiguolei
Contributor

yiguolei commented Apr 8, 2026

skip buildall

@github-actions github-actions bot added the approved Indicates a PR has been approved by one committer. label Apr 8, 2026
@github-actions
Contributor

github-actions bot commented Apr 8, 2026

PR approved by at least one committer and no changes requested.

@github-actions
Contributor

github-actions bot commented Apr 8, 2026

PR approved by anyone and no changes requested.

@yiguolei yiguolei merged commit 20d5302 into apache:branch-4.1 Apr 8, 2026
29 of 31 checks passed

Labels

approved Indicates a PR has been approved by one committer. reviewed
