GH-34474: [C++] Detect and raise an error if a join will need too much key data (apache#35087)

### Rationale for this change

This fixes the test case from apache#34474, though there are likely still other bad scenarios with large joins. I've fixed this one because its behavior (returning invalid data) is particularly bad; in most other too-much-data scenarios I'm guessing we simply crash. Still, a test suite of some kind stressing large joins would be good to have. Perhaps it could be added if someone finds time to work on join spilling.

### What changes are included in this PR?

If the join would require more than 4GiB of key data, it now returns an invalid status instead of producing invalid data.
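
For context, the failure mode being replaced here is plain integer wraparound: the row table presumably stores per-row offsets into the key data as `uint32_t`, so once the buffer grows past 2^32 bytes the narrowed offsets wrap around and point at the wrong rows. A minimal standalone sketch of that wraparound (the values are illustrative, not taken from the join code):

```cpp
#include <cstdint>
#include <iostream>

int main() {
  // 4GiB of accumulated key data, plus 100 bytes.
  int64_t num_bytes = (int64_t{1} << 32) + 100;
  // Narrowing to a 32-bit offset silently wraps to 100, so a row stored
  // past the 4GiB mark would be read from the wrong position.
  uint32_t offset = static_cast<uint32_t>(num_bytes);
  std::cout << offset << "\n";  // prints 100
  return 0;
}
```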

### Are these changes tested?

No. I created a unit test, but it requires over 16GiB of RAM: besides the 4GiB of input data itself, by the time you accumulate 4GiB of key data various other join state buffers have also grown. The test also took nearly a minute to run. I think investigating and creating a test suite for large joins is probably a standalone effort.
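
For anyone picking up that effort, a join at this scale would be declared roughly as follows; with this change, such a plan should fail with the `Invalid` status above rather than complete with corrupt output. This is only a sketch against the Acero `Declaration` API: building the multi-gigabyte input tables is omitted, and the `"key"` column name and `RunBigJoin` helper are assumptions for illustration.

```cpp
#include <memory>
#include <utility>

#include "arrow/acero/exec_plan.h"
#include "arrow/acero/options.h"
#include "arrow/result.h"
#include "arrow/table.h"

// Sketch: `left` and `right` are assumed to be tables whose "key" columns
// together carry more than 4GiB of key data.
arrow::Result<std::shared_ptr<arrow::Table>> RunBigJoin(
    std::shared_ptr<arrow::Table> left, std::shared_ptr<arrow::Table> right) {
  namespace ac = arrow::acero;
  ac::Declaration left_src{"table_source", ac::TableSourceNodeOptions{left}};
  ac::Declaration right_src{"table_source", ac::TableSourceNodeOptions{right}};
  ac::HashJoinNodeOptions join_opts{
      ac::JoinType::INNER, /*left_keys=*/{"key"}, /*right_keys=*/{"key"}};
  ac::Declaration join{"hashjoin", {left_src, right_src}, std::move(join_opts)};
  // Once the accumulated key data exceeds 2^32 bytes, this should now
  // surface Status::Invalid instead of returning a table with bogus rows.
  return ac::DeclarationToTable(std::move(join));
}
```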

### Are there any user-facing changes?

No.
* Closes: apache#34474

Authored-by: Weston Pace <weston.pace@gmail.com>
Signed-off-by: Joris Van den Bossche <jorisvandenbossche@gmail.com>
westonpace authored and liujiacheng777 committed May 11, 2023
1 parent 0c93615 commit e9d55d3
Showing 1 changed file with 6 additions and 0 deletions.

cpp/src/arrow/acero/swiss_join.cc

```diff
@@ -473,6 +473,12 @@ Status RowArrayMerge::PrepareForMerge(RowArray* target,
     (*first_target_row_id)[sources.size()] = num_rows;
   }
 
+  if (num_bytes > std::numeric_limits<uint32_t>::max()) {
+    return Status::Invalid(
+        "There are more than 2^32 bytes of key data. Acero cannot "
+        "process a join of this magnitude");
+  }
+
   // Allocate target memory
   //
   target->rows_.Clean();
```
