GH-42198: [C++] Fix GetRecordBatchPayload crashes for device data #42199

Merged: 7 commits into apache:main from fix-ipc-device-rb on Jul 2, 2024

Conversation

@zeroshade (Member) commented Jun 18, 2024

Rationale for this change

Ensuring that creating IPC payloads works correctly for non-CPU data by utilizing CopyBufferSliceToCPU.

What changes are included in this PR?

Adding calls to CopyBufferSliceToCPU to the IPC writer for base binary types and for list types, to avoid calls to value_offset in those cases.

Are these changes tested?

Yes. Tests are added to cuda_test.cc.

Are there any user-facing changes?

No.
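For context, a hedged sketch of the pattern this PR introduces (illustrative names, not the verbatim writer.cc diff): instead of dereferencing a possibly device-resident offsets buffer via array.value_offset(), the writer stages just the bytes it needs on the host with MemoryManager::CopyBufferSliceToCPU. Since the writer rebases the offsets to zero first (GetZeroBasedValueOffsets, discussed later in this thread), the last offset alone gives the total data byte count.

// Before: crashes for device data, because value_offset() reads the
// offsets buffer directly:
//   total_data_bytes = array.value_offset(array.length()) - array.value_offset(0);
// After: copy the single (zero-based) last offset value to host memory first.
offset_type last_offset_value;
RETURN_NOT_OK(MemoryManager::CopyBufferSliceToCPU(
    value_offsets, array.length() * sizeof(offset_type), sizeof(offset_type),
    reinterpret_cast<uint8_t*>(&last_offset_value)));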

⚠️ GitHub issue #42198 has been automatically assigned in GitHub to PR creator.

@lidavidm (Member) left a comment:

Seems there are failing tests? (And flaky macOS builds)

@github-actions bot added the "awaiting changes" label and removed "awaiting committer review" (Jun 19, 2024)
@paleolimbot (Member) left a comment:

+1 on some failing tests (maybe because of tests that serialize a string, binary, or list with an offset?)

I am assuming this is the Arrow C++ version of what I'm trying to optimize in apache/arrow-nanoarrow#509? I haven't measured how slow the previous version was or the effect of copying the required buffer values asynchronously, but my current mental model of GPU-to-host behaviour would suggest that waiting for two calls to MemoryManager::CopyBufferSliceToCPU() to complete synchronously is suboptimal (although better than crashing!).

Inline review thread on cpp/src/arrow/ipc/writer.cc:

total_data_bytes = array.value_offset(array.length()) - array.value_offset(0);
offset_type first_offset_value, last_offset_value;
RETURN_NOT_OK(MemoryManager::CopyBufferSliceToCPU(
    value_offsets, 0, sizeof(offset_type),
    reinterpret_cast<uint8_t*>(&first_offset_value)));
A reviewer (Member) commented:

Does this need to take into account the array-level offset?

@zeroshade (Member, Author) replied:

Since we create the zero-based value offsets, it turns out we can just grab the final offset after the conversion and avoid the subtraction, as that will be the total byte length we need. So we only need to perform a single copy there.

cpp/src/arrow/ipc/writer.cc (outdated review thread, resolved)
@jorisvandenbossche (Member) commented:

Using GetRecordBatchPayload with non-CPU device buffers should work just fine.

Personally I wouldn't necessarily expect that (e.g. I would expect this to crash with sliced nested arrays?). If this does actually work, it would be good to document that in the docstring of GetRecordBatchPayload. It might also be good to add a general test for that in cuda_test.cc (I don't think this is actually tested right now?), and not only of the corner cases fixed in this PR.

@github-actions bot updated the review-state labels (Jun 19, 2024)
@zeroshade (Member, Author):

@paleolimbot You were right that I didn't handle the offset properly; I've fixed that now. Since the two offsets aren't contiguous, we'd have to do the copies separately anyway. It would definitely be slightly more optimal to do two async memory copies before syncing on them, but as you said, it's better than crashing 😄

@jorisvandenbossche Currently, you are correct that it will crash with sliced arrays in some cases. I'm certain there are edge cases I haven't fully covered yet, which is why I didn't update the docs with this change. For now this is only focusing on string/binary arrays and lists until we run into other cases.
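A hedged sketch of the offset-aware fix described above (illustrative names; not the verbatim writer.cc code): because the first and last offsets of a slice are not adjacent in the offsets buffer, two separate synchronous host copies are needed.

// Copy offset[array.offset()] and offset[array.offset() + array.length()]
// to host memory as two separate copies, then subtract on the host.
offset_type first_offset_value, last_offset_value;
RETURN_NOT_OK(MemoryManager::CopyBufferSliceToCPU(
    value_offsets, array.offset() * sizeof(offset_type), sizeof(offset_type),
    reinterpret_cast<uint8_t*>(&first_offset_value)));
RETURN_NOT_OK(MemoryManager::CopyBufferSliceToCPU(
    value_offsets, (array.offset() + array.length()) * sizeof(offset_type),
    sizeof(offset_type), reinterpret_cast<uint8_t*>(&last_offset_value)));
total_data_bytes = last_offset_value - first_offset_value;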

@paleolimbot (Member):

For now this is only focusing on string/binary arrays and lists until we run into other cases.

While you (and we) are here, would it make sense to at least add tests for a few sliced arrays?

@zeroshade (Member, Author):

@paleolimbot Currently, creating an IPC payload from sliced arrays will fail because it has to recompute the null count, which won't work for device data. As a result, we can't add tests for this until we address that problem.

@paleolimbot (Member):

we can't add tests for this yet

Would it be appropriate to add a test to ensure that this fails instead of crashes?

@zeroshade (Member, Author):

@paleolimbot Currently, in debug mode it will abort on DCHECK(is_cpu()), and in release mode it'll crash from passing NULLPTR to CountSetBits. GetNullCount doesn't currently provide any way to return a Status or otherwise indicate a failure rather than crashing.

@paleolimbot (Member):

I still think it would be a good idea to have ipc::IpcPayload fail instead of crash (and add the test to ensure this happens) while we're all here; however, I'm happy to defer to somebody with more experience with the expectations in Arrow C++ around testing 🙂.

@zeroshade (Member, Author):

I'll take a look and see if there's a relatively easy way to do that without major refactoring. Ultimately the issue is that GetNullCount is never expected to fail, so it doesn't use arrow::Result, but simply returning 0 would be (IMHO) worse than crashing, as it would be invalid and inaccurate.

@paleolimbot (Member):

Agreed that a refactor is out of scope! I basically had in mind a quick check before calling null_count() (I assume the crash happens hereish: https://github.com/apache/arrow/pull/42199/files#diff-1b1d9dca9fdea7624e22f017b8762c4919edf57c2cf43c15d59b8a5e8e1b38a5L158)

@zeroshade (Member, Author):

That's exactly where the failure comes from. I'll add a check there, plus a test that triggers it and confirms with ASSERT_NOT_OK that it doesn't crash.
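A hedged sketch of what such a test might look like in cuda_test.cc — the fixture name, the `batch` variable, and the exact array construction are assumptions for illustration, not the test that was actually committed:

// Hypothetical test: building an IPC payload from a sliced, device-resident
// record batch should return a non-OK Status instead of crashing.
TEST_F(TestCudaArrowIpc, WriteSlicedRecordBatchFails) {
  // Assume `batch` is a RecordBatch whose buffers live on a CUDA device,
  // e.g. copied there by a helper in this fixture.
  std::shared_ptr<RecordBatch> sliced = batch->Slice(1);
  ipc::IpcPayload payload;
  // ASSERT_NOT_OK asserts the Status is an error without aborting the test.
  ASSERT_NOT_OK(ipc::GetRecordBatchPayload(*sliced, ipc::IpcWriteOptions::Defaults(),
                                           &payload));
}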

@zeroshade requested a review from @lidavidm (Jun 21, 2024)
@github-actions bot added the "awaiting change review" label and removed "awaiting changes" (Jun 21, 2024)
@@ -154,6 +154,10 @@ class RecordBatchSerializer {
    return Status::CapacityError("Cannot write arrays larger than 2^31 - 1 in length");
  }

  if (arr.offset() != 0 && arr.device_type() != DeviceAllocationType::kCPU) {
    return Status::NotImplemented("Cannot compute null count for non-cpu sliced array");
  }
A reviewer (Member) commented:

Can we link this to a follow-up ticket?

@zeroshade (Member, Author) replied:

I created a ticket, #43029, and added a comment here linking to it.

Another inline review thread:

RETURN_NOT_OK(MemoryManager::CopyBufferSliceToCPU(
    value_offsets, array.length() * sizeof(offset_type), sizeof(offset_type),
    reinterpret_cast<uint8_t*>(&last_offset_value)));

total_data_bytes = last_offset_value;
A reviewer (Member) commented:

Why don't we take into account offset #0 here anymore?

@zeroshade (Member, Author) replied:

Just above this we use GetZeroBasedValueOffsets to populate value_offsets with zero-based offsets. This means that if we grab the very last offset in that buffer (at byte offset array.length() * sizeof(offset_type)), we have the full number of data bytes and can save the extra copy to grab offset #0.
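A small worked example (hypothetical numbers): if a sliced binary array's raw offsets were {7, 10, 18}, GetZeroBasedValueOffsets rewrites them as {0, 3, 11}; the final entry, 11, is already the slice's total data byte length (18 - 7), so offset #0, which is always 0 after rebasing, never needs to be fetched.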

@zeroshade requested a review from @lidavidm (Jun 24, 2024)
@lidavidm (Member) left a comment:

I think a rebase should fix a lot of the CI flakes?

@github-actions bot added the "awaiting merge" label and removed "awaiting changes" (Jun 24, 2024)
@zeroshade (Member, Author):

@lidavidm I did a rebase, but I don't think the integration CI failure is related to this change; at least, I can't think of how it would be. Any ideas what's up with the Java integration testing CI there?

@lidavidm (Member):

CC @vibhatha

@zeroshade (Member, Author):

If I run archery docker run conda-integration, I'm able to replicate all but the C++/C++ failure. I don't know Java well enough to run the tests and work out the issue there, though.

@vibhatha any chance you'd be able to help out here?

@lidavidm (Member) commented Jul 1, 2024:

I'll try to take a look in between ADBC stuff since there's been no response.

@lidavidm:

Archery is quite frustrating when trying to just debug a single case...

@lidavidm:

I guess there's no way to do this other than to run the entire suite and then try to pick up the pieces.

@lidavidm:

I think it's C++, anyway. Java doesn't generate the file in question; it's

./build/debug/arrow-stream-to-file < /tmp/tmppsyt4cv0/a21e3cc2_generated_primitive_zerolength.producer_file_as_stream > /tmp/tmppsyt4cv0/a21e3cc2_generated_primitive_zerolength.consumer_stream_as_file

that's generating the invalid file.

@lidavidm:

And if I revert this patch, the problem goes away. So the problem is in this patch, not Java.

@zeroshade (Member, Author):

I think it's C++, anyway. Java doesn't generate the file in question ... that's generating the invalid file.

You were able to replicate the failing case with this? If so, that would definitely make it easier to debug.

@zeroshade (Member, Author):

Thanks for digging into it and figuring out that reproducer, @lidavidm. I was able to debug and fix it in my latest commit.

@zeroshade requested a review from @lidavidm (Jul 1, 2024)
@zeroshade merged commit e9f35ff into apache:main on Jul 2, 2024 (35 of 39 checks passed)
@zeroshade deleted the fix-ipc-device-rb branch on Jul 2, 2024

After merging your PR, Conbench analyzed the 3 benchmarking runs that have been run so far on merge-commit e9f35ff.

There were no benchmark performance regressions. 🎉

The full Conbench report has more details. It also includes information about 49 possible false positives for unstable benchmarks that are known to sometimes produce them.

raulcd pushed a commit that referenced this pull request on Jul 3, 2024: GH-42198: [C++] Fix GetRecordBatchPayload crashes for device data (#42199). The commit message repeats the PR description above.
zanmato1984 pushed a commit to zanmato1984/arrow that referenced this pull request on Jul 9, 2024: GH-42198: [C++] Fix GetRecordBatchPayload crashes for device data (apache#42199). The commit message likewise repeats the PR description above.