GH-42198: [C++] Fix GetRecordBatchPayload crashes for device data #42199
Conversation
Seems there are failing tests? (And flaky macOS builds)
+1 on some failing tests (maybe because of tests that serialize a string, binary, or list with an offset?)
I am assuming this is the Arrow C++ version of what I'm trying to optimize in apache/arrow-nanoarrow#509? I haven't measured how slow the previous version was or the effect of copying the required buffer values asynchronously, but my current mental model of GPU–host behaviour would suggest that waiting for two calls to MemoryManager::CopyBufferSliceToCPU() to complete synchronously is suboptimal (although better than crashing!).
cpp/src/arrow/ipc/writer.cc
Outdated
total_data_bytes = array.value_offset(array.length()) - array.value_offset(0);
offset_type first_offset_value, last_offset_value;
RETURN_NOT_OK(MemoryManager::CopyBufferSliceToCPU(
    value_offsets, 0, sizeof(offset_type),
Does this need to take into account the array-level offset?
Since we create the zero-based value offsets, it turns out we can just grab the final offset after the conversion and avoid the subtraction, as that will be the total byte length that we need. So we only need to perform a single copy there.
Personally I wouldn't necessarily expect that (e.g. I would expect this to crash with sliced nested arrays?), but if this does actually work, it would be good to document that in the docstring.
@paleolimbot You were right that I didn't handle the offset properly; I've fixed that now. Since the two different offsets aren't contiguous, we'd have to do the copies separately anyway. It would definitely be slightly more efficient to do two async memory copies before syncing on them, but as you said, it's better than crashing 😄 @jorisvandenbossche you are correct that it will currently crash with sliced arrays in some cases. I'm certain that there are edge cases I haven't fully covered yet, which is why I didn't update the docs with this change. For now this is only focusing on string/binary arrays and lists until we run into other cases.
While you (and we) are here, would it make sense to at least add tests for a few sliced arrays?
@paleolimbot currently, creating an IPC payload from sliced arrays will fail due to having to recompute the null count, which won't work for device data. As a result, we can't add tests for this until we address that problem.
Would it be appropriate to add a test to ensure that this fails instead of crashing?
@paleolimbot currently in debug mode it will abort on
I still think it would be a good idea to have
I'll take a look and see if there's a relatively easy way to do that without some major refactoring. Ultimately the issue is that
Agreed that a refactor is out of scope! I basically had in mind a quick check before calling
That's exactly where the failure comes from. I'll add a check there and a test that triggers it and confirms with
@@ -154,6 +154,10 @@ class RecordBatchSerializer {
    return Status::CapacityError("Cannot write arrays larger than 2^31 - 1 in length");
  }

  if (arr.offset() != 0 && arr.device_type() != DeviceAllocationType::kCPU) {
    return Status::NotImplemented("Cannot compute null count for non-cpu sliced array");
Can we link this to a follow-up ticket?
I created a ticket (#43029) and added a comment here linking to it.
    value_offsets, array.length() * sizeof(offset_type), sizeof(offset_type),
    reinterpret_cast<uint8_t*>(&last_offset_value)));

total_data_bytes = last_offset_value;
Why don't we take into account offset #0 here anymore?
Just above this we use GetZeroBasedValueOffsets and populate value_offsets with zero-indexed offsets. This means that if we grab the very last offset in that buffer (at array.length() * sizeof(offset_type)) we have the full number of data bytes, and we can save the extra copy to grab offset #0.
I think a rebase should fix a lot of the CI flakes?
@lidavidm I did a rebase, but I don't think the integration CI failure is related to this change; at least I can't think of how it would be. Any ideas what's up with the Java integration testing CI there?
CC @vibhatha
@vibhatha any chance you'd be able to help out here?
I'll try to take a look in between ADBC stuff since there's been no response
Archery is quite frustrating when trying to just debug a single case...
I guess there's no way to do things other than to run the entire suite then try to pick up the pieces
I think it's C++, anyway. Java doesn't generate the file in question; it's the C++ side that's generating the invalid file.
And if I revert this patch, the problem goes away. So the problem is in this patch, not Java.
You were able to replicate the failure with this? If so, that would definitely make it easier to debug.
Thanks for digging into it and figuring out that reproducer @lidavidm, I was able to debug and fix it in my latest commit.
After merging your PR, Conbench analyzed the 3 benchmarking runs that have been run so far on merge-commit e9f35ff. There were no benchmark performance regressions. 🎉 The full Conbench report has more details. It also includes information about 49 possible false positives for unstable benchmarks that are known to sometimes produce them.
Rationale for this change

Ensuring that creating IPC payloads works correctly for non-CPU data by utilizing CopyBufferSliceToCPU.

What changes are included in this PR?

Adding calls to CopyBufferSliceToCPU to the IPC writer for base binary types and for list types, to avoid calls to value_offset in those cases.

Are these changes tested?

Yes. Tests are added to cuda_test.cc.

Are there any user-facing changes?

No.