Fix write-heap-buffer-overflow in copy_out #15584
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15584
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 4 Unrelated Failures as of commit cec4b7b with merge base 2f4ad68.
NEW FAILURES - The following jobs have failed:
BROKEN TRUNK - The following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a
Force-pushed from 76e0950 to 5b467b2 (Compare)
  // non-empty
  if (internal::sizes_match_ignoring_leading_1s(out.sizes(), src.sizes()) &&
-     src.numel() > 0) {
+     src.numel() > 0 && out.nbytes() >= src.nbytes() &&
@manuelcandales If `numel` is equal but `nbytes` isn't, should we be promoting the dtype here? I'm not actually sure what `copy_` does for mixed dtypes.
@JacobSzwejbka If `nbytes` isn't equal, it should go into the else statement, which I think handles different dtypes.
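For context on the numel-vs-nbytes distinction raised above, here is a minimal standalone C++ sketch (not ExecuTorch code) showing that two tensors with the same element count can still have different byte sizes when their dtypes differ, which is exactly the case the added `nbytes()` check guards against:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

int main() {
  const std::size_t numel = 8;
  const std::size_t float_nbytes = numel * sizeof(float);          // 32 bytes
  const std::size_t half_nbytes = numel * sizeof(std::uint16_t);   // 16 bytes (e.g. fp16)
  // memcpy'ing float_nbytes into a 16-byte fp16 buffer would overflow it,
  // even though both tensors have the same numel.
  std::printf("numel=%zu, float bytes=%zu, half bytes=%zu\n",
              numel, float_nbytes, half_nbytes);
  return 0;
}
```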
Summary:
Also add a check on dtypes to make sure `out` and `src` have the same dtype; otherwise we may copy bytes of the wrong dtype without conversion.
Also fix the same issue in `copy_`.
---
The crash is a write-heap-buffer-overflow that occurs in the `torch::executor::native::copy_out` function. The root cause is that the `std::memcpy` operation in this function does not check if the destination buffer `out` is large enough to hold the data from the source tensor `src`. Specifically, the condition `internal::sizes_match_ignoring_leading_1s(out.sizes(), src.sizes())` checks if the sizes of `out` and `src` match, ignoring any leading dimensions of size 1 in `out`, but it does not guarantee that `out.nbytes()` is greater than or equal to `src.nbytes()`.
The patch fixes the crash by adding an additional check `out.nbytes() >= src.nbytes()` before performing the `std::memcpy` operation. This ensures that the destination buffer `out` is large enough to hold the data from `src`, preventing the buffer overflow.
```cpp
if (internal::sizes_match_ignoring_leading_1s(out.sizes(), src.sizes()) &&
src.numel() > 0 && out.nbytes() >= src.nbytes()) {
std::memcpy(out.mutable_data_ptr(), src.const_data_ptr(), src.nbytes());
}
```
Other considerations that reviewers should take into account when validating the patch include verifying that the additional check does not introduce any performance regressions and that it correctly handles edge cases, such as when `src` is empty or when `out` and `src` have different data types. Reviewers should also check that the patch does not affect the functionality of the `copy_out` function in other scenarios. Additionally, it is worth verifying that the fix is consistent with the existing error handling and checking mechanisms in the `copy_out` function.
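To make those edge cases concrete, the following is a minimal, self-contained C++ sketch rather than the ExecuTorch implementation: the `Tensorish` struct and `copy_fast_path()` are hypothetical stand-ins that model only the byte-size/dtype part of the guard (the shape check is omitted), so the overflow, empty-source, and matching-copy cases can be exercised in isolation.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

enum class DType { Float, Half };

// Hypothetical stand-in for a tensor: a byte buffer plus minimal metadata.
struct Tensorish {
  std::vector<std::uint8_t> storage;
  std::size_t numel;
  DType dtype;
  std::size_t nbytes() const { return storage.size(); }
};

// Returns true if the raw-memcpy fast path was taken. Mirrors the patched
// condition: non-empty source, destination buffer at least as large as the
// source, and (per the summary) matching dtypes.
bool copy_fast_path(Tensorish& out, const Tensorish& src) {
  if (src.numel > 0 && out.nbytes() >= src.nbytes() && out.dtype == src.dtype) {
    std::memcpy(out.storage.data(), src.storage.data(), src.nbytes());
    return true;
  }
  return false;  // caller would fall back to an element-wise / converting copy
}

int main() {
  Tensorish src{std::vector<std::uint8_t>(32), 8, DType::Float};
  Tensorish too_small{std::vector<std::uint8_t>(16), 8, DType::Half};
  Tensorish ok{std::vector<std::uint8_t>(32), 8, DType::Float};
  Tensorish empty{std::vector<std::uint8_t>(), 0, DType::Float};

  assert(!copy_fast_path(too_small, src));  // would overflow: fast path rejected
  assert(copy_fast_path(ok, src));          // same size and dtype: copied
  assert(!copy_fast_path(ok, empty));       // empty source: nothing to copy
  return 0;
}
```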
NOTE: This diff is entirely auto-generated by an LLM-based patch generator.
Reviewers should carefully examine this diff, as Lionhead does not guarantee the
correctness of the patch beyond fixing the crash and passing existing tests.
Please commandeer this diff and revise as needed. Our bot does not respond to
comments or revision requests (yet).
Differential Revision: D80885980
Force-pushed from 5b467b2 to 5cf4e04 (Compare)
Reviewed By: JacobSzwejbka
Differential Revision: D80885980
Force-pushed from 5cf4e04 to cec4b7b (Compare)
@pytorchbot cherry-pick --onto release/1.0 -c critical
Cherry picking #15584: The cherry pick PR is at #15784, and it is recommended to link a critical cherry pick PR with an issue. The following tracker issues are updated:
Details for Dev Infra team: Raised by workflow job
Summary:
Check that `out.nbytes()` is at least as large as `src.nbytes()` to prevent copying beyond the range of `src`.
Also add a check on dtypes to make sure `out` and `src` have the same dtype; otherwise we may copy bytes of the wrong dtype without conversion.
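As a rough illustration only (not the exact landed code), the two checks in this summary could combine on the memcpy fast path roughly as follows; the `scalar_type()` comparison is an assumption about how the dtype check is expressed:

```cpp
// Sketch only: size and dtype guards combined on the fast path.
// The scalar_type() comparison is an assumed way of expressing "same dtype";
// the landed patch may use a dedicated helper instead.
if (internal::sizes_match_ignoring_leading_1s(out.sizes(), src.sizes()) &&
    src.numel() > 0 && out.nbytes() >= src.nbytes() &&
    out.scalar_type() == src.scalar_type()) {
  std::memcpy(out.mutable_data_ptr(), src.const_data_ptr(), src.nbytes());
}
```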
Differential Revision: D80885980