Remove asserts in CoreML resize #17626

Merged
meta-codesync[bot] merged 1 commit into pytorch:main from metascroy:export-D93942192 on Feb 25, 2026

Conversation

@metascroy
Contributor

Summary:
The `assert(shape[i] <= shape_[i])` constraint in `MultiArray::MemoryLayout::resize()` was unnecessary and overly restrictive.

`MemoryLayout` is purely metadata describing shape and strides; it has no knowledge of the underlying memory allocation. The actual memory safety is guaranteed at a higher level: ExecuTorch tensors are pre-allocated with sufficient capacity to handle dynamic output shapes from CoreML model execution.

The resize function is called in `ETCoreMLModelManager.mm` to update output tensor metadata after model inference, where the new shape comes directly from CoreML's `modelOutputs[i].shape`. Since the ExecuTorch output buffers are sized appropriately for the model's maximum output dimensions, the resize operation is always safe regardless of whether the new shape is smaller or larger than the previous metadata indicated.

The remaining asserts are retained because they catch actual programming errors:

  • `assert(shape.size() == shape_.size())` - prevents rank mismatch causing out-of-bounds access
  • `assert(shape[i] >= 1)` - ensures valid dimensions for correct stride computation
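The behavior described above can be illustrated with a minimal C++ sketch. This is a hypothetical, simplified stand-in (the struct, member names, and contiguous-stride handling are assumptions, not the actual ExecuTorch implementation) showing a metadata-only resize that keeps the two retained asserts and drops the upper-bound check:

```cpp
// Hypothetical sketch of a metadata-only resize; names loosely mirror
// MultiArray::MemoryLayout but this is NOT the actual ExecuTorch code.
#include <cassert>
#include <cstddef>
#include <vector>

struct MemoryLayout {
    std::vector<size_t> shape_;
    std::vector<size_t> strides_;  // contiguous (row-major) strides

    void resize(const std::vector<size_t>& shape) {
        // Retained: a rank mismatch would cause out-of-bounds access below.
        assert(shape.size() == shape_.size());
        for (size_t i = 0; i < shape.size(); ++i) {
            // Retained: dimensions must be >= 1 for valid stride computation.
            assert(shape[i] >= 1);
            // Removed: assert(shape[i] <= shape_[i]). MemoryLayout is pure
            // metadata, so growing a dimension is fine as long as the
            // underlying buffer was pre-allocated with enough capacity.
            shape_[i] = shape[i];
        }
        // Recompute contiguous strides for the new shape.
        size_t stride = 1;
        for (size_t i = shape_.size(); i-- > 0;) {
            strides_[i] = stride;
            stride *= shape_[i];
        }
    }
};
```

With the upper-bound assert gone, a call such as `resize({1, 5, 4})` on a layout whose previous shape was `{1, 3, 3}` succeeds and recomputes strides, which is exactly the dynamic-output-shape case the PR targets.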

Reviewed By: GregoryComer

Differential Revision: D93942192

@pytorch-bot

pytorch-bot Bot commented Feb 23, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17626

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 1 Unrelated Failure

As of commit 404f1d5 with merge base 4dadf24:

NEW FAILURES - The following jobs have failed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla Bot added the CLA Signed label Feb 23, 2026
@meta-codesync
Contributor

meta-codesync Bot commented Feb 23, 2026

@metascroy has exported this pull request. If you are a Meta employee, you can view the originating Diff in D93942192.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

metascroy added a commit to metascroy/executorch that referenced this pull request Feb 23, 2026
metascroy added a commit to metascroy/executorch that referenced this pull request Feb 24, 2026
metascroy added a commit to metascroy/executorch that referenced this pull request Feb 24, 2026
metascroy added a commit to metascroy/executorch that referenced this pull request Feb 25, 2026

Summary:
Pull Request resolved: pytorch#17626

Reviewed By: shabbyowen, GregoryComer

Differential Revision: D93942192
@meta-codesync meta-codesync Bot merged commit 1091e3b into pytorch:main Feb 25, 2026
161 of 164 checks passed

Labels

CLA Signed (managed by the Facebook bot; authors must sign the CLA before a PR can be reviewed), fb-exported, meta-exported

2 participants