
Conversation

zonglinpeng
Contributor

Summary:
Method::load creates a PlatformMemoryAllocator as a fallback when no temp allocator is provided, so our KernelRuntimeContext will always use the PlatformMemoryAllocator since no temp memory is allocated. This change overrides et_pal_allocate to perform the allocation.

code pointer: https://fburl.com/code/216qnnvt
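
A minimal sketch of what such an override might look like on a target without a default heap hookup is below; the exact `et_pal_allocate`/`et_pal_free` signatures are assumptions here and should be checked against executorch/runtime/platform/platform.h rather than taken from this PR:

```cpp
// Hypothetical sketch: route PAL-level allocations to the system heap so that
// temp allocations served by the fallback PlatformMemoryAllocator can succeed.
// On an embedded target this could instead carve memory out of a statically
// reserved scratch buffer.
#include <cstddef>
#include <cstdlib>

extern "C" void* et_pal_allocate(size_t size) {
  return std::malloc(size);
}

extern "C" void et_pal_free(void* ptr) {
  std::free(ptr);
}
```

With something like this in place, a kernel's temp-memory request through the KernelRuntimeContext (e.g. allocate_temp) should no longer fail when Method::load falls back to the PlatformMemoryAllocator.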

Differential Revision: D80578578


pytorch-bot bot commented Aug 19, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13533

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 11 Pending, 2 Unrelated Failures

As of commit ef81355 with merge base aefdc8d:

NEW FAILURE - The following job has failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label Aug 19, 2025 (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed).
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D80578578


This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

zonglinpeng added a commit to zonglinpeng/executorch that referenced this pull request Aug 20, 2025
Summary:

Method::load creates a PlatformMemoryAllocator as a fallback when no temp allocator is provided, so our KernelRuntimeContext will always use the PlatformMemoryAllocator since no temp memory is allocated. This change overrides et_pal_allocate to perform the allocation.

code pointer: https://fburl.com/code/216qnnvt

Differential Revision: D80578578
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D80578578

Summary:

**Have to ship this as an intermediate step to unblock 3 workstreams on the stack**

Modify ATen tests to ingest FACTO-generated test cases.
- Each test gets 30-50 cases with good coverage of
  - Optimized vs. unoptimized flows
  - dtype switch cases

Known issues:
- The FACTO test class is too big to run on the default "heavyweight" CI
  - Currently skipping the whole target on CI; will add it back once the Skycastle flow is ready
- Some FACTO-generated inputs use dtypes that the kernels do not handle
  - Will add exception handling for that
- TODOs mark the two ops that FACTO does not work well on

Reviewed By: manuelcandales, hsharma35

Differential Revision: D79121474
Summary:

Solves the following error:
```
*Error* Unhandled user exception: LoadProhibitedCause (0x00000000)
```

Differential Revision: D80487955
Summary:

Method::load creates a PlatformMemoryAllocator as a fallback when no temp allocator is provided, so our KernelRuntimeContext will always use the PlatformMemoryAllocator since no temp memory is allocated. This change overrides et_pal_allocate to perform the allocation.

code pointer: https://fburl.com/code/216qnnvt

Differential Revision: D80578578
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D80578578

Contributor

@jackzhxng jackzhxng left a comment


@zonglinpeng fix merge conflicts

@facebook-github-bot merged commit c455f1b into pytorch:main Aug 26, 2025
105 of 110 checks passed