
Conversation

hsharma35 (Contributor) commented:

Summary:
Add memory planning constraints for the idma ops (see the sketch below):

  1. idma load: the output must be placed in DTCM.
  2. idma store: the input must be placed in DTCM.
  3. idma wait: the output aliases the input, so no new buffer is allocated for it.

Reviewed By: zonglinpeng

Differential Revision: D77232760
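
For illustration only, here is a minimal, self-contained sketch of how these three rules could be expressed as constraints for a memory planner. The `MemoryBank`, `Node`, and `Constraints` types, the `add_idma_constraints` helper, and the op-name strings are hypothetical stand-ins, not the actual ExecuTorch APIs touched by this diff:

```python
# Hypothetical sketch: none of these names come from the ExecuTorch codebase.
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Dict, List


class MemoryBank(IntEnum):
    SRAM = 0
    DTCM = 1  # tightly coupled data memory accessed by the iDMA engine


@dataclass
class Node:
    op: str            # e.g. "idma_load", "idma_store", "idma_wait"
    inputs: List[int]  # ids of input tensors
    output: int        # id of the tensor this node produces


@dataclass
class Constraints:
    # tensor id -> memory bank the planner must place it in
    pinned: Dict[int, MemoryBank] = field(default_factory=dict)
    # tensor id -> tensor id whose storage it must reuse
    aliases: Dict[int, int] = field(default_factory=dict)

    def pin(self, tensor_id: int, bank: MemoryBank) -> None:
        self.pinned[tensor_id] = bank

    def alias(self, dst: int, src: int) -> None:
        self.aliases[dst] = src


def add_idma_constraints(node: Node, constraints: Constraints) -> None:
    """Record the placement/aliasing rules listed in the summary above."""
    if node.op == "idma_load":
        # 1. idma load: the DMA engine writes into local memory,
        #    so the output tensor must live in DTCM.
        constraints.pin(node.output, MemoryBank.DTCM)
    elif node.op == "idma_store":
        # 2. idma store: the DMA engine reads from local memory,
        #    so the input tensor must live in DTCM.
        constraints.pin(node.inputs[0], MemoryBank.DTCM)
    elif node.op == "idma_wait":
        # 3. idma wait: it only synchronizes on the transfer; its output is
        #    the same buffer as its input, so no new allocation is needed.
        constraints.alias(node.output, node.inputs[0])


if __name__ == "__main__":
    c = Constraints()
    add_idma_constraints(Node("idma_load", inputs=[0], output=1), c)
    add_idma_constraints(Node("idma_wait", inputs=[1], output=2), c)
    print(c.pinned)   # {1: <MemoryBank.DTCM: 1>}
    print(c.aliases)  # {2: 1}
```

A real pass would walk the exported graph and hand these pins and aliases to the memory planner; the sketch only makes the three rules above concrete.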

pytorch-bot bot commented Jul 17, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12597

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 3 New Failures, 2 Unrelated Failures

As of commit ba97db3 with merge base 02da205:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label on Jul 17, 2025
facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D77232760

hsharma35 added 2 commits to hsharma35/executorch that referenced this pull request on Jul 18, 2025
hsharma35 requested a review from skrtskrtfb on July 18, 2025 at 22:57
hsharma35 added a commit to hsharma35/executorch that referenced this pull request on Jul 18, 2025
hsharma35 added the release notes: none label on Jul 18, 2025
hsharma35 added 3 commits to hsharma35/executorch that referenced this pull request on Sep 2, 2025
hsharma35 added 3 commits to hsharma35/executorch that referenced this pull request on Sep 3, 2025
hsharma35 force-pushed the export-D77232760 branch 2 times, most recently from 7f5e782 to ba97db3, on September 3, 2025 at 00:34
hsharma35 added a commit to hsharma35/executorch that referenced this pull request on Sep 3, 2025
facebook-github-bot merged commit ae862f7 into pytorch:main on Sep 3, 2025
108 of 115 checks passed