
Conversation

lucylq (Contributor) commented on Aug 9, 2024

Summary:
In the preprocess nn.Module, we use a custom op for pad, since the aten pad cannot be exported due to dynamism (it requires the changes in D60687727).

Because the custom pad and the aten pad perform the same function, we can replace the custom op with the aten op post-export and avoid writing a custom C++ kernel for pad.
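
For context, a minimal sketch of what the post-export swap could look like, assuming a hypothetical custom op registered as `preprocess::pad` with the same semantics as `aten.constant_pad_nd`; the op name, schema, padding values, and the inline graph rewrite below are illustrative, not the actual code in this diff:

```python
from typing import Sequence

import torch
from torch.export import export


# Hypothetical stand-in for the custom pad op used in preprocess.
@torch.library.custom_op("preprocess::pad", mutates_args=())
def custom_pad(x: torch.Tensor, padding: Sequence[int]) -> torch.Tensor:
    return torch.ops.aten.constant_pad_nd(x, list(padding), 0.0)


@custom_pad.register_fake
def _(x, padding):
    # Shape/dtype propagation for export; mirrors the real implementation.
    return torch.ops.aten.constant_pad_nd(x, list(padding), 0.0)


class Preprocess(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The custom op is used at export time instead of aten pad.
        return torch.ops.preprocess.pad(x, [0, 1, 0, 1])


ep = export(Preprocess(), (torch.randn(3, 3),))

# Post-export: retarget the custom op node to the equivalent aten op,
# so no custom C++ kernel for pad is needed at runtime.
for node in ep.graph.nodes:
    if node.op == "call_function" and node.target == torch.ops.preprocess.pad.default:
        node.target = torch.ops.aten.constant_pad_nd.default
ep.graph_module.recompile()
```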

Differential Revision: D60941693

pytorch-bot (bot) commented on Aug 9, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/4603

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit ab7e2f3 with merge base ce7f5a0:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Aug 9, 2024
facebook-github-bot (Contributor) commented: This pull request was exported from Phabricator. Differential Revision: D60941693

lucylq added a commit to lucylq/executorch-1 that referenced this pull request on Aug 9, 2024
Summary:
Pull Request resolved: pytorch#4603

In the preprocess nn.Module, we use a custom op for pad, since the aten pad cannot be exported due to dynamism (it requires the changes in D60687727).

Because the custom pad and the aten pad perform the same function, we can replace the custom op with the aten op post-export and avoid writing a custom C++ kernel for pad.

Note: add the custom op to the disallow list so it doesn't get converted into an edge op before being replaced.

Differential Revision: D60941693
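
Since the rewrite relies on the custom pad and the aten pad computing the same result, a quick numeric sanity check, reusing the hypothetical `preprocess::pad` op from the sketch above (values are illustrative), could look like:

```python
import torch

# Compare the hypothetical custom pad against its aten replacement in eager mode.
x = torch.randn(3, 3)
padding = [0, 1, 0, 1]
custom_out = torch.ops.preprocess.pad(x, padding)
aten_out = torch.ops.aten.constant_pad_nd(x, padding, 0.0)
torch.testing.assert_close(custom_out, aten_out)
```
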
lucylq force-pushed the export-D60941693 branch from 2df8a86 to 82fc55e on August 9, 2024 at 00:09

lucylq added a commit to lucylq/executorch-1 that referenced this pull request on Aug 9, 2024
Summary:
Pull Request resolved: pytorch#4603

Reviewed By: angelayi

Differential Revision: D60941693
lucylq force-pushed the export-D60941693 branch from 82fc55e to 9d7e740 on August 9, 2024 at 16:57

lucylq force-pushed the export-D60941693 branch from 9d7e740 to e6cc839 on August 9, 2024 at 17:43

lucylq force-pushed the export-D60941693 branch from e6cc839 to fc023fa on August 9, 2024 at 17:49

lucylq force-pushed the export-D60941693 branch from fc023fa to 640ec37 on August 9, 2024 at 20:53

lucylq force-pushed the export-D60941693 branch from 640ec37 to 927c328 on August 9, 2024 at 23:46

larryliu0820 (Contributor) left a comment

Thank you!

lucylq added a commit to lucylq/executorch-1 that referenced this pull request on Aug 9, 2024
Summary:
Pull Request resolved: pytorch#4603

Reviewed By: larryliu0820

Differential Revision: D60941693

lucylq force-pushed the export-D60941693 branch from 927c328 to 3101db2 on August 9, 2024 at 23:51

lucylq force-pushed the export-D60941693 branch from 3101db2 to 9488e8a on August 9, 2024 at 23:57

lucylq added a commit to lucylq/executorch-1 that referenced this pull request on Aug 10, 2024
Summary:
Pull Request resolved: pytorch#4603

Reviewed By: larryliu0820

Differential Revision: D60941693

facebook-github-bot merged commit 18b829c into pytorch:main on Aug 11, 2024

Labels: CLA Signed, fb-exported