Replace custom op pad with aten op, post-export #4603
Conversation
🔗 Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/4603
✅ No failures as of commit ab7e2f3 with merge base ce7f5a0.
This pull request was exported from Phabricator. Differential Revision: D60941693
Summary:
Pull Request resolved: pytorch#4603

In the preprocess nn.Module, we use a custom op for pad. The aten pad cannot be exported due to dynamism (it requires the changes in D60687727). Because the custom pad and the aten pad perform the same function, we can replace the custom op with the aten op post-export and avoid writing a custom C++ kernel for pad.

Note: the custom op is added to the disallow list so it doesn't get converted into an edge op before being replaced.

Differential Revision: D60941693
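As a rough sketch of the approach (not the actual pass from this PR), the snippet below retargets call sites of a custom pad op to the functionally equivalent aten op in an exported graph. The op name `torch.ops.my_ops.pad` and its argument layout are hypothetical stand-ins for illustration.

```python
import torch
from torch.export import ExportedProgram

def replace_custom_pad_with_aten(ep: ExportedProgram) -> ExportedProgram:
    """Rewrite call sites of a custom pad op to aten.constant_pad_nd.

    `torch.ops.my_ops.pad` is a hypothetical stand-in for the custom op;
    its argument layout is assumed to match aten.constant_pad_nd, since
    the two ops perform the same function.
    """
    graph = ep.graph_module.graph
    for node in graph.nodes:
        if node.op == "call_function" and node.target == torch.ops.my_ops.pad.default:
            # Retarget the node in place; args and kwargs carry over unchanged.
            node.target = torch.ops.aten.constant_pad_nd.default
    graph.lint()                 # sanity-check the rewritten graph
    ep.graph_module.recompile()  # regenerate Python code for the graph module
    return ep
```

A pass like this has to run after torch.export but before edge conversion, which is why the summary notes keeping the custom op on the disallow list: the op must still appear as a plain call_function node, not an edge op, when the pass looks for it.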
larryliu0820 left a comment:
Thank you!