[Vulkan] Implement GLU operator #80910
CI (Dr. CI): ✅ No failures (0 pending) as of commit 5a9b835.
This pull request was exported from Phabricator. Differential Revision: D37625389
Force-pushed: deb9f5d to 78c53d3
Summary: Pull Request resolved: pytorch#80910

Implemented the GLU operator for the Vulkan backend.

Special-case implementation:
- Input tensor must be 4-dim, i.e. [N, C, H, W].
- C must be a multiple of 8 (the number of channels of the output tensor must be a multiple of 4).
- dim must be 1.

References:
- PyTorch Docs > torch.nn > [GLU](https://pytorch.org/docs/stable/generated/torch.nn.GLU.html)

Test Plan: Added a test case to `/xplat/caffe2/aten/src/ATen/test/vulkan_api_test.cpp`

On Mac:
```
buck run //xplat/caffe2:pt_vulkan_api_test_binAppleMac
```

On Android:
```
buck build -c ndk.custom_libcxx=false -c pt.enable_qpl=0 //xplat/caffe2:pt_vulkan_api_test_binAndroid\#android-arm64 --show-output
adb push buck-out/gen/xplat/caffe2/pt_vulkan_api_test_binAndroid\#android-arm64 /data/local/tmp/vulkan_api_test
adb shell "/data/local/tmp/vulkan_api_test"
```

Reviewed By: SS-JIA

Differential Revision: D37625389

fbshipit-source-id: cb1c48d0538b261ee35483d1c1cdb8a5465856d7
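For context, GLU splits its input in half along the given dimension into halves `a` and `b` and computes `a * sigmoid(b)`, which is why the output has half the input's channels (an input C that is a multiple of 8 yields an output channel count that is a multiple of 4, matching the constraints above). A minimal pure-Python sketch of these semantics on a 1-D channel vector (`glu_1d` is a hypothetical helper, not part of this PR or of PyTorch):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def glu_1d(x):
    # GLU splits the vector into halves a and b along the channel
    # dimension and returns a[i] * sigmoid(b[i]) elementwise.
    half = len(x) // 2
    a, b = x[:half], x[half:]
    return [ai * sigmoid(bi) for ai, bi in zip(a, b)]

# 8 input channels -> 4 output channels, mirroring the
# "C multiple of 8 in, multiple of 4 out" constraint above.
out = glu_1d([1.0] * 8)
print(len(out))  # 4
```

In PyTorch itself this corresponds to `torch.nn.functional.glu(x, dim=1)` on an [N, C, H, W] tensor, which is the operation this PR implements for the Vulkan backend.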
Force-pushed: 78c53d3 to 5a9b835