[vulkan] use VMA at third-party #83934
Conversation
Remove the VMA copy checked in at `aten/src/ATen/native/vulkan/api/vk_mem_alloc.h` and use the version checked into `fbsource/third_party` instead. Also change the open-source CMakeLists to look for VMA in the `third_party` submodule directory. Note that I had to add an alternate VulkanMemoryAllocator target that uses `fb_xplat_cxx_library` instead of `oxx_static_library` to make it work with the Vulkan targets in `caffe2`. Before landing this diff, make sure #83906 is committed on open source, which adds VMA as a git submodule of PyTorch.

Differential Revision: [D38943217](https://our.internmc.facebook.com/intern/diff/D38943217/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook-specific changes or comments; please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D38943217/)!

[ghstack-poisoned]
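To illustrate the kind of CMakeLists change described above, here is a minimal, hypothetical sketch of locating VMA in a `third_party` submodule rather than a vendored header. This is not the actual diff; the variable names and the submodule path are assumptions.

```cmake
# Hypothetical sketch only; the real paths and options live in PyTorch's
# CMake files. Assumes VMA was added as a submodule under third_party/.
if(USE_VULKAN)
  set(VMA_DIR "${CMAKE_CURRENT_SOURCE_DIR}/third_party/VulkanMemoryAllocator")
  if(NOT EXISTS "${VMA_DIR}/include/vk_mem_alloc.h")
    message(FATAL_ERROR
      "VulkanMemoryAllocator submodule not found; "
      "run `git submodule update --init --recursive` first.")
  endif()
  # Use the submodule's header instead of a checked-in copy.
  include_directories(SYSTEM "${VMA_DIR}/include")
endif()
```

The `EXISTS` check gives a clear error when the submodule has not been initialized, which is the failure mode a build would otherwise hit with a confusing missing-header error.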
❌ As of commit e72988a, Dr. CI reported 1 new failure, recognized by patterns; it does not appear to be due to upstream breakages.
@pytorchbot merge -f 'Landed internally' (Initiating merge automatically since the Phabricator diff has merged, using force because this PR might not pass merge_rules.json but landed internally)
@pytorchbot successfully started a merge job. Check the current status here.
Hey @SS-JIA.
Summary:
Pull Request resolved: #83934

Remove the VMA copy checked in at `aten/src/ATen/native/vulkan/api/vk_mem_alloc.h` and use the version checked into `fbsource/third_party` instead. Also change the open-source CMakeLists to look for VMA in the `third_party` submodule directory. Note that I had to add an alternate VulkanMemoryAllocator target that uses `fb_xplat_cxx_library` instead of `oxx_static_library` to make it work with the Vulkan targets in `caffe2`. Before landing this diff, make sure #83906 is committed on open source, which adds VMA as a git submodule of PyTorch.

ghstack-source-id: 165613925
bypass-github-export-checks

Test Plan: [Pytorch Vulkan Testing Procedures](https://www.internalfb.com/intern/wiki/Pytorch_Vulkan_Backend/Development/Vulkan_Testing_Procedures/)

Ensure that internal builds work:

```
cd ~/fbsource
buck build //xplat/caffe2:aten_vulkanAppleMac\#macosx-arm64
buck build //xplat/caffe2:aten_vulkanAndroid\#android-arm64
```

Ensure that fbcode builds work:

```
cd ~/fbsource/fbcode
buck build //caffe2:ATen-vulkan
```

Make changes to OSS PyTorch locally and test that it builds correctly:

```
cd ~/Github/pytorch
USE_VULKAN=1 USE_VULKAN_GPU_DIAGNOSTICS=1 python3 setup.py install
```

Differential Revision: D38943217

fbshipit-source-id: 506199ede644a963733b164b94a5bb470309f583
@diff-train-skip-merge