@SS-JIA (Contributor) commented Aug 23, 2022

Stack from ghstack (oldest at bottom):

Remove the VMA checked in at `aten/src/ATen/native/vulkan/api/vk_mem_alloc.h`, and use the version checked into `fbsource/third_party` instead.

Also change the open source CMakeLists to look for VMA in the third_party submodule directory.

Note that I had to add an alternate VulkanMemoryAllocator target that uses `fb_xplat_cxx_library` instead of `oxx_static_library` to make it work with Vulkan targets in `caffe2`.

Before landing this diff, make sure #83906 is committed on open source, which adds VMA as a git submodule of PyTorch.

Differential Revision: [D38943217](https://our.internmc.facebook.com/intern/diff/D38943217/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook-specific changes or comments; please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D38943217/)!

@diff-train-skip-merge
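Since the open source side consumes VMA as a git submodule after #83906, the throwaway sketch below illustrates the submodule mechanics involved. Everything runs in a temp directory; the repo names and locations are invented for the demo, and only the `third_party/VulkanMemoryAllocator` path is taken from this PR.

```shell
# Throwaway demo of recording a dependency as a git submodule.
# Repo names and paths are illustrative, not PyTorch's real setup.
set -e
work=$(mktemp -d)

# Stand-in for the upstream VulkanMemoryAllocator repo.
git init -q "$work/vma-upstream"
git -C "$work/vma-upstream" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# Stand-in for the consuming repo (pytorch, in the real case).
git init -q "$work/consumer"
# protocol.file.allow=always is needed for local-path submodules on newer git.
git -C "$work/consumer" -c protocol.file.allow=always \
    submodule add "$work/vma-upstream" third_party/VulkanMemoryAllocator

# The submodule is now recorded in .gitmodules.
cat "$work/consumer/.gitmodules"
```

After cloning a repo set up this way, consumers run `git submodule update --init` to materialize the sources, which is what the updated CMakeLists relies on finding.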

@facebook-github-bot (Contributor) commented Aug 23, 2022

❌ 1 New Failures

As of commit e72988a (more details on the Dr. CI page):

  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages

See GitHub Actions build Lint / pr-sanity-checks (1/1)

Step: "PR size check"

2022-08-25T13:18:12.7545684Z If you think that this rule should not apply to your PR,
2022-08-25T13:18:12.7546230Z please contact @albanD or @seemethere.
2022-08-25T13:18:12.7546591Z 
2022-08-25T13:18:12.7585097Z + echo 'Your PR is 19562 LOC which is more than the 2000 maximum'
2022-08-25T13:18:12.7587081Z + echo 'allowed within PyTorch infra. PLease make sure to split up'
2022-08-25T13:18:12.7587582Z + echo 'your PR into smaller pieces that can be reviewed.'
2022-08-25T13:18:12.7588252Z + echo 'If you think that this rule should not apply to your PR,'
2022-08-25T13:18:12.7588614Z + echo 'please contact @albanD or @seemethere.'
2022-08-25T13:18:12.7588927Z + echo
2022-08-25T13:18:12.7589155Z + false
2022-08-25T13:18:12.7645019Z ##[error]Process completed with exit code 1.

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

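The failing step above enforces PyTorch CI's maximum-PR-size rule. A minimal sketch of that gate, using the numbers reported in the log (the real CI script's logic and wording may differ):

```shell
# Minimal sketch of the "PR size check" gate.
# The 2000-line limit and the 19562 LOC figure come from the log above;
# the actual CI script is more involved (it computes the diff size itself).
max_loc=2000
pr_loc=19562
if [ "$pr_loc" -gt "$max_loc" ]; then
  echo "Your PR is $pr_loc LOC which is more than the $max_loc maximum allowed"
  status=1
else
  status=0
fi
```

With these inputs the check trips, which is why this PR (it vendors out a ~19k-line header) needed a force merge after landing internally.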

SS-JIA pushed a commit that referenced this pull request Aug 23, 2022

ghstack-source-id: 165376604
Pull Request resolved: #83934
SS-JIA pushed a commit that referenced this pull request Aug 25, 2022
Pull Request resolved: #83934

ghstack-source-id: 165613925

Differential Revision: [D38943217](https://our.internmc.facebook.com/intern/diff/D38943217/)
@facebook-github-bot

@pytorchbot merge -f 'Landed internally'

(Initiating merge automatically since Phabricator Diff has merged, using force because this PR might not pass merge_rules.json but landed internally)

@pytorchmergebot

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered with the force (-f) flag. This means your change will be merged immediately, bypassing any CI checks (ETA: 1-5 minutes). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@github-actions

Hey @SS-JIA.
You've committed this PR, but it does not have both a 'release notes: ...' and a 'topics: ...' label. Please add one of each to the PR.
  • The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc).
  • The 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc).
The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@facebook-github-bot

@pytorchbot merge -f 'Landed internally'

(Initiating merge automatically since Phabricator Diff has merged, using force because this PR might not pass merge_rules.json but landed internally)

@pytorch deleted a comment from pytorchmergebot Aug 25, 2022
facebook-github-bot pushed a commit that referenced this pull request Aug 25, 2022
Summary:
Pull Request resolved: #83934

ghstack-source-id: 165613925

bypass-github-export-checks

Test Plan:
[Pytorch Vulkan Testing Procedures](https://www.internalfb.com/intern/wiki/Pytorch_Vulkan_Backend/Development/Vulkan_Testing_Procedures/)

Ensure that internal builds work

```
cd ~/fbsource
buck build //xplat/caffe2:aten_vulkanAppleMac\#macosx-arm64
buck build //xplat/caffe2:aten_vulkanAndroid\#android-arm64
```

Ensure that fbcode builds work

```
cd ~/fbsource/fbcode
buck build //caffe2:ATen-vulkan
```

Make changes to OSS Pytorch locally, and test that it can build correctly

```
cd ~/Github/pytorch
USE_VULKAN=1 USE_VULKAN_GPU_DIAGNOSTICS=1 python3 setup.py install
```

Differential Revision: D38943217

fbshipit-source-id: 506199ede644a963733b164b94a5bb470309f583
@facebook-github-bot deleted the gh/SS-JIA/185/head branch August 29, 2022 14:19