[ET-VK] Add etvk.use_existing_vma config to avoid duplicate VMA symbols #18797

Merged
SS-JIA merged 1 commit into main from gh/SS-JIA/512/orig on Apr 9, 2026
Conversation

pytorchbot (Collaborator) commented:

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #18522 by @SS-JIA
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/512/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/512/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/512/orig
Differential Revision: D98250268
@diff-train-skip-merge

Pull Request resolved: #18522

Apps that already link VulkanMemoryAllocatorInstantiated (e.g. Stella, via IGL
or Diamond/Skia) get duplicate symbol errors when also linking ExecuTorch's
Vulkan backend, because vma_api.cpp defines VMA_IMPLEMENTATION independently.
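
For context, a minimal sketch of how the collision arises with VMA's single-header pattern. The file names here are illustrative, not the actual Stella/IGL or ExecuTorch sources:

```cpp
// tu_prebuilt.cpp -- e.g. compiled into VulkanMemoryAllocatorInstantiated
#define VMA_IMPLEMENTATION  // emits the definitions of every VMA function
#include "vk_mem_alloc.h"

// tu_executorch.cpp -- e.g. ExecuTorch's vma_api.cpp before this change
#define VMA_IMPLEMENTATION  // emits the same definitions a second time
#include "vk_mem_alloc.h"

// Linking both object files into one binary fails with errors like:
//   duplicate symbol: vmaCreateAllocator (and every other VMA entry point)
```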

Add a Buck config flag `etvk.use_existing_vma=1` that (see the sketch after this list):
- Defines ETVK_USE_META_VMA, which makes vma_api.h match the third-party
  VulkanMemoryAllocatorInstantiated config (Vulkan 1.2, dynamic function
  loading) so struct layouts agree
- Skips VMA_IMPLEMENTATION in vma_api.cpp so no duplicate definitions are
  emitted
- Swaps the Buck dep from VulkanMemoryAllocator_xplat (header-only) to
  VulkanMemoryAllocatorInstantiated (pre-compiled)
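
A minimal sketch of what the first two bullets could translate to in the sources. The macro name ETVK_USE_META_VMA and the overall shape follow the PR description, but the exact file contents and config values are assumptions:

```cpp
// vma_api.h (sketch): when reusing the pre-compiled VMA, mirror the
// config that VulkanMemoryAllocatorInstantiated was built with so the
// VmaAllocator/VmaAllocation struct layouts agree across the boundary.
#ifdef ETVK_USE_META_VMA
#define VMA_VULKAN_VERSION 1002000      // target Vulkan 1.2
#define VMA_STATIC_VULKAN_FUNCTIONS 0   // resolve Vulkan entry points
#define VMA_DYNAMIC_VULKAN_FUNCTIONS 1  // dynamically at runtime
#endif
#include "vk_mem_alloc.h"
```

```cpp
// vma_api.cpp (sketch): only instantiate VMA here when we are NOT
// reusing the pre-compiled VulkanMemoryAllocatorInstantiated library,
// so no duplicate definitions are emitted.
#ifndef ETVK_USE_META_VMA
#define VMA_IMPLEMENTATION
#endif
#include "vma_api.h"
```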

Off by default — no behavior change for existing builds or OSS.
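
For anyone building with Buck, enabling the flag would presumably look like passing the config on the command line, e.g. `buck2 build -c etvk.use_existing_vma=1 //path/to:target` (target name hypothetical), or setting `use_existing_vma = 1` under an `[etvk]` section in `.buckconfig`.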
ghstack-source-id: 364855830
@exported-using-ghexport

Differential Revision: [D98250268](https://our.internmc.facebook.com/intern/diff/D98250268/)
@pytorchbot requested a review from SS-JIA as a code owner on April 9, 2026 19:14

pytorch-bot Bot commented Apr 9, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18797

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 1 New Failure, 2 Pending, 2 Unrelated Failures

As of commit c01686c with merge base 0ee0f67:

NEW FAILURE - The following job has failed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla Bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Apr 9, 2026

github-actions Bot commented Apr 9, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

SS-JIA merged commit ecd97d5 into main on Apr 9, 2026
163 of 170 checks passed
SS-JIA deleted the gh/SS-JIA/512/orig branch on April 9, 2026 21:25