
Conversation

@SS-JIA SS-JIA (Contributor) commented Aug 1, 2025


Summary:
## Changes

Revert some changes to the D78360038 / pytorch#12527 stack, which enabled support for submitting command buffers with an associated semaphore.

## Context

The original intent was to allow command buffers to be correctly ordered when submitting multiple command buffers for model inference. Previously it was thought that the Vulkan API would not be aware of the dependency between two separate command buffer submissions, so a semaphore would be needed to ensure correct execution order between them.

However, I noticed the following validation layer error on Mac:

```
Validation 0 vkQueueSubmit(): pSubmits[0].pSignalSemaphores[0] (VkSemaphore 0x10f000000010f) is being signaled by VkQueue 0x1181082a8, but it was previously signaled by VkQueue 0x1181082a8 and has not since been waited on.
The Vulkan spec states: Each binary semaphore element of the pSignalSemaphores member of any element of pSubmits must be unsignaled when the semaphore signal operation it defines is executed on the device (https://vulkan.lunarg.com/doc/view/1.4.313.0/mac/antora/spec/latest/chapters/cmdbuffers.html#VUID-vkQueueSubmit-pSignalSemaphores-00067)
```

This happens because we store the `VkSemaphore` together with the `VkCommandBuffer` and use the same `VkSemaphore` in the submit info every time the command buffer is submitted. However, there is no mechanism to reset the `VkSemaphore` to the unsignaled state once it has been signaled.

Therefore, as-is, these `VkSemaphore`s serve no purpose after the first inference.
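
To make the failure mode concrete, here is a minimal sketch of the pattern described above (illustrative names only, not the actual ExecuTorch types): the semaphore lives with the command buffer and is listed in `pSignalSemaphores` on every submit, but nothing ever waits on it to return it to the unsignaled state, so the second submit triggers VUID-vkQueueSubmit-pSignalSemaphores-00067.

```cpp
#include <vulkan/vulkan.h>

// Hypothetical bundle: the semaphore is created once and reused forever.
struct CommandBufferBundle {
  VkCommandBuffer cmd = VK_NULL_HANDLE;
  VkSemaphore signal_semaphore = VK_NULL_HANDLE;
};

void submit(VkQueue queue, CommandBufferBundle& bundle, VkFence fence) {
  VkSubmitInfo submit_info{};
  submit_info.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
  submit_info.commandBufferCount = 1;
  submit_info.pCommandBuffers = &bundle.cmd;
  // Reusing the same binary semaphore on every submission: valid for the
  // first inference, invalid afterwards because it is still signaled.
  submit_info.signalSemaphoreCount = 1;
  submit_info.pSignalSemaphores = &bundle.signal_semaphore;
  vkQueueSubmit(queue, 1, &submit_info, fence);
}
```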

If we were to keep using semaphores, the correct approach would be to create a new one with every submission rather than attaching one to a specific command buffer, as sketched below.
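
A rough sketch of that alternative (a hypothetical helper, not code in this PR): the semaphore is created per submission, and the caller destroys it only after the wait on it has executed.

```cpp
#include <vulkan/vulkan.h>

// Create a fresh binary semaphore for a single submission. The caller owns
// it and must destroy it once the work that waits on it has completed.
VkSemaphore create_submit_semaphore(VkDevice device) {
  VkSemaphoreCreateInfo create_info{};
  create_info.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO;
  VkSemaphore semaphore = VK_NULL_HANDLE;
  vkCreateSemaphore(device, &create_info, nullptr, &semaphore);
  return semaphore;
}
```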

However, after some deeper research I found that a `VkSemaphore` is not actually needed to ensure correct execution order between command buffers submitted to the same queue; the pipeline barriers that we already insert should be sufficient. My primary source is a Stack Overflow question whose answer references the Vulkan API spec on this point.

Therefore, remove the `VkSemaphore` machinery since it's not required.
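
For reference, a minimal sketch of the kind of pipeline barrier that provides this ordering on a single queue: command buffers begin execution in submission order, and a compute-to-compute barrier makes earlier shader writes visible to later shader reads. The stage and access masks below are illustrative and may differ from the barriers the backend actually records.

```cpp
#include <vulkan/vulkan.h>

// Record a global memory barrier so that compute-shader writes made by
// earlier work are available and visible to subsequent compute-shader reads.
void record_compute_to_compute_barrier(VkCommandBuffer cmd) {
  VkMemoryBarrier barrier{};
  barrier.sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER;
  barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;
  barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
  vkCmdPipelineBarrier(
      cmd,
      VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  // srcStageMask
      VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  // dstStageMask
      0,                                     // dependencyFlags
      1, &barrier,                           // global memory barriers
      0, nullptr,                            // buffer memory barriers
      0, nullptr);                           // image memory barriers
}
```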

Differential Revision: D79468286

pytorch-bot bot commented Aug 1, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13070

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Unrelated Failure

As of commit 1a4b217 with merge base 1d80837:

NEW FAILURE - The following job has failed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label Aug 1, 2025
@facebook-github-bot facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D79468286


github-actions bot commented Aug 1, 2025

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@facebook-github-bot facebook-github-bot merged commit e19e4a7 into pytorch:main Aug 1, 2025
102 of 107 checks passed
agrima1304 pushed a commit to agrima1304/executorch that referenced this pull request Aug 26, 2025
Differential Revision: D79468286

Pull Request resolved: pytorch#13070

Labels

CLA Signed, fb-exported


3 participants