
[ET-VK] Enable Dynamic shape support via tensor virtual and physical resizing #121598

Closed
wants to merge 1 commit into from

Conversation

@SS-JIA (Contributor) commented Mar 10, 2024

Summary:

Context

This changeset lays the foundations for supporting dynamic shapes in the ExecuTorch Vulkan delegate by allowing tensors to be resized in one of two ways, sketched in the code below:

  1. Discarding the underlying `vkImage` or `vkBuffer` and reallocating a new `vkImage` or `vkBuffer` with the updated sizes. This method is intended for cases where the current `vkImage` or `vkBuffer` is not large enough to contain the new sizes.
  2. Updating the tensor's size metadata without reallocating any resources. This allows shaders to interpret the underlying `vkImage` or `vkBuffer` as if it were smaller than it actually is, and allows command buffers to be preserved when sizes change.
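
To make the two methods concrete, here is a minimal C++ sketch of the distinction. The type and member names (`VulkanTensor`, `reallocate`, `virtual_resize`, `storage_numel_`) are illustrative assumptions, not the actual `vTensor` API.

```
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

// Illustrative stand-in for a Vulkan-backed tensor; not the real vTensor API.
class VulkanTensor {
 public:
  // Method 1: physical resize. Discard the old vkImage/vkBuffer and allocate
  // a new one sized for new_sizes. Command buffers recorded against the old
  // resource must be re-encoded.
  void reallocate(const std::vector<int64_t>& new_sizes) {
    sizes_ = new_sizes;
    storage_numel_ = numel(new_sizes);
    // discard_resource(); allocate_resource(storage_numel_);  // hypothetical
  }

  // Method 2: virtual resize. Only the size metadata changes; shaders treat
  // the (possibly larger) existing resource as if it had new_sizes, so
  // previously recorded command buffers remain valid.
  bool virtual_resize(const std::vector<int64_t>& new_sizes) {
    if (numel(new_sizes) > storage_numel_) {
      return false;  // existing storage too small; caller must reallocate
    }
    sizes_ = new_sizes;
    return true;
  }

 private:
  static int64_t numel(const std::vector<int64_t>& sizes) {
    return std::accumulate(
        sizes.begin(), sizes.end(), int64_t{1}, std::multiplies<int64_t>());
  }

  std::vector<int64_t> sizes_;
  int64_t storage_numel_ = 0;
};
```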

Test Plan: Check CI. Tests have also been added to `vulkan_compute_api_test` that exercise both methods of tensor resizing.
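
A rough shape of such a test, using GoogleTest and the illustrative `VulkanTensor` sketch above (the actual `vulkan_compute_api_test` cases exercise the real `vTensor` and Vulkan resources):

```
#include <gtest/gtest.h>

// Assumes the illustrative VulkanTensor sketch above; sizes are arbitrary.
TEST(VulkanComputeAPITest, VirtualResizeWithinExistingStorage) {
  VulkanTensor t;
  t.reallocate({4, 4, 4});  // physical allocation sized for 64 elements

  // Shrinking fits in the existing storage: metadata-only update succeeds.
  EXPECT_TRUE(t.virtual_resize({2, 4, 4}));

  // Growing past the allocation fails: a physical reallocation is required.
  EXPECT_FALSE(t.virtual_resize({8, 4, 4}));
}
```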

Differential Revision: D54728401

pytorch-bot bot commented Mar 10, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/121598

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 02b4aa9 with merge base 86a2d67:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the release notes: vulkan release notes category label Mar 10, 2024
@facebook-github-bot (Contributor)
This pull request was exported from Phabricator. Differential Revision: D54728401

SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Mar 10, 2024
…pytorch#2340)

SS-JIA added a commit to SS-JIA/pytorch that referenced this pull request Mar 10, 2024
…resizing (pytorch#121598)


SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Mar 10, 2024
…pytorch#2340)

SS-JIA added a commit to SS-JIA/pytorch that referenced this pull request Mar 10, 2024
…resizing (pytorch#121598)


SS-JIA added a commit to SS-JIA/pytorch that referenced this pull request Mar 10, 2024
…resizing (pytorch#121598)

SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Mar 10, 2024
…pytorch#2340)


SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Mar 10, 2024
…pytorch#2340)

SS-JIA added a commit to SS-JIA/pytorch that referenced this pull request Mar 10, 2024
…resizing (pytorch#121598)


SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Mar 11, 2024
…pytorch#2340)

SS-JIA added a commit to SS-JIA/pytorch that referenced this pull request Mar 11, 2024
…resizing (pytorch#121598)

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Mar 11, 2024
SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Mar 11, 2024
…pytorch#2340)

SS-JIA added a commit to SS-JIA/pytorch that referenced this pull request Mar 11, 2024
…resizing (pytorch#121598)

SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Mar 11, 2024
…pytorch#2340)

@pytorchmergebot (Collaborator)

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f only as a last resort, and instead consider -i/--ignore-current to continue the merge while ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@@ -473,6 +512,65 @@ void vTensor::bind_allocation(const api::MemoryAllocation& allocation) {
}
}

void vTensor::update_size_metadata(const std::vector<int64_t>& new_sizes) {
Contributor

I see what you mean by updating metadata now

SS-JIA added a commit to pytorch/executorch that referenced this pull request Mar 12, 2024
…hapes

## Context

pytorch/pytorch#121598 introduces the ability to support dynamic shapes through tensor metadata updates.

The idea is fairly simple. Instead of shaders accepting a UBO with size data for all arguments:

```
layout(set = 0, binding = 2) uniform PRECISION restrict Block {
  ivec4 output_sizes;
  ivec4 other_sizes;
  float alpha;
};
```

Shaders will accept separate UBOs for each piece of tensor metadata:

```
layout(set = 0, binding = 3) uniform PRECISION restrict OutSizes {
  ivec4 data;
} out_sizes;

layout(set = 0, binding = 4) uniform PRECISION restrict InSizes {
  ivec4 data;
} in_sizes;

layout(set = 0, binding = 5) uniform PRECISION restrict OtherSizes {
  ivec4 data;
} other_sizes;

layout(set = 0, binding = 6) uniform PRECISION restrict Alpha {
  float data;
} alpha;
```

Each UBO is owned and maintained by the corresponding `vTensor` instance. To support resizing a graph input, each tensor in the graph only needs to update its metadata UBOs via the `tensor.virtual_resize(new_sizes)` call. Shader dispatches in subsequent command buffer submissions will then see the updated metadata and execute as if the tensor had the updated sizes.
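
As a rough illustration, such a metadata update can amount to overwriting the small sizes UBO in place. The names below (`UniformBuffer`, `copy_from`, and the `ivec4` size packing) are hypothetical stand-ins, since the actual update flows through the delegate's `api::UniformParamsBuffer` machinery.

```
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical host-visible uniform buffer; a stand-in for
// api::UniformParamsBuffer. Real code maps VkDeviceMemory, not a vector.
struct UniformBuffer {
  std::vector<uint8_t> mapped = std::vector<uint8_t>(4 * sizeof(int32_t));

  void copy_from(const void* src, size_t nbytes) {
    std::memcpy(mapped.data(), src, nbytes);  // flush/unmap omitted
  }
};

// A virtual resize refreshes the sizes UBO so that already-recorded
// dispatches read the new extents on their next submission.
inline void update_sizes_ubo(UniformBuffer& sizes_ubo,
                             const std::array<int32_t, 4>& new_sizes) {
  sizes_ubo.copy_from(new_sizes.data(), sizeof(new_sizes));
}
```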

This changeset introduces a new shader library for the Vulkan graph runtime that enables dynamic shapes through this technique, instead of relying on the shader library from PyTorch Vulkan.


## Considerations

Technically, the UBO update technique can be applied to the shaders from PyTorch Vulkan as well. If that's the case, why introduce a new shader library for the graph runtime?

The primary motivation is code quality.

First, having `vTensor` supply UBOs for its own metadata greatly reduces the need for operator-specific, ad-hoc `Params` structs that organize arguments to write into an `api::UniformParamsBuffer`.

Constructing an `ExecuteNode` for binary operators is now

```
  graph.execute_nodes().emplace_back(new ExecuteNode(
      graph,
      api::shader_registry().get_shader_info(kernel_name.str()),
      global_size,
      local_size,
      {{out, api::MemoryAccessType::WRITE},
       {{arg1, arg2}, api::MemoryAccessType::READ}},
      {t_out.gpu_sizes_ubo(),
       t_in1.gpu_sizes_ubo(),
       t_in2.gpu_sizes_ubo(),
       graph.create_params_buffer(alpha_val)}));
```

instead of

```
ArithmeticParams block{
      get_size_as_ivec4(t_out),
      get_size_as_ivec4(t_in1),
      get_size_as_ivec4(t_in2),
      alpha_val,
  };
  api::UniformParamsBuffer params(graph.context(), block);

  graph.execute_nodes().emplace_back(new ExecuteNode(
      graph,
      shader,
      global_size,
      local_size,
      {{out, api::MemoryAccessType::WRITE},
       {{arg1, arg2}, api::MemoryAccessType::READ}},
      std::move(params)));
```

Another consideration is that pytorch/pytorch#115948, which landed fairly recently, enables much more expressive shader templates through the use of Python code blocks in the GLSL template. This makes it easy to express shader variants for different data types, packing structures, etc. Introducing a new shader library provides the opportunity to rewrite the shaders from PyTorch Vulkan in a more generic and extensible way.

Differential Revision: [D54754545](https://our.internmc.facebook.com/intern/diff/D54754545/)

[ghstack-poisoned]
SS-JIA added a commit to pytorch/executorch that referenced this pull request Mar 12, 2024
…rary that enables dynamic shapes"

SS-JIA added a commit to pytorch/executorch that referenced this pull request Mar 12, 2024
…s dynamic shapes"

SS-JIA added a commit to pytorch/executorch that referenced this pull request Mar 13, 2024
…rary that enables dynamic shapes"

SS-JIA added a commit to pytorch/executorch that referenced this pull request Mar 13, 2024
…s dynamic shapes"

SS-JIA added a commit to pytorch/executorch that referenced this pull request Mar 13, 2024
…hapes

Pull Request resolved: #2366

ghstack-source-id: 218421178
@exported-using-ghexport

Differential Revision: [D54754545](https://our.internmc.facebook.com/intern/diff/D54754545/)
SS-JIA added a commit to pytorch/executorch that referenced this pull request Mar 13, 2024
…rary that enables dynamic shapes"

SS-JIA added a commit to pytorch/executorch that referenced this pull request Mar 13, 2024
…s dynamic shapes"

facebook-github-bot pushed a commit to pytorch/executorch that referenced this pull request Mar 13, 2024
…2366)

Summary:
Pull Request resolved: #2366

ghstack-source-id: 218429132
exported-using-ghexport

bypass-github-export-checks
bypass-github-pytorch-ci-checks
bypass-github-executorch-ci-checks

Reviewed By: jorgep31415

Differential Revision: D54754545

fbshipit-source-id: 7e2074699b61f8358358775a8b790d34dcb99ee6
SS-JIA added a commit to pytorch/executorch that referenced this pull request Mar 14, 2024

## Context

Updating the PyTorch nightly pin to 03/13 to capture dynamic shape support in ATen Vulkan (pytorch/pytorch#121598).

The previous update was to 03/12 (#2370), so this should not be a risky change. Also, according to the PyTorch nightly HUD (https://hud.pytorch.org/hud/pytorch/pytorch/nightly), the 03/13 nightly is all green, while the 03/12 nightly had some failures.

Differential Revision: [D54870270](https://our.internmc.facebook.com/intern/diff/D54870270)

[ghstack-poisoned]
SS-JIA added a commit to pytorch/executorch that referenced this pull request Mar 14, 2024

SS-JIA added a commit to pytorch/executorch that referenced this pull request Mar 20, 2024

SS-JIA added a commit to pytorch/executorch that referenced this pull request Mar 20, 2024

SS-JIA (Contributor, Author) commented Mar 25, 2024

@pytorchbot cherry-pick --onto ONTO release/2.3

pytorch-bot bot commented Mar 25, 2024

❌ 🤖 pytorchbot command failed:

@pytorchbot cherry-pick: error: the following arguments are required: -c/--classification

usage: @pytorchbot cherry-pick --onto ONTO [--fixes FIXES] -c
                               {regression,critical,fixnewfeature,docs,release}

Try @pytorchbot --help for more info.

SS-JIA (Contributor, Author) commented Mar 25, 2024

@pytorchbot cherry-pick --onto release/2.3

pytorch-bot bot commented Mar 25, 2024

❌ 🤖 pytorchbot command failed:

@pytorchbot cherry-pick: error: the following arguments are required: -c/--classification

usage: @pytorchbot cherry-pick --onto ONTO [--fixes FIXES] -c
                               {regression,critical,fixnewfeature,docs,release}

Try @pytorchbot --help for more info.

SS-JIA (Contributor, Author) commented Mar 25, 2024

@pytorchbot cherry-pick --onto ONTO -c fixnewfeature

SS-JIA (Contributor, Author) commented Mar 25, 2024

@pytorchbot cherry-pick --onto release/2.3 -c fixnewfeature

pytorchbot pushed a commit that referenced this pull request Mar 25, 2024
…resizing (#121598)


Differential Revision: D54728401

Pull Request resolved: #121598
Approved by: https://github.com/jorgep31415

(cherry picked from commit cc51e10)
@pytorchbot (Collaborator)

Cherry picking #121598

The cherry pick PR is at #122634 and it is recommended to link a fixnewfeature cherry pick PR with an issue


Labels: ciflow/trunk, fb-exported, Merged, release notes: vulkan