
@SS-JIA (Contributor) commented Nov 13, 2024

Stack from ghstack (oldest at bottom):

## Context

The final logit linear layer in the Transformer architecture has extremely large tensors, since both the output and weight tensors have a dimension equal to the vocabulary size, which can be very large. Because of this, image textures cannot be used to execute the op when running with the Vulkan delegate, so an implementation using buffer-based tensors must be used.

Unfortunately, Vulkan does not have a performant implementation of linear with buffer-based tensors at the moment. As a result, if this final linear layer is executed in Vulkan, model inference is extremely slow.
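For a rough sense of scale, the snippet below computes the tensor sizes involved using Llama-2-7B-like dimensions; these numbers are assumptions for illustration and do not come from the PR.

```python
# Back-of-envelope illustration with assumed Llama-2-7B-like dimensions; the PR
# does not state these numbers, they only show why the logit linear is an outlier.
vocab_size = 32_000   # assumed vocabulary size
hidden_dim = 4_096    # assumed model dimension

weight_numel = vocab_size * hidden_dim      # 131,072,000 elements in the weight
weight_mib_fp32 = weight_numel * 4 / 2**20  # ~500 MiB as fp32

# Vulkan only guarantees maxStorageBufferRange >= 2**27 bytes (128 MiB), so a
# tensor this large is not even guaranteed to fit in a single storage buffer,
# let alone an image texture with its per-dimension extent limits.
print(f"{weight_numel:,} elements, {weight_mib_fp32:.0f} MiB in fp32")
```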

## Changes

The diff below this one in the stack prevents the final logit linear layer from being delegated to Vulkan by enforcing a GPU buffer limit.

This diff modifies the export_llama script to apply the XNNPACK partitioner after the Vulkan partitioner when lowering to Vulkan, ensuring that any remaining ops are accelerated with XNNPACK. When 4-bit quantization is used, an additional quantizer is also applied after the Vulkan quantizer (which skips the final logit linear layer) so that the final logit linear can be quantized as well.
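As a rough sketch of the partitioner ordering described above (not the actual export_llama code), the flow looks roughly like the following; the import paths, constructor arguments, and the toy module are assumptions based on the general ExecuTorch export API and may differ from the real script.

```python
# Sketch only: a toy module standing in for the Llama transformer, exported and
# lowered with the Vulkan partitioner first and the XNNPACK partitioner second.
# Import paths and signatures are assumptions and may not match the real script.
import torch
from torch import nn
from torch.export import export
from executorch.exir import to_edge
from executorch.backends.vulkan.partitioner.vulkan_partitioner import VulkanPartitioner
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner


class TinyLogitHead(nn.Module):
    """Stand-in model ending in a vocab-sized linear projection."""

    def __init__(self, hidden: int = 64, vocab: int = 32_000):
        super().__init__()
        self.proj = nn.Linear(hidden, vocab, bias=False)

    def forward(self, x):
        return self.proj(x)


model = TinyLogitHead().eval()
example_inputs = (torch.randn(1, 64),)

edge = to_edge(export(model, example_inputs))
# Order matters: Vulkan claims the ops it supports within its texture/buffer
# limits first; XNNPACK is applied afterwards and picks up whatever was left
# un-delegated (the final logit linear, in the real model).
edge = edge.to_backend(VulkanPartitioner())
edge = edge.to_backend(XnnpackPartitioner())
program = edge.to_executorch()
```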

## Long Term

This is a temporary measure while an optimized buffer-based linear implementation is developed. Once the Vulkan implementation achieves parity with XNNPACK, the final logit linear will be delegated to Vulkan once more.

Differential Revision: D65899827

@pytorch-bot added the `ciflow/periodic` and `module: vulkan` labels on Nov 13, 2024
@pytorch-bot commented Nov 13, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6830

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 1 New Failure

As of commit 5dcfa4f with merge base ecdc007:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Nov 13, 2024
SS-JIA added a commit that referenced this pull request Nov 13, 2024

ghstack-source-id: 253391362
Pull Request resolved: #6830
@facebook-github-bot (Contributor) commented: This pull request was exported from Phabricator. Differential Revision: D65899827

SS-JIA added a commit that referenced this pull request Nov 13, 2024
Pull Request resolved: #6830

ghstack-source-id: 253425727
@facebook-github-bot (Contributor) commented: This pull request was exported from Phabricator. Differential Revision: D65899827

SS-JIA added a commit that referenced this pull request Nov 14, 2024
Pull Request resolved: #6830

ghstack-source-id: 253568942

Differential Revision: [D65899827](https://our.internmc.facebook.com/intern/diff/D65899827/)
@facebook-github-bot (Contributor) commented: This pull request was exported from Phabricator. Differential Revision: D65899827

@facebook-github-bot merged commit b6da372 into gh/SS-JIA/147/base on Nov 14, 2024
78 of 81 checks passed
@facebook-github-bot deleted the gh/SS-JIA/147/head branch on November 14, 2024 at 18:16
SS-JIA added a commit that referenced this pull request Nov 14, 2024
…an (#6857)

* [ET-VK] Enforce GPU buffer limit when partitioning

Pull Request resolved: #6829

## Context

In Vulkan, there is a limit on the number of elements a GPU buffer can have. If a GPU buffer exceeds this limit, the API will either produce an error or exhibit undefined behaviour.

## Changes

Along with `texture_limits`, introduce a configurable `buffer_limit` entry in the partitioner configuration; a hypothetical sketch of such a size check is shown below.
ghstack-source-id: 253568943

Differential Revision: [D65899828](https://our.internmc.facebook.com/intern/diff/D65899828/)
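To make the `buffer_limit` idea concrete, here is a purely hypothetical sketch of the kind of size check a partitioner could perform when deciding whether to delegate a node; the function name, the shape, and the limit value are illustrative and are not the actual Vulkan partitioner code.

```python
# Hypothetical illustration only; not the real Vulkan partitioner implementation.
# A node whose tensors exceed the configured buffer limit is left un-delegated,
# which is how a vocab-sized logit weight ends up excluded from the Vulkan graph.
import math
from typing import Sequence

def exceeds_buffer_limit(shape: Sequence[int], buffer_limit: int) -> bool:
    """Return True if a tensor of `shape` has more elements than `buffer_limit`."""
    return math.prod(shape) > buffer_limit

# Assumed example: a Llama-2-7B-like logit weight vs. an illustrative 100M-element limit.
print(exceeds_buffer_limit((32_000, 4_096), buffer_limit=100_000_000))  # True
```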

* [ET-VK][Llama] Apply XNNPACK partitioner as well when lowering to Vulkan

Pull Request resolved: #6830

ghstack-source-id: 253568942

Differential Revision: [D65899827](https://our.internmc.facebook.com/intern/diff/D65899827/)

---------

Co-authored-by: Stephen Jia <ssjia@meta.com>