[ET-VK] Insert prepack nodes for constant primary inputs of prepacking ops (#17850)
meta-codesync[bot] merged 1 commit into `gh/SS-JIA/459/base`
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17850
Note: Links to docs will display an error until the docs builds have been completed. ❌ 1 New Failure, 4 Unrelated Failures as of commit 2d65b78 with merge base 1a75394.
NEW FAILURE: The following job has failed.
BROKEN TRUNK: The following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
ghstack-source-id: 347411473
Pull Request resolved: #17850
Merged commit 8c4d7a9 into `gh/SS-JIA/459/base`.
Stack from ghstack (oldest at bottom):
The `insert_prepack_nodes` pass was skipping prepack node insertion for all
constant tensor args of ops with `supports_prepacking=True`. However, these ops
only handle prepacking for weight/bias tensors internally; the primary input
tensor is still expected to be a GPU tensor. If the primary input happens to be
a constant tensor (serialized as `TensorRef`), the op throws an exception at
runtime.

Fix this by detecting the primary input index directly in `insert_prepack_nodes`.
Most prepacking ops have the primary input at arg 0, but embedding uses arg 1
since its signature is `embedding(weight, indices, ...)`. The pass now checks
whether a constant tensor is used as the primary input of a prepacking op, and
if so, still inserts a prepack node for it.
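The decision logic described above might be sketched roughly as follows. This is a minimal illustration, not the actual ExecuTorch source: the node structures are simplified stand-ins (not `torch.fx` nodes), and the names `PRIMARY_INPUT_IDX`, `primary_input_idx`, and `needs_prepack_node` are hypothetical helpers invented for this sketch.

```python
# Hedged sketch of the primary-input check in an insert_prepack_nodes-style
# pass. Simplified stand-in structures, not the real ExecuTorch pass.

# Most prepacking ops take the primary (activation) input at arg 0, but
# embedding takes it at arg 1: embedding(weight, indices, ...).
PRIMARY_INPUT_IDX = {
    "aten.embedding.default": 1,
}


def primary_input_idx(op_name: str) -> int:
    """Return the argument index of the op's primary (non-weight) input."""
    return PRIMARY_INPUT_IDX.get(op_name, 0)


def needs_prepack_node(
    op_name: str,
    arg_idx: int,
    is_constant: bool,
    supports_prepacking: bool,
) -> bool:
    """Decide whether a prepack node must be inserted for this argument.

    Ops with supports_prepacking=True prepack weight/bias tensors
    internally, so constant weight args can be skipped -- but a constant
    tensor feeding the primary input still needs an explicit prepack node,
    because the op expects a GPU tensor at that position.
    """
    if not is_constant:
        # Non-constant tensors are produced at runtime as GPU tensors.
        return False
    if not supports_prepacking:
        # Op cannot prepack anything itself; every constant needs a node.
        return True
    # Op prepacks weights internally; only a constant primary input
    # still requires an inserted prepack node.
    return arg_idx == primary_input_idx(op_name)
```

Under this sketch, a constant weight feeding a prepacking op is left to the op itself, while a constant tensor at the primary-input position (arg 0 for most ops, arg 1 for embedding) gets a prepack node inserted.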
Differential Revision: D95217949