Add quantized_batch_matmul to OPERATORS list #18200

meta-codesync[bot] merged 1 commit into pytorch:main
Conversation
Summary:
`op_quantized_batch_matmul.cpp` has a complete CMSIS-NN implementation of `cortex_m::native::quantized_batch_matmul_out` (using `arm_batch_matmul_s8`), but `"quantized_batch_matmul"` was missing from the `OPERATORS` list in `targets.bzl`.
Without it, `define_operator_target("quantized_batch_matmul")` was never called, so no `:op_quantized_batch_matmul` Buck target was created. `cortex_m_operators` exports `all_op_targets` using `[":op_{}".format(op) for op in OPERATORS]`, so the operator was never included as a dependency.
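The dependency gap can be sketched in plain Python: the `[":op_{}".format(op) for op in OPERATORS]` comprehension is quoted from the description above, but the other list entries here are placeholders, not the real contents of `OPERATORS`.

```python
# Placeholder operator names; only "quantized_batch_matmul" is from the PR.
OPERATORS_BEFORE = ["quantized_linear"]
OPERATORS_AFTER = ["quantized_linear", "quantized_batch_matmul"]

def expand(operators):
    # The expansion cortex_m_operators uses to build its dep list.
    return [":op_{}".format(op) for op in operators]

# Before the fix, the Buck target for the op never appears in the deps;
# after adding the entry, it does.
print(":op_quantized_batch_matmul" in expand(OPERATORS_BEFORE))  # False
print(":op_quantized_batch_matmul" in expand(OPERATORS_AFTER))   # True
```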
`executorch_generated_lib` reads `operators.yaml` (which correctly declares `cortex_m::quantized_batch_matmul.out`) and generates `RegisterCodegenUnboxedKernelsEverything.cpp`, which calls `cortex_m::native::quantized_batch_matmul_out`. Since the compiled implementation was not linked in, ARM builds failed:

```
undefined reference to cortex_m::native::quantized_batch_matmul_out
```
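For context, the declaration that codegen consumes lives in `operators.yaml`. A rough illustration of such an entry is below; the field layout is a guess at the usual ExecuTorch kernel-registration format, not copied from the actual file, and only the operator and kernel names are confirmed by this PR.

```yaml
# Illustrative operators.yaml entry (field layout is an assumption):
- func: cortex_m::quantized_batch_matmul.out   # declared name, per this PR
  kernels:
    - arg_meta: null
      kernel_name: cortex_m::native::quantized_batch_matmul_out
```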
Fix: add `"quantized_batch_matmul"` to `OPERATORS` in both `targets.bzl` mirrors (xplat and fbcode). This creates the missing `:op_quantized_batch_matmul` Buck target, includes it in `cortex_m_operators`, and allows `cortex_m_generated_lib` and `cortex_m_no_except_generated_lib` to link successfully.
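The change itself is a one-line list edit in each `targets.bzl` mirror. A sketch is below; the surrounding entries are placeholders, since the real list contents are not shown in this PR.

```bzl
# targets.bzl (mirrored in xplat and fbcode) -- other entries elided.
OPERATORS = [
    # ... existing operators ...
    "quantized_batch_matmul",  # added: define_operator_target() now emits
                               # :op_quantized_batch_matmul for linking
]
```

With the entry present, `define_operator_target("quantized_batch_matmul")` runs and the resulting target flows into `cortex_m_operators` via `all_op_targets`.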
Differential Revision: D96686225
See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18200. As of commit 0f35f7d with merge base 76df414: ❌ 1 cancelled job. (This comment was generated by Dr. CI.)
Pull request overview
Adds the missing Cortex-M build target for the `quantized_batch_matmul` operator so the CMSIS-NN implementation is compiled and linked, preventing undefined-reference link failures in ARM embedded builds.
Changes:
- Add `quantized_batch_matmul` to the Cortex-M `OPERATORS` list so `:op_quantized_batch_matmul` is generated and included in `:cortex_m_operators`.