Add BatchedMatMulInst for backends that don't want to lower #3752


@@ -2597,6 +2597,11 @@ void BoundInterpreterFunction::fwdMatMulInst(const glow::MatMulInst *I) {
                          I->getLHS()->getElementType(), I);
}

void BoundInterpreterFunction::fwdBatchMatMulInst(
    const glow::BatchMatMulInst *I) {
  DCHECK(!"Found BatchMatMulInst but BatchMatMul is lowered on Interpreter");
}

// Row-wise quantized FC
@@ -240,6 +240,16 @@ int main(int argc, char **argv) {
.autoVerify(VerifyKind::SameElementType, {"Dest", "LHS", "RHS"});

  /// Performs batch matrix multiplication between the LHS and RHS. The operands
  /// are a stack of two dimensional matrices. Example: (N, A, Z) x (N, Z, B) =>
  /// (N, A, B).
  BB.newInstr("BatchMatMul")
      .addOperand("Dest", OperandKind::Out)
      .addOperand("LHS", OperandKind::In)
      .addOperand("RHS", OperandKind::In)
      .autoVerify(VerifyKind::SameElementType, {"Dest", "LHS", "RHS"});
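The semantics described in the doc comment above can be sketched as a standalone C++ function. This is a hypothetical illustration, not Glow's API: it computes the (N, A, Z) x (N, Z, B) => (N, A, B) batched product as N independent 2-D matmuls over row-major buffers, which is also what lowering produces on backends that opt out of `BatchMatMulInst`.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch (not Glow's API): batched matrix multiplication over
// row-major buffers. lhs is (N, A, Z), rhs is (N, Z, B), result is (N, A, B).
std::vector<float> batchMatMul(const std::vector<float> &lhs,
                               const std::vector<float> &rhs, std::size_t N,
                               std::size_t A, std::size_t Z, std::size_t B) {
  std::vector<float> out(N * A * B, 0.0f);
  for (std::size_t n = 0; n < N; ++n) {   // one 2-D matmul per batch slice
    for (std::size_t a = 0; a < A; ++a) {
      for (std::size_t b = 0; b < B; ++b) {
        float sum = 0.0f;
        for (std::size_t z = 0; z < Z; ++z) {
          sum += lhs[n * A * Z + a * Z + z] * rhs[n * Z * B + z * B + b];
        }
        out[n * A * B + a * B + b] = sum;
      }
    }
  }
  return out;
}
```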

/// Accumulates all of the layers in the batch along the Axis dimension and
/// produces a tensor that has the same dimensions as the input tensor without
/// the Axis dimension.
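The reduction described above can also be sketched in isolation. This is a hypothetical helper, not Glow's implementation, specialized to Axis = 0: summing a (N, A, B) input along the batch dimension yields an (A, B) result.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch (not Glow's API): batched reduce-add for Axis = 0.
// in is a row-major (N, A, B) buffer; the result is (A, B), i.e. the input
// shape with the Axis dimension removed.
std::vector<float> batchedReduceAddAxis0(const std::vector<float> &in,
                                         std::size_t N, std::size_t A,
                                         std::size_t B) {
  std::vector<float> out(A * B, 0.0f);
  for (std::size_t n = 0; n < N; ++n) {     // accumulate each batch slice
    for (std::size_t i = 0; i < A * B; ++i) {
      out[i] += in[n * A * B + i];
    }
  }
  return out;
}
```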