
Add BatchedMatMulInst for backends that don't want to lower (#3752)

We generally lower BatchMatMulNode to a series of Reshape + MatMul + Concat, so we don't have an instruction for it. Adding one for a backend that doesn't benefit from that lowering.
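The lowering this refers to is semantically a per-batch 2D matrix multiply whose results are concatenated along the batch dimension. A minimal C++ reference of what that computes, assuming row-major flat storage (the function name and layout here are illustrative, not Glow's API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Reference semantics of a batched matmul: for each batch n,
// Dest[n] = LHS[n] * RHS[n], with LHS shaped (N, A, Z), RHS shaped (N, Z, B),
// and Dest shaped (N, A, B), all stored row-major in flat vectors.
// Each batch slice is an independent 2D MatMul; writing the slices
// back-to-back into `dest` is the Concat step of the lowering.
std::vector<float> batchMatMul(const std::vector<float> &lhs,
                               const std::vector<float> &rhs,
                               size_t N, size_t A, size_t Z, size_t B) {
  std::vector<float> dest(N * A * B, 0.0f);
  for (size_t n = 0; n < N; ++n) {
    const float *l = lhs.data() + n * A * Z; // 2D slice of LHS
    const float *r = rhs.data() + n * Z * B; // 2D slice of RHS
    float *d = dest.data() + n * A * B;      // 2D slice of Dest
    for (size_t a = 0; a < A; ++a)
      for (size_t b = 0; b < B; ++b)
        for (size_t z = 0; z < Z; ++z)
          d[a * B + b] += l[a * Z + z] * r[z * B + b];
  }
  return dest;
}
```

A backend that has a fused batched-matmul kernel can skip this decomposition entirely, which is the point of adding a dedicated instruction.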

Documentation: comments
Pull Request resolved: #3752

Test Plan: ninja check (no backend uses this instruction)

Differential Revision: D18384284

Pulled By: nickgg

fbshipit-source-id: 1b79516a9fbb6032c9ab49c67fa79197051e6bb5
nickgg authored and facebook-github-bot committed Nov 8, 2019
1 parent 529962e commit 297c8c924ee38e73a26de203a759b550ff039cbc
Showing with 15 additions and 0 deletions.
  1. +5 −0 lib/Backends/Interpreter/InterpreterNodes.cpp
  2. +10 −0 tools/ClassGen/InstrGen.cpp
lib/Backends/Interpreter/InterpreterNodes.cpp

```diff
@@ -2597,6 +2597,11 @@ void BoundInterpreterFunction::fwdMatMulInst(const glow::MatMulInst *I) {
       I->getLHS()->getElementType(), I);
 }
 
+void BoundInterpreterFunction::fwdBatchMatMulInst(
+    const glow::BatchMatMulInst *I) {
+  DCHECK(!"Found BatchMatMulInst but BatchMatMul is lowered on Interpreter");
+}
+
 // Row-wise quantized FC
```
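The interpreter stub above uses a common C++ idiom: `DCHECK(!"message")` fails unconditionally, because a string literal decays to a non-null pointer and negating it always yields `false`, leaving the message visible in the failed-check output. A small sketch of the mechanism (a plain function standing in for the macro):

```cpp
#include <cassert>

// Why DCHECK(!"message") always fires: the literal is a non-null
// const char*, so !literal is always false, and the check's condition
// text carries the human-readable reason for the failure.
inline bool alwaysFalse() {
  return !"Found BatchMatMulInst but BatchMatMul is lowered on Interpreter";
}
```

Since the Interpreter lowers BatchMatMul before IR generation, this handler should be unreachable in practice; the check only guards against a lowering regression.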
tools/ClassGen/InstrGen.cpp

```diff
@@ -240,6 +240,16 @@ int main(int argc, char **argv) {
       .autoVerify(VerifyKind::SameElementType, {"Dest", "LHS", "RHS"});
 
+  /// Performs batch matrix multiplication between the LHS and RHS. The operands
+  /// are a stack of two dimensional matrices. Example: (N, A, Z) x (N, Z, B) =>
+  /// (N, A, B).
+  BB.newInstr("BatchMatMul")
+      .addOperand("Dest", OperandKind::Out)
+      .addOperand("LHS", OperandKind::In)
+      .addOperand("RHS", OperandKind::In)
+      .autoVerify(VerifyKind::SameElementType, {"Dest", "LHS", "RHS"});
+
   /// Accumulates all of the layers in the batch along the Axis dimension and
   /// produce a tensor that has the same dimensions as the input tensor without
   /// the Axis dimension.
```
