Tensor Compiler is a tool that converts models in the ONNX format into an intermediate representation (IR) using an MLIR dialect.
Currently, the project supports core neural network operations including:
- Conv
- Relu
- Add
- Mul
- MatMul
- Gemm
- Transpose
Building the project requires:

- LLVM + MLIR version 20 or newer
  - either built from source, or
  - installed system-wide
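If you do not already have a suitable LLVM/MLIR build, one common way to produce one from source is shown below. This is a sketch of the standard MLIR build configuration, not a requirement of this project; the generator and build type are assumptions you can change.

```shell
# Clone LLVM and build it with the MLIR project enabled (Release build).
git clone https://github.com/llvm/llvm-project.git
cd llvm-project
cmake -S llvm -B build -G Ninja \
  -DLLVM_ENABLE_PROJECTS=mlir \
  -DCMAKE_BUILD_TYPE=Release
cmake --build build
```

The resulting `build/lib/cmake/mlir` and `build/lib/cmake/llvm` directories are the paths the compiler's CMake configuration expects.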
Set up a Python virtual environment and install dependencies with Conan:

```shell
python -m venv venv
source venv/bin/activate
pip install conan
conan install . --build=missing -s build_type=Release
```

Then configure the build with CMake:

```shell
cmake --preset conan-release \
  -DMLIR_DIR=/path/to/llvm-project/build/lib/cmake/mlir \
  -DLLVM_DIR=/path/to/llvm-project/build/lib/cmake/llvm
```

❗ Important: You must provide paths to your built LLVM and MLIR installations:

- `MLIR_DIR`: path to the `mlir` directory inside the LLVM build
- `LLVM_DIR`: path to the `llvm` directory inside the LLVM build
Build the project and run the tests:

```shell
cmake --build --preset conan-release
ctest --test-dir build/Release
```

After building, you can pass an ONNX model to the compiler to generate MLIR IR:

```shell
./build/Release/tensor-compiler <model.onnx>
```

By default, the computation graph is printed to stdout.
Additional output modes are available:

- Export the computation graph to `graph.dot`.
- Print the generated high-level MLIR dialect.
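The exported `graph.dot` file is in the Graphviz DOT format, so it can be rendered to an image with the standard Graphviz tools (assuming `dot` is installed; Graphviz is not a dependency of this project):

```shell
# Render the exported computation graph to a PNG image.
dot -Tpng graph.dot -o graph.png
```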
Developed and maintained by ask0later. Feel free to open issues and contribute.