ncnn is a high-performance neural network inference framework optimized for the mobile platform
A retargetable MLIR-based machine learning compiler and runtime toolkit.
Concrete: a TFHE compiler that converts Python programs into their FHE equivalents
BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads.
MegCC is a deep learning model compiler with an ultra-lightweight runtime that is efficient and easy to port
VAST is an experimental compiler pipeline designed for program analysis of C and C++. It provides a tower of IRs as MLIR dialects to choose the best fit representations for a program analysis or further program abstraction.
Highly optimized inference engine for Binarized Neural Networks
LLVM (Low Level Virtual Machine) Guide. Learn all about the compiler infrastructure, which is designed for compile-time, link-time, run-time, and "idle-time" optimization of programs. Originally implemented for C/C++, it now has a variety of front-ends, including Java, Python, and others.
C++ compiler for heterogeneous quantum-classical computing built on Clang and XACC
An experimental Racket implementation using LLVM/MLIR
An MLIR-based dynamic circuit compiler for real-time control systems, supporting OpenQASM 3
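Many of the projects above build on MLIR's dialect system, which lets a compiler mix operations from several intermediate representations in one module. As a rough, project-agnostic illustration, a trivial function in MLIR's textual IR, combining the upstream `func` and `arith` dialects, looks like:

```mlir
// A function adding two 32-bit floats, written in MLIR's generic
// textual form using the upstream func and arith dialects.
func.func @add(%a: f32, %b: f32) -> f32 {
  %sum = arith.addf %a, %b : f32
  return %sum : f32
}
```

Tools listed here typically define their own dialects (for quantum circuits, encrypted arithmetic, tensor programs, and so on) and lower them progressively toward hardware-specific IR.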