ncnn is a high-performance neural network inference framework optimized for the mobile platform
A retargetable MLIR-based machine learning compiler and runtime toolkit.
BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads.
Concrete: a TFHE compiler that converts Python programs into their FHE equivalents
MegCC is a deep learning model compiler with an ultra-lightweight, efficient, and easily portable runtime
C++ compiler for heterogeneous quantum-classical computing built on Clang and XACC
Highly optimized inference engine for Binarized Neural Networks
VAST is an experimental compiler pipeline designed for program analysis of C and C++. It provides a tower of IRs as MLIR dialects to choose the best fit representations for a program analysis or further program abstraction.
An MLIR-based dynamic circuit compiler for real-time control systems, supporting OpenQASM 3
LLVM (Low Level Virtual Machine) Guide. Learn all about the compiler infrastructure, which is designed for compile-time, link-time, run-time, and "idle-time" optimization of programs. Originally implemented for C/C++, it now has front-ends for a variety of languages, including Java and Python.
A library that interfaces compilers with ML models for ML-enabled compiler optimizations