- Hong Kong
- https://zhengpeirong.github.io/
edge AI
Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digital signal processors).
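Deployment to such low-power targets typically hinges on integer quantization of model weights. As a framework-agnostic sketch (the helper names below are hypothetical, not any of these libraries' APIs), symmetric int8 post-training quantization maps floats to 8-bit integers through a single scale factor:

```python
def quantize_int8(weights):
    """Map float weights onto int8 using one symmetric scale.
    Hypothetical helper, for illustration only."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)     # int8 values plus one float scale
approx = dequantize_int8(q, scale)    # within scale/2 of the originals
```

The memory saving (one byte per weight plus a shared scale) and the switch to integer arithmetic are what make inference feasible on microcontrollers and DSPs without floating-point units.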
ncnn is a high-performance neural network inference framework optimized for the mobile platform
YoloV8 for a bare Raspberry Pi 4 or 5
On-device AI across mobile, embedded and edge for PyTorch
GPT4All: Chat with Local LLMs on Any Device
MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
Python library for GPGPU programming on Raspberry Pi 4
C++ library for programming the VideoCore GPU on all Raspberry Pis.
ShaderNN is a lightweight deep learning inference framework optimized for Convolutional Neural Networks on mobile platforms.
FyuseNet is an OpenGL(ES)-based library for running neural network inference on GPUs that support OpenGL or OpenGL ES, which includes most desktop and mobile GPUs on the market.
Collective communications library with various primitives for multi-machine training.
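As a rough illustration of the primitive such a library provides, here is a pure-Python simulation of the classic ring all-reduce; in-process lists stand in for networked workers, and the function name is hypothetical rather than any library's real API:

```python
def ring_allreduce(workers):
    """Simulate sum-allreduce over n workers with the ring algorithm:
    n-1 reduce-scatter steps, then n-1 all-gather steps.  Returns the
    final per-worker buffers (all equal to the elementwise sum).
    Illustrative sketch; assumes buffer length divisible by n."""
    n = len(workers)
    chunk = len(workers[0]) // n
    data = [list(w) for w in workers]
    span = lambda i: range(i * chunk, (i + 1) * chunk)

    # Reduce-scatter: worker r forwards chunk (r - s) mod n to its
    # right neighbour, which accumulates it into its own buffer.
    for s in range(n - 1):
        snap = [list(d) for d in data]   # snapshot = simultaneous sends
        for r in range(n):
            dst, idx = (r + 1) % n, (r - s) % n
            for j in span(idx):
                data[dst][j] += snap[r][j]

    # All-gather: worker r forwards its fully summed chunk
    # (r + 1 - s) mod n; after n-1 steps every worker has every chunk.
    for s in range(n - 1):
        snap = [list(d) for d in data]
        for r in range(n):
            dst, idx = (r + 1) % n, (r + 1 - s) % n
            for j in span(idx):
                data[dst][j] = snap[r][j]
    return data

workers = [[1, 2, 3, 4, 5, 6],
           [10, 20, 30, 40, 50, 60],
           [100, 200, 300, 400, 500, 600]]
synced = ring_allreduce(workers)   # every buffer holds the elementwise sum
```

The appeal of the ring layout is that each worker sends only its neighbour a fixed-size chunk per step, so bandwidth per link stays constant as the number of machines grows.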
How to run Stable Diffusion on Raspberry Pi 4
Lightweight inference library for ONNX files, written in C++. It can run SDXL on a Raspberry Pi Zero 2, but also Mistral 7B on desktops and servers.
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4
A pure C++ cross-platform LLM acceleration library with Python bindings; ChatGLM-6B-class models reach 10,000+ tokens/s on a single GPU; supports GLM, Llama, and MOSS base models and runs smoothly on mobile devices.
MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. In ICML 2024.
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM models, execute structured function calls, and get structured…
LlamaIndex is a data framework for your LLM applications
Raspberry Pi 4 Buster 64-bit OS with deep learning examples
Raspberry Pi 4 Bullseye 64-bit OS with deep learning examples
Fast inference engine for Transformer models
Distributed LLM inference for mobile, desktop and server.
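The core scheduling idea behind splitting one model across several machines can be sketched in a few lines (a hypothetical helper, not the actual API of any project above): give each device a contiguous slice of transformer layers, then stream activations from device to device in order.

```python
def partition_layers(num_layers, num_devices):
    """Split layer indices 0..num_layers-1 into contiguous, nearly
    equal slices, giving earlier devices the remainder layers.
    Hypothetical helper, for illustration only."""
    base, extra = divmod(num_layers, num_devices)
    plan, start = [], 0
    for d in range(num_devices):
        count = base + (1 if d < extra else 0)
        plan.append(list(range(start, start + count)))
        start += count
    return plan

# e.g. a 32-layer model spread over 3 devices -> slices of 11, 11, 10
plan = partition_layers(32, 3)
```

In practice the split is usually weighted by each device's memory and compute rather than being strictly equal, but the pipeline principle (contiguous slices, activations handed to the next hop) is the same.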
Run your own AI cluster at home with everyday devices 📱💻 🖥️⌚