mnn-llm

Example Projects

  • cli: build from the command line; for Android, see android_build.sh
  • web: build from the command line; the web resource directory must be specified at runtime
  • android: open and build with Android Studio; APK download: Download
  • ios: open and build with Xcode; 🚀🚀🚀 the example code was generated entirely by ChatGPT 🚀🚀🚀
  • python: pure Python inference code based on pymnn
  • other: adds text embedding, vector search, text parsing, and memory/knowledge-base capabilities 🔥

Supported Models

To export LLM models to ONNX or MNN format, use llm-export.

The following models are currently supported:

| model | onnx-fp32 | mnn-quant |
|---|---|---|
| chatglm-6b | Download | Download |
| chatglm2-6b | Download | Download |
| chatglm3-6b | Download | Download |
| codegeex2-6b | Download | Download |
| Qwen-7B-Chat | Download | Download |
| Baichuan2-7B-Chat | Download | Download |
| Llama-2-7b-chat | Download | Download |
| Llama-3-8B-Instruct | Download | Download |
| internlm-chat-7b | Download | Download |
| Yi-6B-Chat | Download | Download |
| deepseek-llm-7b-chat | Download | Download |
| Qwen-1.8B-Chat | Download | Download |
| phi-2 | Download | Download |
| bge-large-zh | Download | Download |
| TinyLlama-1.1B-Chat | Download | Download |
| Qwen1.5-0.5B-Chat | Download | Download |
| Qwen1.5-1.8B-Chat | Download | Download |
| Qwen1.5-4B-Chat | Download | Download |
| Qwen1.5-7B-Chat | Download | Download |

Other versions:

  • Qwen-1_8B-Chat-int8: Download

Speed

CPU speed with 4 threads: prefill / decode tok/s

| model | android (f16/32) | macos (f32) | linux (f32) | windows (f32) |
|---|---|---|---|---|
| qwen-1.8b-int4 | 100.21 / 22.22 | 84.85 / 19.93 | 151.00 / 35.89 | 117.30 / 33.40 |
| qwen-1.8b-int8 | 99.95 / 16.94 | 67.70 / 13.45 | 118.51 / 24.90 | 97.19 / 22.76 |
| chatglm-6b-int4 | 17.37 / 6.69 | 19.79 / 6.10 | 34.05 / 10.82 | 30.73 / 10.63 |
| chatglm2-6b-int4 | 26.41 / 8.21 | 20.78 / 6.70 | 36.99 / 11.50 | 33.25 / 11.47 |
| chatglm3-6b-int4 | 26.24 / 7.94 | 19.67 / 6.67 | 37.33 / 11.92 | 33.61 / 11.21 |
| qwen-7b-int4 | 14.60 / 6.96 | 19.79 / 6.06 | 33.55 / 10.20 | 29.05 / 9.62 |
| baichuan2-7b-int4 | 13.87 / 6.08 | 17.21 / 6.10 | 30.11 / 10.87 | 26.31 / 9.84 |
| llama-2-7b-int4 | 17.98 / 5.17 | 19.72 / 5.06 | 34.47 / 9.29 | 28.66 / 8.90 |

The test systems and devices are as follows:

| os | device | CPU | Memory |
|---|---|---|---|
| android | Xiaomi 12 | Snapdragon 8 Gen 1 | 8 GB |
| macos | MacBook Pro 2019 | Intel(R) Core(TM) i7-9750H | 16 GB |
| linux | PC | Intel(R) Core(TM) i7-13700K | 32 GB |
| windows | PC | Intel(R) Core(TM) i7-13700K | 32 GB |

Download int4 Models

```bash
# <model> like `chatglm-6b`
# linux/macos
./script/download_model.sh <model>

# windows
./script/download_model.ps1 <model>
```

Build

Current build status:

| System | Build Status |
|---|---|
| Linux | Build Status |
| macOS | Build Status |
| Windows | Build Status |
| Android | Build Status |

Local Build

```bash
# linux
./script/build.sh

# macos
./script/build.sh

# windows msvc
./script/build.ps1

# android
./script/android_build.sh
```

Build macros:

  • BUILD_FOR_ANDROID: build for Android devices
  • USING_VISUAL_MODEL: enable models with multimodal capabilities; depends on libMNNOpenCV
  • DUMP_PROFILE_INFO: dump performance data to the console after each conversation

By default the CPU backend is used and multimodal capabilities are disabled. To use another backend or capability, add the corresponding MNN build macros to the MNN build script:

  • cuda: -DMNN_CUDA=ON
  • opencl: -DMNN_OPENCL=ON
  • opencv: -DMNN_BUILD_OPENCV=ON -DMNN_IMGCODECS=ON
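As a sketch of how these macros are passed, assuming MNN is configured with CMake from a separate build directory (the paths here are illustrative, not part of this repo's scripts):

```shell
# Configure an MNN build with the OpenCL backend plus OpenCV image codecs,
# then compile. Adjust ../MNN to wherever the MNN source tree is checked out.
cmake ../MNN \
    -DMNN_OPENCL=ON \
    -DMNN_BUILD_OPENCV=ON \
    -DMNN_IMGCODECS=ON
cmake --build . -j4
```

The same pattern applies to `-DMNN_CUDA=ON`; enable only the backends your target hardware actually supports, since each adds compile time and binary size.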

Run

```bash
# linux/macos
./cli_demo qwen-1.8b-int4 # cli demo
./web_demo qwen-1.8b-int4 ../web # web ui demo

# windows
.\Debug\cli_demo.exe qwen-1.8b-int4
.\Debug\web_demo.exe qwen-1.8b-int4 ../web

# android
adb push libs/*.so build/libllm.so build/cli_demo /data/local/tmp
adb push model_dir /data/local/tmp
adb shell "cd /data/local/tmp && export LD_LIBRARY_PATH=. && ./cli_demo qwen-1.8b-int4"
```

Reference

About

An LLM deployment project based on MNN.
