Run large language models like Qwen and LLaMA locally on Android for offline, private, real-time question answering and chat, powered by ONNX Runtime.
Updated Jun 22, 2025 · Kotlin
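A minimal sketch of what on-device inference looks like with ONNX Runtime's Java/Kotlin API (the `com.microsoft.onnxruntime:onnxruntime-android` dependency), assuming a decoder-only model exported with a single `input_ids` input. The model path and the tensor names `input_ids` and `logits` are assumptions; they depend on how the model was exported.

```kotlin
import ai.onnxruntime.OnnxTensor
import ai.onnxruntime.OrtEnvironment
import java.nio.LongBuffer

fun main() {
    val env = OrtEnvironment.getEnvironment()
    // Assumed path: on Android the .onnx file is typically copied out of app assets first.
    env.createSession("/data/local/tmp/model.onnx").use { session ->
        // Hypothetical token ids; a real app runs the model's own tokenizer on the prompt.
        val promptIds = longArrayOf(1, 15043, 2787)
        val shape = longArrayOf(1, promptIds.size.toLong()) // [batch, sequence]
        OnnxTensor.createTensor(env, LongBuffer.wrap(promptIds), shape).use { inputIds ->
            session.run(mapOf("input_ids" to inputIds)).use { results ->
                @Suppress("UNCHECKED_CAST")
                val logits = results.get(0).value as Array<Array<FloatArray>> // [batch][seq][vocab]
                // Greedy decoding: take the argmax over the vocabulary at the last position.
                val lastStep = logits[0].last()
                val nextToken = lastStep.indices.maxByOrNull { lastStep[it] }
                println("Greedy next-token id: $nextToken")
            }
        }
    }
}
```

A real chat loop would also feed `attention_mask`, position ids, and past key/value tensors back into the session at each step; this sketch runs a single forward pass to pick one greedy token.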
A comprehensive toolkit that streamlines offline LLM inference across a range of models and libraries.