Native C++ daemon that owns model residency and inference for the Open Intelligence Runtime. Runs as a system service under its own SELinux domain (u:r:oird:s0), registers oir_worker with servicemanager, and serves requests from OIRService over AIDL.
- Loads LLM / VLM / ONNX / whisper models on demand.
- Shares loaded models across every app that asks for the same capability — one copy in memory, N callers.
- Pools inference contexts per model (`ContextPool` for llama-backed models, `WhisperPool` for whisper) with priority-aware wait queues.
- Accounts KV-cache memory in the resident budget so eviction decisions are accurate.
- Dispatches across backends (llama.cpp, whisper.cpp, ONNX Runtime, libmtmd) based on capability.
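The per-model context pooling with priority-aware waiting described above can be sketched roughly as follows. This is a minimal illustration, not oird's actual `ContextPool` API: the `Context` struct, ticket scheme, and method names are hypothetical stand-ins.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <vector>

// Illustrative stand-in for one loaded model's inference context.
struct Context { int id; };

// Fixed-size pool of contexts for a single model. Callers block until
// a context is free; among waiters, higher priority is admitted first,
// with arrival order (ticket) breaking ties.
class ContextPool {
public:
    explicit ContextPool(int n) {
        for (int i = 0; i < n; ++i) free_.push_back({i});
    }

    Context acquire(int priority) {
        std::unique_lock<std::mutex> lk(m_);
        const long ticket = next_ticket_++;
        waiters_.push({priority, ticket});
        cv_.wait(lk, [&] {
            // Proceed only when a context is free AND we are the
            // top waiter (highest priority, earliest ticket on ties).
            return !free_.empty() && waiters_.top().ticket == ticket;
        });
        waiters_.pop();
        Context c = free_.back();
        free_.pop_back();
        cv_.notify_all();  // let the next eligible waiter re-check
        return c;
    }

    void release(Context c) {
        std::lock_guard<std::mutex> lk(m_);
        free_.push_back(c);
        cv_.notify_all();
    }

private:
    struct Waiter {
        int priority;
        long ticket;
        bool operator<(const Waiter& o) const {
            // std::priority_queue is a max-heap: higher priority wins,
            // then the lower (earlier) ticket.
            if (priority != o.priority) return priority < o.priority;
            return ticket > o.ticket;
        }
    };
    std::mutex m_;
    std::condition_variable cv_;
    std::vector<Context> free_;
    std::priority_queue<Waiter> waiters_;
    long next_ticket_ = 0;
};
```

The point of the ticket is fairness within a priority level: two waiters at the same priority are served in arrival order rather than racing on wakeup.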
Installs as /system_ext/bin/oird via prebuilt_etc. Lives at system/oird/ in the AOSP tree.
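The capability-based backend dispatch from the feature list amounts to a lookup from a requested capability to the backend that serves it. A minimal sketch, with the caveat that the capability strings and the `Backend` enum here are hypothetical, not oird's actual identifiers:

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical backend tags; oird's real dispatch covers llama.cpp,
// whisper.cpp, ONNX Runtime, and libmtmd.
enum class Backend { Llama, Whisper, Onnx, Mtmd };

// Map a requested capability name to a backend. The capability names
// below are illustrative placeholders.
Backend dispatch(const std::string& capability) {
    static const std::map<std::string, Backend> table = {
        {"text-generation", Backend::Llama},
        {"speech-to-text",  Backend::Whisper},
        {"classification",  Backend::Onnx},
        {"vision-language", Backend::Mtmd},
    };
    auto it = table.find(capability);
    if (it == table.end())
        throw std::invalid_argument("unknown capability: " + capability);
    return it->second;
}
```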
oird is built as part of a JibarOS tree:

```shell
cd ~/aaosp
source build/envsetup.sh
lunch aosp_cf_x86_64_phone-trunk_staging-userdebug
m -j8 oird
```

Dependencies:
- AIDL interfaces from `oir-framework-addons`
- `platform_external_llamacpp`
- `platform_external_whispercpp`
- `platform_external_onnxruntime`
See github.com/Jibar-OS/JibarOS for the architecture and capability model.