oird — OIR native inference daemon

A native C++ daemon that owns model residency and inference for the Open Intelligence Runtime. It runs as a system service under its own SELinux domain (u:r:oird:s0), registers the oir_worker service with servicemanager, and serves requests from OIRService over AIDL.
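To make the binder surface concrete, a minimal AIDL sketch of what such an interface could look like is shown below. Only the oir_worker service name comes from this README; the package, interface name, and both methods are illustrative assumptions, not oird's actual API.

```aidl
// Hypothetical sketch only -- oird's real interface may differ entirely.
package org.jibaros.oir;

interface IOirWorker {
    // Ensure a model backing the given capability is resident; returns a handle.
    int loadModel(String capability);

    // Run one inference request against a previously loaded model.
    byte[] infer(int modelHandle, in byte[] request);
}
```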

What it does

  • Loads LLM / VLM / ONNX / whisper models on demand.
  • Shares loaded models across every app that asks for the same capability — one copy in memory, N callers.
  • Pools inference contexts per model (ContextPool for llama-backed, WhisperPool for whisper) with priority-aware wait queues.
  • Accounts KV-cache memory in the resident budget so eviction decisions are accurate.
  • Dispatches across backends (llama.cpp, whisper.cpp, ONNX Runtime, libmtmd) based on capability.
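The context pooling with priority-aware wait queues described above can be sketched as a condition variable guarding a free list, where blocked callers are ordered by priority. This is an illustrative C++ sketch under stated assumptions, not oird's actual ContextPool: the Context type, the ticket-based FIFO tie-break, and all names here are hypothetical.

```cpp
#include <condition_variable>
#include <cstdint>
#include <memory>
#include <mutex>
#include <queue>
#include <vector>

struct Context { int id; };  // stand-in for a real llama inference context

class ContextPool {
public:
    explicit ContextPool(int n) {
        for (int i = 0; i < n; ++i)
            free_.push_back(std::make_unique<Context>(Context{i}));
    }

    // Blocks until a context is free; among waiters, higher priority goes first,
    // with a monotonically increasing ticket breaking ties FIFO.
    std::unique_ptr<Context> acquire(int priority) {
        std::unique_lock<std::mutex> lk(mu_);
        const uint64_t ticket = next_ticket_++;
        waiters_.push(Waiter{priority, ticket});
        cv_.wait(lk, [&] {
            return !free_.empty() && waiters_.top().ticket == ticket;
        });
        waiters_.pop();
        auto ctx = std::move(free_.back());
        free_.pop_back();
        cv_.notify_all();  // let the next-highest-priority waiter re-check
        return ctx;
    }

    void release(std::unique_ptr<Context> ctx) {
        std::lock_guard<std::mutex> lk(mu_);
        free_.push_back(std::move(ctx));
        cv_.notify_all();
    }

private:
    struct Waiter {
        int priority;
        uint64_t ticket;
        bool operator<(const Waiter& o) const {
            // priority_queue is a max-heap: higher priority wins,
            // then lower (earlier) ticket.
            if (priority != o.priority) return priority < o.priority;
            return ticket > o.ticket;
        }
    };

    std::mutex mu_;
    std::condition_variable cv_;
    std::vector<std::unique_ptr<Context>> free_;
    std::priority_queue<Waiter> waiters_;
    uint64_t next_ticket_ = 0;
};
```

A real pool would additionally account each context's KV-cache allocation against the resident-memory budget, so evicting a model frees exactly what the accounting says it will.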

Tree location

Lives at system/oird/ in the AOSP tree; installs as /system_ext/bin/oird via prebuilt_etc.

Building

oird is built as part of a JibarOS tree:

cd ~/aaosp
source build/envsetup.sh
lunch aosp_cf_x86_64_phone-trunk_staging-userdebug
m -j8 oird

Dependencies

See also

See github.com/Jibar-OS/JibarOS for the architecture and the capability model.
