Open-source tools for running AI on hardware you own. Local inference, content pipelines, RAG, voice transcription, autonomous agents. No cloud APIs, no subscriptions, no data leaving your network.
Everything here runs on a single Framework Desktop with 128 GB of unified memory, Ubuntu 24.04, and rootless Podman containers: Qwen3.5-35B at 29 tokens per second, the 122B variant for deep reasoning, Whisper for speech-to-text, and n8n for orchestration. All local, all open source.
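As a rough sketch of what one of those Podman compose services can look like (image tag, ports, paths, and device mappings below are illustrative placeholders, not the actual stmna-desk files):

```yaml
# Hypothetical excerpt: llama-swap proxying llama.cpp model servers.
# All values are placeholders; see the stmna-desk repo for the real files.
services:
  llama-swap:
    image: ghcr.io/mostlygeek/llama-swap:latest   # model-swapping proxy
    ports:
      - "8080:8080"                               # OpenAI-compatible endpoint
    volumes:
      - ./llama-swap.yaml:/app/config.yaml:ro     # per-model llama.cpp commands
      - ./models:/models:ro                       # GGUF weights stay on local disk
    devices:
      - /dev/kfd                                  # AMD GPU compute (Strix Halo)
      - /dev/dri
```

Launched rootless with something like `podman-compose up -d`, which keeps every service under an unprivileged user account.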
| Repo | What it does |
|---|---|
| stmna-desk | Reference architecture for a sovereign AI workstation. Hardware guide (Framework Desktop, DGX Spark, Mac Studio comparisons), inference stack documentation, compose files for 12 services, Ubuntu 24.04 install guide for AMD Strix Halo. |
| stmna-signal | Content intelligence pipeline. Send a YouTube video, URL, ebook, or voice note via Signal messenger, webhook, or Nextcloud. Get a structured summary with optional translation and TTS audio. Four n8n workflows, fully open source. |
| stmna-voice | Self-improving speech-to-text. Push-to-talk on Linux and Android with Whisper transcription, LLM-powered correction, and automatic training pair collection for fine-tuning. OpenAI-compatible API. |
| stmna-voice-mobile | Android push-to-talk app for STMNA_Voice. Tap, speak, get transcribed text at your cursor in any app. Server-side transcription on your own hardware. |
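The self-improvement loop in stmna-voice (raw Whisper output paired with its LLM-corrected version, collected for later fine-tuning) can be sketched roughly as follows; the function names and JSONL layout here are illustrative, not the repo's actual schema:

```python
import json
from pathlib import Path

def save_training_pair(raw: str, corrected: str, path: Path) -> None:
    """Append one (raw transcript, corrected text) pair as a JSONL record."""
    record = {"input": raw.strip(), "target": corrected.strip()}
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

def load_training_pairs(path: Path) -> list[dict]:
    """Read back all collected pairs for a fine-tuning run."""
    with path.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

After each push-to-talk session, the raw Whisper transcript and the LLM correction would pass through `save_training_pair`; a later fine-tuning job consumes the accumulated JSONL file.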
STMNA_Desk is the platform. STMNA_Signal and STMNA_Voice are applications that run on it. Each repo is self-contained with its own documentation, compose files, and setup guides.
llama.cpp · whisper.cpp · llama-swap · n8n · Open WebUI · Podman · Framework
The most useful contributions right now are benchmark data on AMD Strix Halo hardware, documentation improvements, and new service compose files. See CONTRIBUTING.md for org-wide guidelines.
Built by STMNA_ · Engineered resilience. Sovereign by design.
