Offline TTS using Kokoro-82M via kokoro-onnx. Apache-2.0 model, ~340 MB, real-time on CPU, plays straight to system audio. Designed to be importable as a Python library, drivable as a CLI, or poked over a unix socket for ~13ms speech requests from shell scripts.
🔊 Hear it: docs/demo.wav — five seconds of two voices speaking the tagline (af_heart then bf_emma).
From PyPI — recommended for most users:
pipx install stackvox # `stackvox` CLI on PATH
# or
pip install stackvox # use as a library

If you want the low-latency bash helper (stackvox-say) for shell scripts and hooks, install it on PATH after installing the package:
stackvox install-helper # copies bash helper to ~/.local/bin
# use --prefix DIR to install elsewhere

This is a one-time step. The helper is shipped as package data rather than as an automatic install script — explicit beats magical, and it keeps stackvox compatible with modern build backends. Skip it if you only ever use the Python `stackvox say` client.
From git, if you want an unreleased commit:
pipx install git+https://github.com/StackOneHQ/stackvox.git
# upgrade later with: pipx install --force git+https://github.com/StackOneHQ/stackvox.git

Dev install from a clone:
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"

Model + voice files auto-download to ~/.cache/stackvox/ on first use. Override the location with STACKVOX_CACHE_DIR.
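The cache-directory lookup described above can be sketched in a few lines; the function name is illustrative, not stackvox's internal API:

```python
import os
from pathlib import Path


def cache_dir(env=os.environ) -> Path:
    """Resolve the stackvox cache directory: STACKVOX_CACHE_DIR wins,
    otherwise fall back to ~/.cache/stackvox."""
    override = env.get("STACKVOX_CACHE_DIR")
    if override:
        return Path(override)
    return Path.home() / ".cache" / "stackvox"
```

Handy in restricted environments where you pre-seed the model files into a custom directory and point STACKVOX_CACHE_DIR at it.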
stackvox "Hello world" # synthesize and play in-process
stackvox speak "Hi" --voice bf_emma # same, explicit subcommand
stackvox speak "save" --out a.wav # write wav instead of playing
stackvox welcome # multilingual welcome (6 languages)
stackvox voices # list all voice ids
echo "from a pipe" | stackvox # piped stdin works for speak/say
stackvox speak --file message.txt # read a whole file

Bash completion:
eval "$(stackvox completion bash)" # current shell
# or persist:
stackvox completion bash > ~/.stackvox-completion.bash
echo 'source ~/.stackvox-completion.bash' >> ~/.bashrc

stackvox reads per-user defaults from a TOML file, so you don't need to repeat --voice bf_emma --speed 1.1 on every invocation. Set values in ~/.config/stackvox/config.toml (or $XDG_CONFIG_HOME/stackvox/config.toml, or wherever STACKVOX_CONFIG points):
[defaults]
voice = "bf_emma"
speed = 1.1
lang = "en-gb"

CLI flags always win over config values, and config values always win over built-in defaults. A missing file is fine — built-ins apply. A malformed file logs a warning and is otherwise ignored.
Keeps the model resident so each subsequent call is instant:
stackvox serve # foreground; run with `nohup stackvox serve &` to background
stackvox status # is the daemon up? also shows version + any pending PyPI update
stackvox say "Hello" # send text to the daemon (fails if not running)
stackvox stop # graceful shutdown

stackvox checks PyPI for newer versions, but only at two moments: when you run stackvox status, and at daemon startup. The script-heavy paths (say, speak, stackvox-say, hooks, CI) never make a network call. To see notices on every invocation, set STACKVOX_UPDATE_NOTICE=1. To disable the check entirely, set STACKVOX_NO_UPDATE_CHECK=1. The check is auto-skipped when common CI env vars (CI, GITHUB_ACTIONS, etc.) are set, so build logs stay clean.
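The gating logic reads as a short predicate. This is a sketch of the documented behaviour, not stackvox's actual code; the precedence between CI detection and STACKVOX_UPDATE_NOTICE is an assumption, and only a subset of the CI variables is shown:

```python
import os

# Illustrative subset of the CI environment variables that suppress the check.
CI_VARS = ("CI", "GITHUB_ACTIONS")


def should_check_updates(command: str, env=os.environ) -> bool:
    """True when this invocation is allowed to hit PyPI for an update check."""
    if env.get("STACKVOX_NO_UPDATE_CHECK") == "1":
        return False                      # disabled entirely
    if any(env.get(v) for v in CI_VARS):
        return False                      # keep CI build logs clean
    if env.get("STACKVOX_UPDATE_NOTICE") == "1":
        return True                       # opted into notices everywhere
    return command in ("status", "serve")  # the two default moments
```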
When you want minimum latency from shell scripts (hooks, CI steps, etc.), skip the Python client and use the bash helper — it talks directly to the daemon's unix socket via nc:
stackvox-say "back to you in 5"
stackvox-say --voice bf_emma --speed 1.1 "hello"
stackvox-say --fallback-say "text" # shell out to macOS `say` if daemon is down

Exit codes: 0 ok, 2 daemon unreachable (unless --fallback-say was given).
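From a larger program you can interpret those exit codes yourself. A hedged sketch — the wrapper function is hypothetical, only the 0/2 contract comes from the helper's documentation:

```python
import subprocess


def say_with_fallback(text: str, run=None) -> bool:
    """Invoke stackvox-say; return True if spoken, False if the daemon is
    unreachable (exit code 2), and raise on any other failure."""
    if run is None:
        run = lambda argv: subprocess.run(argv).returncode
    code = run(["stackvox-say", text])
    if code == 0:
        return True
    if code == 2:
        # Daemon down: e.g. start it with `stackvox serve`, or stay silent.
        return False
    raise RuntimeError(f"stackvox-say failed with exit code {code}")
```

The injectable `run` parameter is just to make the sketch testable without the helper installed.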
from stackvox import Stackvox, speak, synthesize
# One-shot — model loads on first call, reused for subsequent calls.
speak("Hello world")
# Reusable engine.
tts = Stackvox(voice="af_bella")
tts.speak("First line")
tts.speak("Faster", speed=1.2)
# Non-blocking playback.
tts.speak("async", blocking=False)
tts.stop()
# Raw samples for custom processing.
samples, sr = tts.synthesize("give me the array")
# Gapless multi-line playback with concurrent synthesis.
tts.speak_sequence([
{"text": "Hello", "voice": "af_heart", "lang": "en-us"},
{"text": "Bonjour", "voice": "ff_siwis", "lang": "fr-fr"},
])

from stackvox import daemon
ok, resp = daemon.say("queue this via the running daemon")
if daemon.is_running():
    daemon.stop()

Kokoro ships voices across several languages. The voice prefix encodes gender + language:
| Prefix | Language | Example |
|---|---|---|
| af_*, am_* | American English | af_heart, am_michael |
| bf_*, bm_* | British English | bf_emma, bm_fable |
| ff_* | French | ff_siwis |
| hf_*, hm_* | Hindi | hf_alpha, hm_omega |
| if_*, im_* | Italian | if_sara, im_nicola |
| pf_*, pm_* | Portuguese | pf_dora, pm_alex |
| ef_*, em_* | Spanish | ef_dora, em_alex |
| jf_*, jm_* | Japanese | jf_alpha |
| zf_*, zm_* | Mandarin Chinese | zf_xiaoxiao |
Run stackvox voices for the authoritative list.
┌────────────────────┐ unix socket ┌─────────────────────────┐
│ stackvox-say │ ───────────────────────▶ │ stackvox daemon │
│ (bash, ~13ms) │ JSON line per request │ (Python, long-lived) │
└────────────────────┘ │ │
┌────────────────────┐ ~500ms (Py startup) │ preloaded Kokoro ONNX │
│ stackvox say │ ───────────────────────▶ │ worker thread playback │
│ (Python client) │ │ → sounddevice → audio │
└────────────────────┘ └─────────────────────────┘
┌────────────────────┐
│ stackvox speak │ loads model in-process, plays, exits
│ (one-shot CLI) │
└────────────────────┘
Socket lives at ~/.cache/stackvox/daemon.sock (override with STACKVOX_SOCKET for the client, STACKVOX_CACHE_DIR for the daemon). Protocol is one line of JSON per connection: {"text":"...", "voice":"...", "speed":1.0, "lang":"en-us"}; reply is ok / busy / err: <msg>. Plain text (no JSON) is accepted as a fallback and treated as {"text": line}.
Queue depth is 2 — rapid-fire requests beyond that get busy rather than piling up.
Before each utterance the daemon resets PortAudio so it picks up the current system default output device. Swap from speakers to Bluetooth headphones mid-session and the next say follows you — no daemon restart needed. The refresh costs ~10–50ms per play, which is invisible next to synthesis time.
- Python 3.10+
- macOS or Linux
- nc (BSD netcat — default on macOS, netcat-openbsd on Linux) for the bash helper
stackvox doesn't open any network port. The daemon binds a unix socket under ~/.cache/stackvox/ (default file-mode 0600, i.e. user-only per the OS defaults for files in $HOME). Any process running as the same local user can send text to the daemon — there's no per-message authentication on the socket itself. That's the trust boundary: stackvox assumes anything running as your UID is allowed to speak on your behalf.
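If you want to verify that trust boundary on a given machine, checking the socket's group/other permission bits is enough. A small sketch (the helper name is hypothetical):

```python
import os
import stat


def socket_is_private(path: str) -> bool:
    """True if the file at `path` is accessible only by its owner
    (no group/other bits set), matching the documented 0600 default."""
    mode = os.stat(path).st_mode
    return not (mode & (stat.S_IRWXG | stat.S_IRWXO))
```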
If you're exposing stackvox through a different surface (HTTP server, shared system service, container), authentication and rate-limiting are your responsibility at that layer.
Model weights (kokoro-v1.0.onnx, ~340 MB) and voices are downloaded from the kokoro-onnx GitHub release assets on first use and cached under ~/.cache/stackvox/. If you operate in a restricted environment, pre-seed that directory offline.
Security issues themselves should not be filed as public GitHub issues — see SECURITY.md for the disclosure process.
stackvox is a fairly opinionated narrow slice of the TTS space. Here's where it sits next to the obvious neighbours:
| Tool | Offline? | Quality | Latency (typical) | License | Best for |
|---|---|---|---|---|---|
| stackvox (Kokoro-82M) | ✅ | High (24kHz, 50+ voices, 9 languages) | ~300ms in-process · ~13ms via daemon helper | Apache 2.0 | Local apps, shell hooks, anything that wants natural voice without the cloud |
| macOS say | ✅ | OK | ~50ms | macOS only | macOS-only scripts, "good enough" voice |
| espeak-ng | ✅ | Robotic | ~10ms | GPL-3.0 | Accessibility, screen readers, embedded |
| Piper | ✅ | High | ~100ms | MIT | Similar use-case to stackvox; ONNX-based, more voices in some languages |
| Coqui TTS | ✅ | Very high (research models) | seconds | MPL-2.0 | Research, fine-tuning, voice cloning |
| OpenAI / ElevenLabs / etc. | ❌ | Highest | network-bound | Proprietary | Production apps that can pay per-call and accept network dependency |
Where stackvox tries to be different from Piper specifically: a resident daemon + bash helper path that gets you sub-15ms speech requests from shell scripts (CI hooks, terminal notifications, status announcements) without paying Python's startup cost on every call. That's basically the point — voice quality alone wouldn't be enough to switch off Piper, but the IPC story makes a difference for shell-driven workflows.
Pick stackvox if you want good voices, fully offline, with a fast shell-friendly API.
stackvox itself is licensed under the Apache License, Version 2.0 — see LICENSE. Third-party attributions are collected in NOTICE; the summary below is informational.
Model. Speech is generated by Kokoro-82M (© hexgrad, Apache 2.0). The ONNX-converted weights (kokoro-v1.0.onnx) and voice pack (voices-v1.0.bin) are downloaded from the kokoro-onnx release assets on first use and cached under ~/.cache/stackvox/. stackvox does not modify or redistribute them.
Runtime dependencies. kokoro-onnx (MIT, © thewh1teagle), onnxruntime (MIT, © Microsoft), sounddevice (MIT, © Matthias Geier), soundfile (BSD-3, © Bastian Bechtold), numpy (BSD-3).
GPL note. kokoro-onnx pulls in phonemizer-fork as a transitive runtime dependency; it is licensed under GPL-3.0. stackvox does not bundle, modify, or statically link it — pip installs it alongside stackvox and the two communicate through phonemizer's published Python API at runtime. If you redistribute a combined work (e.g. a frozen binary, container image, or vendored wheel set) that includes phonemizer-fork, review GPL-3.0 obligations for that distribution.