
Heard

Your coding agent has a voice now.

Heard speaks your coding agent's outputs so you can get up, walk around, and still know what's going on.

Counterpart to input tools like Wispr Flow. Wispr handles what you say to your agent; Heard handles what it says back.


heard.dev  ·  Releases  ·  Issues


See and hear it run

Demo video coming soon. Run heard demo after install for a 15-second preview of the current voice + persona.

Install

Bring an ElevenLabs key for the best voices, or use Kokoro — free, local, no key.

Have your coding tool install it (recommended)

Paste this into Claude Code, Codex, or any AI coding tool:

```
Install Heard so you narrate your responses to me. Run: curl -L https://github.com/heardlabs/heard/releases/latest/download/Heard.zip -o /tmp/heard.zip && unzip -o /tmp/heard.zip -d /Applications && xattr -dr com.apple.quarantine /Applications/Heard.app && open /Applications/Heard.app — a window will pop up and I'll fill it in.
```

Manual

Download the latest Heard.zip, drag Heard.app into /Applications, then right-click → Open the first time (unsigned build). Onboarding walks you through voice / API key / hotkey / which agents to wire up.

CLI

```sh
pipx install heard           # or: uv tool install heard
heard install claude-code    # wires the hook
heard demo                   # ~15s preview
```

What it does

  • Narrates tool calls + intermediate prose, not just final summaries. "Looking at your test failures." "Three failures in auth.py."
  • Multi-agent aware. Run 3+ agents in parallel; Heard auto-routes narration in distinct voices so you can actually follow the work.
  • Four personas, fork-your-own. Aria (calm, direct), Friday (bright, breezy), Jarvis (Marvel butler), Atlas (cinematic narrator).
  • Works with any coding CLI. First-class adapters for Claude Code + Codex; heard run <command> wraps anything else.

Personas

| Persona | Vibe |
|---|---|
| aria | Calm, direct, never editorial. Senior pair-programmer. |
| friday | Bright, breezy, three steps ahead. Sprinkles "boss". |
| jarvis | Marvel JARVIS-coded butler. Dry wit, "Sir" only on summaries. |
| atlas | Cinematic narrator. Greek tragedy applied to compile cycles. |

▶ Hear the voices in action on heard.dev →

Fork your own — drop a Markdown file with frontmatter into ~/Library/Application Support/heard/personas/.
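For example, a minimal custom persona file might look like the one below. The frontmatter keys (`name`, `voice`) are illustrative guesses, not a documented schema — check the bundled personas in `heard/personas/*.md` for the real fields:

```markdown
---
name: sage
voice: kokoro-default   # hypothetical voice identifier
---

Measured and minimal. Reports only what happened, in as few words
as possible. Never editorializes, never speculates.
```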

Running multiple agents

Heard auto-detects 2+ concurrent sessions and shifts mode.

| Mode | When | What you hear |
|---|---|---|
| Solo | One session active | Full narration in your persona's voice. |
| Swarm | 2+ sessions concurrent | Most-recent session narrates; background agents pierce only on failures and questions, with a periodic digest. Each gets a distinct voice. |
| Pinned | You picked one in the menu | Focus locks to that session. |

Tuning

Four verbosity profiles (quiet · brief · normal · verbose), tunable globally or per-repo via .heard.yaml. Failures and wait-state questions always pierce.
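A per-repo override could be as small as this — the key names below are assumptions about the schema, not documented fields:

```yaml
# .heard.yaml at the repo root (key names are illustrative)
verbosity: quiet        # one of: quiet, brief, normal, verbose
persona: aria
```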

Everyday commands:

```sh
heard preset jarvis              # switch persona
heard config set verbosity brief # quieter
heard silence                    # or tap Right Option to interrupt
heard doctor                     # end-to-end self-test
```

FAQ

Does my agent's output leave my machine?

Depends on which backends you opt into.

  • Voice synth. ElevenLabs sends spoken text over HTTPS. Kokoro runs fully locally — nothing leaves the machine.
  • Persona rewrites. If you provide an Anthropic key, Heard sends short candidate lines to Claude Haiku 4.5 to rewrite in-character. Skip the key and Heard uses neutral templates locally.
  • Telemetry. None. No analytics, no crash reporters, no phone-home.

What does ElevenLabs actually cost in practice?

The free tier covers light daily use. A heavy day of pair-programming (2-3 hrs of narration) typically costs a few cents to a few tens of cents on the paid Starter plan. Switch to Kokoro (free, local) for a hard ceiling of zero.

Will narration slow down my agent?

No. Hooks fire-and-forget over a Unix socket; the daemon synthesises and plays asynchronously. Your agent never blocks on Heard.
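The fire-and-forget pattern can be sketched in a few lines. This is not Heard's implementation — the socket path and event format here are invented — but it shows the mechanism: a Unix datagram send returns immediately, and if the daemon is down the event is simply dropped instead of stalling the agent:

```python
import os
import socket
import tempfile
import threading
import time

# Hypothetical socket path and event format — not Heard's actual protocol.
SOCK_PATH = os.path.join(tempfile.mkdtemp(), "heard.sock")
received = []

# Daemon side: listens on a Unix datagram socket and queues narration events.
server = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
server.bind(SOCK_PATH)

def daemon_loop():
    while True:
        data, _ = server.recvfrom(4096)
        received.append(data.decode())  # in Heard, this would feed the TTS queue

threading.Thread(target=daemon_loop, daemon=True).start()

def emit(event: str) -> None:
    """Hook side: send one datagram and return immediately — never block."""
    client = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        client.sendto(event.encode(), SOCK_PATH)
    except OSError:
        pass  # daemon not running: drop the event rather than stall the agent
    finally:
        client.close()

emit("tool_call: pytest auth.py")
time.sleep(0.2)  # give the daemon thread a moment to pick it up
```

Because the hook never waits for a reply, synthesis and playback latency stay entirely on the daemon side.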

Is this open source? How do I contribute?

Yes — Apache 2.0. The easiest places to contribute are adapters (heard/adapters/), personas (heard/personas/*.md), and verbosity profiles (heard/profiles/*.yaml).

Compatibility

macOS 13+ · Claude Code + Codex first-class · Cursor and Aider planned · anything else via heard run.

Status

v0.4 — multi-agent routing, profile-based verbosity, automatic ElevenLabs ⇄ Kokoro failover. Used daily by the author. APIs may still change before v1.

License

Apache 2.0.
