English | 简体中文
A desktop pet that lives on your screen and eats the AI tokens you burn — feeds on Claude Code and Codex CLI today (Cursor support coming).
Privacy first: nom never sends your token data anywhere. It only reads usage numbers (not prompts/responses) from local transcripts, stores everything in `~/.nom/` on your machine, and you can `rm -rf ~/.nom` at any time.
- Eats tokens in real time, from multiple agents — tails `~/.claude/projects/*.jsonl` (Claude Code) and `~/.codex/sessions/**/*.jsonl` (Codex CLI); see the parsing sketch after this list. Toggle each source independently from the right-click menu.
- Greets new sessions — wakes up and bubbles a hello when you open a new Claude Code session.
- Wanders on its own — strolls around the screen between bursts of activity, like a real desktop companion (toggle off via right-click).
- Skin support — install any petdex pack with `npx petdex install <slug>`, then right-click → 选择宠物 to switch on the fly. No restart.
- Sleeps when idle, wakes when you're back — 30 min of silence and it dozes off.
- Chat-card bubbles — contextual lines on session start, milestones, click-to-talk, eating bursts. Local templates by default; optional LLM upgrade for dynamic, situation-aware lines (see below).
- Drag anywhere on the pet to move it; the window position is remembered across restarts.
- Multi-display friendly — `⌘⌥N` summons it back to whichever screen your cursor is on.
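For a sense of what "eats tokens" means mechanically, here is a minimal sketch of that kind of JSONL parsing, assuming Node — the paths above are nom's real sources, but the helper and the object shapes it probes are illustrative, not nom's actual code:

```ts
import { createReadStream } from 'node:fs';
import { createInterface } from 'node:readline';

// Hypothetical sketch: sum usage token counts from one transcript file.
// Only the usage numbers are read; prompt/response text is never touched.
async function sumUsage(transcriptPath: string): Promise<number> {
  const rl = createInterface({
    input: createReadStream(transcriptPath),
    crlfDelay: Infinity,
  });
  let total = 0;
  for await (const line of rl) {
    try {
      const entry = JSON.parse(line);
      // Where `usage` sits varies by agent — an assumption here, not nom's schema.
      const usage = entry?.message?.usage ?? entry?.usage;
      if (usage) {
        total += (usage.input_tokens ?? 0) + (usage.output_tokens ?? 0);
      }
    } catch {
      // Skip partially written or non-JSON lines.
    }
  }
  return total;
}
```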
Download links always point to the latest release — bookmark and forget.
- Apple Silicon (M1/M2/M3/M4): `nom-arm64.dmg`
- Intel Mac: `nom-x64.dmg`
Drag nom.app into /Applications. On first launch macOS will block it — go to System Settings → Privacy & Security, scroll to the bottom, and click Open Anyway next to nom. Confirm in the dialog and it will launch normally from then on.
`nom-setup.exe` — NSIS installer wizard, x64
Double-click the installer and walk through the wizard. You'll get a desktop shortcut and a Start Menu entry.
Browsing all versions: Releases page.
Browse the catalogue at petdex.crafter.run and install any pack:
```bash
npx petdex install boba   # or doraemon, goku-blue, ...
```
Right-click the pet → 选择宠物 → pick your new skin. Pets live in `~/.codex/pets/<slug>/` and `~/.nom/pets/<slug>/`.
| Item | What it does |
|---|---|
| ☑ 允许游走 | Toggle auto-wander on/off |
| ☐ AI 台词 | Toggle LLM-powered dialogue (see below) |
| 数据源 → | Per-source on/off (Claude Code, Codex) |
| 选择宠物 → | Switch among installed petdex skins |
| 打开配置文件 | Open `~/.nom/state.json` for manual edits |
| 关闭宠物 | Quit |
Plus a global shortcut: ⌘⌥N (Mac) / Ctrl+Alt+N (Win) to summon the pet to the current screen.
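Under the hood, a summon shortcut like this can be wired up with Electron's `globalShortcut` and `screen` modules. A minimal sketch — the window handling and offsets are assumptions, not nom's implementation:

```ts
import { app, BrowserWindow, globalShortcut, screen } from 'electron';

// Hypothetical sketch: move a window to whichever display holds the cursor.
// Call after app.whenReady(); accelerators can't be registered before that.
function registerSummon(win: BrowserWindow): void {
  // 'CommandOrControl+Alt+N' resolves to ⌘⌥N on macOS, Ctrl+Alt+N on Windows.
  globalShortcut.register('CommandOrControl+Alt+N', () => {
    const cursor = screen.getCursorScreenPoint();
    const { workArea } = screen.getDisplayNearestPoint(cursor);
    // Drop the pet near the bottom-left corner of that display's work area.
    win.setPosition(workArea.x + 40, workArea.y + workArea.height - 200);
    win.show();
  });
}

app.on('will-quit', () => globalShortcut.unregisterAll());
```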
By default nom speaks from a local template file — fully offline, deterministic, no network. If you want context-aware lines (e.g. "凌晨两点了还在用 Claude,你这个 prompt 写得有点暴躁啊" — roughly, "It's 2 a.m. and you're still on Claude; that prompt sounds a little grumpy"), wire it to any OpenAI-compatible chat-completions endpoint — your own Anthropic key, an Ollama instance, a self-hosted model, anything that speaks the OpenAI API. A sketch of the request shape follows the setup steps below.
- Right-click the pet → 设置… (or press `⌘,` / `Ctrl+,`)
- Find the AI 台词 card and flip the toggle on
- Fill in Endpoint / Model / API Key (the key is optional for endpoints that don't require auth)
- Click 测试连接 — you'll get a real reply preview if it's wired up correctly, or a specific error (HTTP code, empty content, timeout, etc.) if not
- Click 保存 AI 配置
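For a sense of the request shape, here is a minimal sketch of a metadata-only call to a generic OpenAI-compatible endpoint. The URL (an Ollama default), model name, context fields, and prompt are all assumptions, not nom's actual wire format:

```ts
// Hypothetical sketch: ask the endpoint for one bubble line, metadata only.
interface BubbleContext {
  trigger: string;     // e.g. 'session-start', 'milestone'
  timeOfDay: string;   // e.g. '02:14'
  tokensToday: number;
  petName: string;
}

async function fetchBubbleLine(ctx: BubbleContext, fallback: string): Promise<string> {
  try {
    const res = await fetch('http://localhost:11434/v1/chat/completions', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: 'llama3.1',
        max_tokens: 60,
        messages: [
          { role: 'system', content: 'Reply with one short, playful line for a desktop pet.' },
          // Metadata only — the actual conversation is never forwarded.
          { role: 'user', content: JSON.stringify(ctx) },
        ],
      }),
      signal: AbortSignal.timeout(5_000),
    });
    const data: any = await res.json();
    return data?.choices?.[0]?.message?.content?.trim() || fallback;
  } catch {
    return fallback; // endpoint down or timed out → local template line
  }
}
```

The catch-all fallback mirrors the privacy contract below: any failure quietly degrades to a local template line.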
Model picking: nom asks for a one-sentence reply, so the speed difference between a mini/chat/instruct model and a reasoning model (o1, r1, M2, qwq…) is large — thinking models burn extra tokens on internal reasoning before saying a single line. nom does send `enable_thinking: false` / `reasoning_effort: 'none'` and friends across the major vendor dialects, and falls back to `reasoning_content` if the server emits the reply there, but if you want snappy bubbles, pick a non-reasoning model.
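In wire terms, that suppression and fallback might look like the sketch below — which flags a given server honors (or silently ignores) varies by vendor, and the model name is just a placeholder:

```ts
// Hypothetical sketch: reasoning-suppression flags across vendor dialects.
const body = {
  model: 'qwq-32b', // placeholder
  max_tokens: 60,
  messages: [{ role: 'user', content: 'One short line, please.' }],
  enable_thinking: false,   // Qwen-style dialect
  reasoning_effort: 'none', // OpenAI-style reasoning dialect
};

// Some servers emit the reply in `reasoning_content` instead of `content`.
function extractLine(data: any): string {
  const msg = data?.choices?.[0]?.message;
  return (msg?.content ?? msg?.reasoning_content ?? '').trim();
}
```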
Privacy contract: only metadata (trigger type, time of day, token counts, pet name) ever leaves your machine. Your prompts and Claude's responses are never sent to the LLM endpoint. Failed / timed-out LLM calls silently fall back to the local templates — the pet keeps working even if your endpoint goes down.
```bash
npm install
npm run dev        # electron-vite dev with HMR
npm run typecheck  # tsc --noEmit
npm run pack:mac   # build .dmg → release/
npm run pack:win   # build .exe → release/
```
Requires Node ≥ 18.
Architecture, technical decisions, and reasoning are in CLAUDE.md. Product scope and out-of-scope items are in PRODUCT.md.
nom is paranoid by design:
- No network calls by default. The base experience is fully offline — everything comes from your local Claude Code / Codex transcripts. The optional AI dialogue feature is the only thing that can hit the network, and only when you explicitly enable it and configure an endpoint.
- No prompt/response content ever read or sent. nom only parses `usage.{input,output,cache_*}_tokens` numbers from the JSONL. When AI dialogue is on, only metadata (trigger, time, counts) goes to your LLM endpoint — never the actual conversation.
- Startup re-reads historical JSONL. Each launch runs two passes (sketched after this list): a fast 7-day scan (so "yesterday's recap" and today's counter have data immediately) and a full lifetime scan in the background (used to self-heal if `~/.nom/state.json` was wiped or tampered with). Both passes read only `usage.*_tokens` numbers — no prompts, no responses, no file paths leave your machine. Results stay in `~/.nom/state.json`.
- All state is local. `~/.nom/state.json` is human-readable JSON. Nuke the dir to fully reset.
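A minimal sketch of that two-pass startup, with hypothetical helpers (`scanTranscripts`, `saveState`) standing in for whatever nom actually does:

```ts
// Hypothetical sketch: fast recent scan first, full self-heal scan deferred.
const DAY_MS = 24 * 60 * 60 * 1000;

async function startupScan(
  scanTranscripts: (sinceMs?: number) => Promise<number>, // sums usage.*_tokens only
  saveState: (patch: { recent?: number; lifetime?: number }) => void,
): Promise<void> {
  // Pass 1: last 7 days — today's counter and yesterday's recap render immediately.
  saveState({ recent: await scanTranscripts(Date.now() - 7 * DAY_MS) });

  // Pass 2: full history in the background — self-heals ~/.nom/state.json
  // if it was wiped or edited.
  setImmediate(async () => {
    saveState({ lifetime: await scanTranscripts() });
  });
}
```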
MIT for source code. Bundled sprite assets carry their own licenses — see CREDITS.md.
