Resume Codex CLI conversations in one keystroke, keep institutional memory in sync, and share it across terminals or the web.
- Why Codextendo
- Requirements
- Install & First Run
- How It Works
- Workflows
- Configuration Cheat Sheet
- Cost & Model Controls
- Data & Security
- Troubleshooting & FAQ
- Dashboard
- Contributing
- Keeping Up To Date
- License
Codextendo stitches together the Codex CLI transcripts you already generate and turns them into resumable conversations, rich summaries, and dashboards. Instead of scrolling through JSONL files, you can:
- jump back into the right session in one command
- leave breadcrumbs for teammates via structured summaries and history timelines
- review everything from a browser-friendly dashboard backed by the same data
- Bash or a compatible shell that can source Bash functions
- Python 3.9+
- `jq` for JSON parsing
- `requests` for API calls
- `tiktoken` (optional – precise token counts)
- `OPENAI_API_KEY` exported in the environment for summarisation commands
- Clone and install the helpers:
  ```bash
  cd ~/tools
  git clone https://github.com/BranchManager69/codextendo.git
  cd codextendo
  ./install.sh
  ```
- Reload your shell so the new functions and config helpers take effect:
  ```bash
  source ~/.bashrc   # or open a new terminal
  ```
- Try resuming a recent conversation:

  ```bash
  codextendo "missing greetings"
  ```

  Select a result by number, and Codextendo reopens the session while prompting the assistant for a Past/Present/Future recap so you can restart with context.
Tip: Codextendo ships shell helpers, a Python summariser, and a lightweight config/pricing layer. Rerun `./install.sh` any time after updating the repo to refresh `~/.codextendo/`.
Codextendo keeps minimal state on disk and leans on your existing Codex transcripts. Preferences such as the default summariser model or UI options live in `~/.codextendo/config.json`, which both the CLI and dashboard read unless you override them via environment variables.
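Since the config file is plain JSON and `jq` is already a requirement, you can peek at the persisted preferences directly. A minimal sketch — the key names are assumptions, so inspect your own file for the exact schema:

```bash
# Dump the persisted preferences. Specific key names (e.g. a
# summariser-model field) are assumptions — check your own
# config.json for the real schema.
jq '.' "$HOME/.codextendo/config.json"
```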
```mermaid
flowchart LR
  A[codextendo CLI] -->|reads| B(~/.codex/sessions/*.jsonl)
  A -->|kicks off| C{{Python summariser}}
  C -->|writes| D(~/.codextendo/summaries/*.json)
  C -->|writes| E(~/.codextendo/summaries/*.md)
  C -->|appends| F(~/.codextendo/summaries/*.history.md)
  D & E & F --> G[index.json metadata]
  G --> H[Terminal helpers & dashboard]
  A --> I[OpenAI API]
```
- `codextendo <query>` fuzzy-searches Codex sessions, shows the newest matches, and resumes the one you pick. The helper sends a recap message (customisable) so the assistant can tell you Past/Present/Future in one response.
- `codextendo --summary` (or `--summarize`) creates Markdown + JSON summaries and appends a timeline entry to `<session>.history.md`.
- `codextendo refresh` walks every transcript, reuses cached hashes to skip unchanged files, and updates `index.json` for fast lookup.
- All summaries land in `~/.codextendo/summaries/` along with `index.json`, enabling instant lookups by CLI helpers or the optional Next.js dashboard.
- Set `CODEXTENDO_SUMMARY_DIR` to point the CLI and dashboard at a shared location (e.g., synced storage).
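For example, pointing every machine at a synced directory could look like this (the path is illustrative — use whatever synced location you prefer):

```bash
# Share summaries across machines via a synced folder (path is illustrative).
export CODEXTENDO_SUMMARY_DIR="$HOME/sync/codextendo-summaries"
mkdir -p "$CODEXTENDO_SUMMARY_DIR"
codextendo refresh --limit 10   # new summaries land in the shared directory
```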
- When: you just jumped back into a terminal and need the right conversation fast.
- Command: `codextendo "enable transcript logs"`
- What you get: fuzzy-matched sessions, a numeric picker, automatic recap prompt, and the Codex CLI resumes where you left off.
- Follow-up: export the session with `codexopen <index> --dest ~/codex-replay` if you need the raw JSONL later.
- When: you want clean notes for teammates or to close out a task.
- Command: `codextendo --summarize "branch manager onboarding"`
- What you get: `<session>.json`, `<session>.md`, and `<session>.history.md` in the summary directory, each stamped with model, token counts, highlights, concerns, and next steps.
- Follow-up: share the Markdown file or let the dashboard pick it up automatically.
- When: you need summaries to stay fresh without manual runs.
- Command: `codextendo refresh --limit 50`
- What you get: the newest N transcripts reprocessed, cache-aware skipping for untouched sessions, and an updated `index.json`.
- Follow-up: add the command to cron/launchd; combine with `--force` if you want a full rebuild (e.g., after template changes). The dashboard’s Bulk Summarize card shows the same progress if you prefer to launch runs there.
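A cron entry for the nightly case might look like the sketch below. Because `codextendo` is a sourced shell function rather than a standalone binary, the job loads `~/.bashrc` first — adjust the path and schedule to your setup:

```bash
# Sketch: nightly refresh at 03:00. Sourcing ~/.bashrc makes the
# codextendo function available to cron; adjust if yours lives elsewhere.
( crontab -l 2>/dev/null
  echo '0 3 * * * . "$HOME/.bashrc" && codextendo refresh --limit 50 >> "$HOME/.codextendo/logs/cron-refresh.log" 2>&1'
) | crontab -
```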
- When: you want to sanity-check spend or pick a cheaper model before kicking off a refresh.
- Where: open the dashboard sidebar, expand Bulk Summarize, and review the live token stats + estimated prompt/completion dollars pulled from the bundled pricing catalogue.
- What you get: a dropdown of supported models, clarity on how many sessions still lack recorded usage, and projected costs for the pending run versus the entire dataset.
- Follow-up: adjust the model (unless `CODEXTENDO_SUMMARY_MODEL` locks one in place) and launch the run once the forecast looks right.
- When: you prefer a UI to scan sessions, filter by labels, or demonstrate history.
- Commands:

  ```bash
  cd dashboard
  npm install
  npm run dev
  ```

- What you get: a Next.js app on `http://localhost:3000` that groups conversations by project (derived from their cwd) and lets you drill into each session summary.
- Follow-up: run `npm run build && npm run start` for production and point `CODEXTENDO_SUMMARY_DIR` wherever the summaries live in prod.
- When: you need to understand how many tokens (≈ words/pages) each conversation will feed to the summariser.
- Command: `codextendo session-sizes --details --limit 15`
- What you get: per-session kept tokens, word/page estimates, truncation flags, and (with `--details`) a payload breakdown between real dialogue and tool noise.
- Follow-up: trim noisy transcripts, lower token limits, or chunk long conversations before kicking off an expensive refresh.
- Need machine-readable data? Add `--json` (and `--session <id>` to focus on one run) to emit structured metrics, including reachback timestamps and transcript totals. Example: `codextendo session-sizes --session 01998964-b3c1-7b23-8335-1414db4c323f --details --thresholds --json`
- When: you want to read the conversation itself (not just summaries) without opening the JSONL manually.
- Command: `codextendo transcript --session 01998964-b3c1-7b23-8335-1414db4c323f --page 1 --page-size 200 --json`
- What you get: newest segments with timestamps, payload type, text, and optional token counts. Increase `--page` to fetch older slices, or add `--order oldest` to read from the beginning. Drop `--json` for a quick textual preview.
| Setting | Default | Purpose | Change it when |
|---|---|---|---|
| `CODEXTENDO_SUMMARY_MODEL` | `gpt-5` | Preferred summariser model (wins over config/UI choices). | You need a fixed model for policy/compliance or want a cheaper/faster option. |
| `CODEXTENDO_SUMMARY_TOKEN_LIMIT` | `200000` | Upper bound on transcript tokens passed to the summariser. | Sessions exceed the limit or you want to cap API usage. |
| `CODEXTENDO_SUMMARY_DIR` | `~/.codextendo/summaries` | Destination for summaries, history, and `index.json`. | Storing output on shared or remote storage, or pointing the dashboard elsewhere. |
| `CODEX_LABEL_FILE` | `~/.codex/search_labels.json` | Where session labels are persisted. | Coordinating labels with teammates or keeping them in repo-controlled storage. |
| `CODEXTENDO_RESUME_PROMPT` | rich Past/Present/Future prompt | Default catch-up message sent on resume. | You want a different onboarding prompt. |
| `CODEXTENDO_RESUME_PROMPT_DISABLED` | unset | Disables the automatic recap entirely. | You have your own recap flow or don’t want Codex to speak first. |
| `codextendo --summary-dir PATH` | inherits from env | CLI override for summary location. | Ad-hoc runs to a temporary directory. |
| `codextendo --sessions-dir PATH` | `~/.codex/sessions` | Source of Codex transcripts. | Codex stores transcripts somewhere else or you are replaying archived sessions. |
| `codextendo refresh --force` | off | Rebuild every summary ignoring cache. | You updated templates or fixed a summariser bug. |
| `~/.codextendo/config.json` | auto-created | Persisted preferences (e.g., summariser model) shared by CLI + dashboard. | Use the dashboard to edit, or tweak manually when you need durable defaults without env vars. |
Environment variables always override the config file; rely on them for temporary overrides or when you must enforce a global policy.
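For a one-off run that must not disturb persisted preferences, prefix the command with the variable. The model name below is illustrative — pick one your pricing catalogue actually lists:

```bash
# The env var beats config.json for this invocation only; "gpt-5-mini"
# is an illustrative model name, not necessarily one your catalogue has.
CODEXTENDO_SUMMARY_MODEL=gpt-5-mini codextendo refresh --limit 5
```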
- Codextendo ships OpenAI’s model pricing catalogue at `resources/pricing/openai_api_model_pricing.json`; `./install.sh` mirrors it to `~/.codextendo/resources/pricing/` for runtime use.
- The dashboard’s Bulk Summarize card reads the catalogue, loads token stats, and estimates the prompt/completion cost for both pending work and the full dataset before you press run.
- Use the same card to pick a different summariser model. Your choice is saved to `~/.codextendo/config.json` so future CLI jobs honour it. Set `CODEXTENDO_SUMMARY_MODEL` when you need to enforce a specific deployment.
- Automations can load the pricing file via `tools/codextendo/pricing.py` or hit the dashboard’s `/api/cost-estimate` endpoint. The schema is documented in `OPENAI_PRICING_SCHEMA.md`.
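For instance, a script could query the endpoint on a locally running dashboard. The request parameters and response shape are assumptions here — consult `OPENAI_PRICING_SCHEMA.md` and the route’s source for the real contract:

```bash
# Sketch: fetch the current cost estimate from a local dashboard instance
# and pretty-print it. Port and response shape are assumptions.
curl -s http://localhost:3000/api/cost-estimate | jq .
```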
- Stored artefacts: Codex transcripts stay in `~/.codex/sessions/`; Codextendo writes summaries, Markdown, history logs, and `index.json` under `~/.codextendo/summaries/` (or your configured directory).
- Config & logs: `~/.codextendo/config.json` holds persisted preferences, and long-running refreshes stream into `~/.codextendo/logs/refresh-*.log`. Delete them if you need to reset preferences or purge history.
- Retention: delete a session by removing the transcript (`~/.codex/sessions/<id>.jsonl`) and associated summary files. Re-run `codextendo refresh` to clean up `index.json`.
- Redaction: sanitise transcripts before sharing by editing the JSONL files or by post-processing the Markdown summaries. The helper does no automatic PII scrubbing.
- Credentials: summarisation calls the OpenAI API using `OPENAI_API_KEY`; keep the key exported in your shell and rotate it per your organisation’s policy.
- Network footprint: only the summariser hits external APIs. Resume flows operate on local transcripts, so you can use Codextendo while offline as long as summaries already exist.
- Picker prompt missing: reload your shell (`source ~/.bashrc`) after reinstalling. The prompt writes to `/dev/tty`, so it will not appear in captured transcripts even when working.
- `ModuleNotFoundError: tiktoken`: install optional dependencies with `pip install -r requirements.txt` (or `pip install tiktoken requests`). Summaries still run without `tiktoken`, but token counts fall back to estimates.
- `OPENAI_API_KEY` not set: export the key before running summary commands, or set it in your shell profile.
- Stale summaries after editing transcripts: run `codextendo refresh --force` to rebuild everything and regenerate `index.json`.
- Dashboard not showing new sessions: confirm it points at the same `CODEXTENDO_SUMMARY_DIR` and restart after big rebuilds to bust Next.js file caches.
- The dashboard now ships in-repo at `dashboard/` and reads `index.json` plus all JSON summaries from `CODEXTENDO_SUMMARY_DIR`.
- For development: `cd dashboard && npm install && npm run dev` to launch `http://localhost:3000` with hot reloads.
- For production: `npm run build && npm run start`, optionally with `PORT`/`HOST`. Deploy under PM2, systemd, or any Node process manager.
- Remote/staging data: set `CODEXTENDO_SUMMARY_DIR` (or `.env.local`) to point at a mounted drive, S3 sync folder, or shared volume. All paths resolve via Node, so absolute and `~/` locations both work.
- Sessions are grouped by project (derived from the stored cwd). Use `codextendo metadata` to backfill cwd metadata into existing summaries without spending tokens.
- Session cards show cwd, latest activity, token counts, and provide `codex://` resume links so you can hop straight back into the CLI.
Open issues or pull requests if you have ideas for new workflows, integrations, or documentation upgrades. The helper scripts are Bash and Python; the dashboard is Next.js. Please run `./install.sh` after local changes to verify the installed copy stays in sync.
- Pull the latest changes and reinstall:
  ```bash
  cd ~/tools/codextendo
  git pull
  ./install.sh
  source ~/.bashrc
  ```
MIT