Open-source, local-first memory for any tool-capable LLM agent.
Think OpenAI Chronicle - but open, model-agnostic, inspectable, and hackable.
Status: v0.1.0 · macOS only · early alpha
OpenChronicle gives AI agents a local, inspectable memory built from real screen and app context.
It runs on your Mac, captures structured context from what you're doing, and turns it into persistent Markdown memory: what you're working on, what you've decided, which tools you use, and which people or projects matter.
Any agent that can call tools can use it. MCP clients work especially well today, but OpenChronicle is meant to be a general memory layer for tool-using agents - not something tied to one protocol, one model provider, or one app.
OpenAI Chronicle points to an important future: agents that remember your real working context.
OpenChronicle is our open alternative:
- Local-first - memory stays on your machine
- Model-agnostic - use Ollama, LM Studio, OpenAI, Anthropic, or any LiteLLM-compatible provider
- Tool-friendly - usable by any tool-capable agent
- Inspectable - Markdown on disk, SQLite locally
- Open - MIT-licensed and built to be extended
OpenChronicle currently prioritizes AX-tree (accessibility-tree) context as its primary signal, with screenshots planned as a secondary signal over time.
We think this is the right tradeoff for an early memory system:
- Lower cost - structured text is far cheaper to process than screenshot-heavy OCR / vision pipelines
- Better intent capture - AX is often better for active app, focused element, edited text, URL, and interaction state
- Smaller, cleaner memory - easier to deduplicate, normalize, index, and retain long-term
- Better foundation - screenshots can later enrich visual context where AX falls short
AX-first for accurate, compact, low-cost memory; screenshot-assisted for richer multimodal context.
| | OpenAI Chronicle | OpenChronicle |
|---|---|---|
| Source | Closed | MIT, open-source |
| Model choice | OpenAI-centric | Your choice |
| Who can use it | Product-specific workflow | Any tool-capable agent |
| Primary capture | Screenshot / OCR-heavy | AX Tree first, screenshot-assisted |
| Storage | Local generated memories | Markdown + SQLite on your machine |
| Extensibility | Limited | Hackable parsers, memory logic, integrations |
```mermaid
flowchart LR
W[mac-ax-watcher<br/>events]
S0["<b>S0</b> dispatcher<br/>dedup · debounce<br/>min-gap"]
S1["<b>S1</b> parser<br/>focused_element<br/>visible_text · url"]
BUF[(capture-buffer<br/>/*.json)]
TL["Timeline<br/>normalizer<br/>1-min · verbatim"]
TB[(timeline_blocks)]
SM["Session mgr<br/>idle 5m · app-switch 3m<br/>max 2h"]
S2["<b>S2</b> reducer"]
ED[(event-<br/>YYYY-MM-DD.md)]
CLF["Classifier<br/>→ user- / project- / tool- /<br/>topic- / person- / org-*.md"]
STORE[("SQLite FTS5<br/>+ Markdown")]
W --> S0 --> S1 --> BUF --> TL --> TB --> S2 --> ED --> CLF --> STORE
ED --> STORE
BUF -. pre_capture_hook<br/>(post-write · skipped on content-dedup) .-> SM
SM -. flush 5m / on_end .-> S2
TB -. grounding .-> CLF
```
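To make the S0 stage concrete, here is a toy dedup / min-gap dispatcher in Python. All names and thresholds here are illustrative, not OpenChronicle's actual implementation:

```python
import time
import hashlib

class Dispatcher:
    """Toy S0-style dispatcher: drops exact-duplicate events
    (content dedup) and enforces a minimum gap between accepted
    events. Hypothetical sketch, not the real OpenChronicle code."""

    def __init__(self, min_gap_s=2.0, clock=time.monotonic):
        self.min_gap_s = min_gap_s
        self.clock = clock  # injectable for testing
        self._last_hash = None
        self._last_accept = float("-inf")

    def accept(self, event: str) -> bool:
        digest = hashlib.sha256(event.encode()).hexdigest()
        now = self.clock()
        if digest == self._last_hash:
            return False  # content dedup: identical snapshot, skip
        if now - self._last_accept < self.min_gap_s:
            return False  # min-gap: too soon after the last accept
        self._last_hash = digest
        self._last_accept = now
        return True
```

A real dispatcher would also debounce bursts of AX events; the key idea is that cheap filtering happens before any model sees the data.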
The core idea is simple:
- capture context
- compress it into sessions
- extract durable facts
- store memory locally
- let agents query it through tools
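The last two steps, store locally and let agents query, can be sketched with the standard-library `sqlite3` module and FTS5 (the storage backend the pipeline diagram names). The schema and file names below are illustrative, not OpenChronicle's actual layout:

```python
import sqlite3

# Toy full-text index over Markdown memory files (hypothetical schema).
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE memory USING fts5(path, body)")
con.executemany(
    "INSERT INTO memory VALUES (?, ?)",
    [
        ("project-openchronicle.md", "Local-first memory layer for agents"),
        ("tool-uv.md", "uv manages the Python environment and test runs"),
    ],
)
# An agent tool call would translate a query into a MATCH like this:
rows = con.execute(
    "SELECT path FROM memory WHERE memory MATCH ? ORDER BY rank",
    ("memory",),
).fetchall()
```

Because the canonical store is Markdown on disk, the index can always be rebuilt from the files (`openchronicle rebuild-index`).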
- Event-driven capture from macOS AX events
- Session-aware memory writing instead of noisy per-snapshot logs
- Human-readable Markdown memory
- Local SQLite indexing
- Structured memory files: `user-*.md`, `project-*.md`, `tool-*.md`, `topic-*.md`, `person-*.md`, `org-*.md`, and daily `event-YYYY-MM-DD.md`
- Supersede-not-delete history
- Local or cloud model support
- Always-on agent-readable interface, with MCP as the best-supported path today
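The supersede-not-delete behavior can be sketched as an append-only fact log where a newer fact points at the one it replaces instead of overwriting it. Field and class names here are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fact:
    id: int
    text: str
    superseded_by: Optional[int] = None  # set when replaced, never deleted

class FactLog:
    """Append-only fact store: updates mark old facts superseded,
    so the full history stays inspectable on disk."""

    def __init__(self):
        self.facts: list[Fact] = []

    def add(self, text: str) -> Fact:
        fact = Fact(id=len(self.facts), text=text)
        self.facts.append(fact)
        return fact

    def supersede(self, old_id: int, text: str) -> Fact:
        new = self.add(text)
        self.facts[old_id].superseded_by = new.id
        return new

    def current(self) -> list[Fact]:
        return [f for f in self.facts if f.superseded_by is None]
```

The payoff is auditability: an agent (or a human) can always ask not just what the current fact is, but what it used to be and when it changed.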
Requires macOS 13+ and Xcode Command Line Tools (`xcode-select --install`).

```bash
git clone https://github.com/openchronicle/openchronicle.git
cd openchronicle
bash install.sh
```

Daemon lifecycle:

```bash
openchronicle start
openchronicle start --foreground
openchronicle status
openchronicle pause
openchronicle resume
openchronicle stop
```

Useful inspection commands:
```bash
openchronicle capture-once
openchronicle timeline tick
openchronicle timeline list
openchronicle writer run
openchronicle rebuild-index
```

OpenChronicle is designed for tool-calling agents.
The daemon hosts an MCP endpoint at:

```
http://127.0.0.1:8742/mcp
```

Supported integration paths include:
- Claude Code
- Claude Desktop
- Codex
- opencode
- custom local agents
- and more...
See docs/mcp.md for setup details.
We especially want help in three areas:

- App-specific parsing and normalization for browsers, terminals, editors, Slack, Notion, Cursor, Linear, Figma, and more.
- Session reduction, durable-fact extraction, compaction, supersede / merge logic, and retrieval quality.
- Support for more MCP clients, IDE agents, coding assistants, desktop agents, and local orchestration frameworks.
If you care about local-first agents, personal AI memory, or open context infrastructure, this project is for you.
Documentation
- docs/architecture.md - end-to-end pipeline and code layout
- docs/config.md - configuration and model setup
- docs/capture.md - event-driven capture and AX details
- docs/timeline.md - normalization and anti-hallucination design
- docs/session.md - session cutting rules
- docs/writer.md - reducer, classifier, and retry model
- docs/mcp.md - current tool surface and integrations
- docs/memory-format.md - file layout and supersede semantics
- docs/troubleshooting.md - common issues
```bash
uv sync --all-extras
uv run pytest
uv run ruff check
```

MIT.
Thanks goes to these wonderful people (emoji key):
- Qianli Ren 💻 🚧
- Bowen Wang 💻 🚧
- CrazyCalvin 💻
- Firefly 💻
- 校奥浆糊 💻
- Houde Qian 💻
- GiddensF97 🎨
- SiyiZhu1 🎨
This project follows the all-contributors specification. Contributions of any kind welcome!
