A CLI toolkit for agentic AI. Turn any MCP server — or RSS, markdown, URLs — into a typed knowledge graph. Query it from your terminal, your agent, or over MCP.
```sh
# Any MCP server → graph. One command.
npx contextix ingest mcp ./hackernews-top.mjs

# Or plain sources
npx contextix ingest markdown ~/notes
npx contextix ingest rss https://www.coindesk.com/arc/outboundfeeds/rss/

# Query the graph
npx contextix why "AI export controls"
npx contextix connect "Federal Reserve" "Bitcoin"
```

No Python. No Docker. No Neo4j. One npx command, your data stays local at `~/.contextix/graph.json`.
Your agent is smart about code, dumb about the world. Document RAG gives it text; vector search gives it snippets. Neither tells it what happened, who's involved, and how it's connected.
Contextix builds a typed causal graph from sources you choose. Agents query it via CLI — the same way they already call git, rg, or curl — so it slots into Claude Code, Cursor, Codex, Aider, or any shell-capable agent. MCP mode is bundled for Claude Desktop and MCP-native clients.
The wedge: the MCP ecosystem has 10,000+ servers exposing structured data — CoinGecko, arXiv, HackerNews, GitHub, Notion, Linear, your own — and zero graph layer on top. Contextix is that layer.
```sh
npm install -g contextix
contextix --help

# or one-shot
npx contextix ingest markdown ~/notes
```

Requires Node 20+. Optional: `ANTHROPIC_API_KEY` env var for agentic extraction (falls back to regex mode).
```sh
contextix ingest mcp <skill-file>   # Any MCP server via a skill file (JS/TS)
contextix ingest rss <url>          # RSS / Atom / RDF feed
contextix ingest markdown <dir>     # Markdown vault (frontmatter + [[wikilinks]])
contextix ingest url <url>          # Single page, OG/Twitter meta + body
contextix ingest json <file|dir>    # Pre-formatted graph fragment
```

Each ingest run:
- Fetches / reads the source (or runs a skill against an MCP server)
- Runs extraction — agentic (Haiku 4.5 with tool-use) when `ANTHROPIC_API_KEY` is set, regex otherwise. MCP skills bypass the extractor since they produce structured output directly.
- Dedups entities (`BTC` / `Bitcoin` / `bitcoin` → one canonical node)
- Merges into `~/.contextix/graph.json` with `valid_from` timestamps
Force a specific mode with `--extractor agentic|regex|auto` or the `CONTEXTIX_EXTRACTOR` env var.
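The dedup step above can be sketched as simple alias matching. This is a minimal illustration, assuming a canonical-node-plus-aliases shape like the one `graph.json` stores; the function and field names are hypothetical, not contextix internals:

```javascript
// Hypothetical sketch of alias-based entity dedup (not contextix's actual code):
// normalize a surface form, then match it against each canonical node's
// name and alias list before creating a new node.
const normalize = (s) => s.toLowerCase().replace(/[^a-z0-9]/g, "");

function resolveEntity(nodes, candidate) {
  const key = normalize(candidate.name);
  for (const node of nodes) {
    const surfaceForms = [node.name, ...node.aliases];
    if (surfaceForms.map(normalize).includes(key)) {
      // Same entity: keep the canonical node, remember the new surface form
      if (!surfaceForms.includes(candidate.name)) node.aliases.push(candidate.name);
      return node;
    }
  }
  const fresh = { ...candidate, aliases: candidate.aliases ?? [] };
  nodes.push(fresh);
  return fresh;
}

const nodes = [{ name: "Bitcoin", aliases: ["BTC"] }];
resolveEntity(nodes, { name: "bitcoin" });  // merges into the existing node
resolveEntity(nodes, { name: "Ethereum" }); // becomes a new canonical node
```

The real resolver also applies fuzzy string similarity (see the data-model section); exact alias matching is just the cheapest first pass.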
A skill is a single `.mjs` or `.js` file that tells contextix how to talk to one MCP server and what to extract. Skills live anywhere — drop one next to your project, commit it to a repo, or put it in `~/.contextix/skills/`.
```sh
# Keyless: Hacker News top stories → 20 events + author entities
contextix ingest mcp ./examples/skills/hackernews-top.mjs

# With env var: CoinGecko market snapshot
COINGECKO_DEMO_API_KEY=CG-xxx \
  contextix ingest mcp ./examples/skills/coingecko-markets.mjs

# Recent arXiv AI papers with author entities
contextix ingest mcp ./examples/skills/arxiv-ai.mjs
```

Skill file anatomy (JS, exports one `defineSkill` object):
```js
import { defineSkill } from "contextix/skill";

export default defineSkill({
  name: "coingecko-markets",
  description: "Top 20 coins + global market snapshot",
  mcpServer: {
    command: "npx",
    args: ["-y", "@coingecko/coingecko-mcp"],
    env: { COINGECKO_DEMO_API_KEY: "${COINGECKO_DEMO_API_KEY}" },
  },
  requiredEnv: ["COINGECKO_DEMO_API_KEY"],
  async run({ mcp, emit, log }) {
    const result = await mcp.callTool({
      name: "get_coins_markets",
      arguments: { vs_currency: "usd", per_page: 20 },
    });
    const coins = JSON.parse(result.content[0].text);
    for (const c of coins) {
      emit.entity({ entityType: "token", name: c.symbol.toUpperCase(), aliases: [c.name], domain: "crypto" });
      emit.event({ title: `${c.name} 24h: ${c.price_change_percentage_24h}%`, sourceName: "CoinGecko", importance: "low", tags: ["market-data"] });
    }
    log(`emitted ${coins.length}`);
  },
});
```

Full skill reference: `examples/skills/README.md`.
- Walks recursively, skips `.git`, `node_modules`, `.obsidian`, `.trash`, `_templates`
- Parses YAML frontmatter (`date`, `domain`, `tags`) — flat key/value + inline/block lists
- Wikilinks `[[X]]` become `concept` entities with `related_to` edges from the note
- File mtime is used as `detectedAt` when frontmatter lacks `date`
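The wikilink step above can be sketched in a few lines. The regex (including `[[X|label]]` display-text support) and the id format are illustrative assumptions, not contextix's exact parser:

```javascript
// Hypothetical sketch: pull [[wikilink]] targets out of a note body,
// then turn each into a concept entity plus a related_to edge.
const note = "Rate hikes hit [[Bitcoin]] and [[Risk Assets|risk]] alike.";

// Capture the link target, ignoring any |display-text suffix
const concepts = [...note.matchAll(/\[\[([^\]|]+)(?:\|[^\]]+)?\]\]/g)].map((m) => m[1]);

// One edge per concept, from the note to the concept entity
const edges = concepts.map((name) => ({
  from: "note:rates.md",
  to: `concept:${name}`,
  type: "related_to",
}));
```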
```sh
contextix signals                    # Recent events (24h default)
contextix signals --domain crypto -t 7d
contextix why "<event>"              # Causal chain (BFS backward)
contextix connect "<a>" "<b>"        # Shortest path between entities
contextix entities --search "fed"    # Entity lookup
```

Output is human-readable by default. Pass `--json` for piping:

```sh
contextix signals --json | jq '.events[] | select(.importance == "CRITICAL")'
```

```sh
contextix serve   # stdio MCP server (default)
```

Same graph, exposed as 5 MCP tools: `contextix_signals`, `contextix_why`, `contextix_connect`, `contextix_entities`, `contextix_graph`.
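The backward BFS behind `why` can be sketched as follows. This is a hedged illustration of the idea only; the real traversal lives in `src/graph/query.ts` and the function and edge shapes here are assumptions:

```javascript
// Hypothetical sketch of a backward BFS over `causes` edges: starting from the
// target event, walk cause edges in reverse up to `depth` hops.
function whyChain(edges, target, depth = 3) {
  const chain = [];
  const seen = new Set([target]);
  let frontier = [target];
  for (let d = 0; d < depth && frontier.length > 0; d++) {
    const next = [];
    for (const e of edges) {
      if (e.type === "causes" && frontier.includes(e.to) && !seen.has(e.from)) {
        seen.add(e.from);
        chain.push(e); // nearest causes first, deeper causes later
        next.push(e.from);
      }
    }
    frontier = next;
  }
  return chain;
}

const edges = [
  { from: "fed-hike", to: "yield-spike", type: "causes" },
  { from: "yield-spike", to: "sp500-drop", type: "causes" },
];
const chain = whyChain(edges, "sp500-drop");
// chain walks backward: yield-spike → sp500-drop, then fed-hike → yield-spike
```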
```sh
contextix export --format json     # Full graph dump
contextix export --format mermaid  # Mermaid diagram (roadmap)
contextix export --format cypher   # Cypher for Neo4j import (roadmap)
```

Your agent calls contextix directly — no MCP needed:

> "ingest my daily reads then tell me why the market moved"

Claude Code runs:

```sh
contextix ingest rss https://feeds.bloomberg.com/markets/news.rss
contextix why "S&P drop" --depth 3
```

`.mcp.json`:
```json
{
  "mcpServers": {
    "contextix": {
      "command": "npx",
      "args": ["contextix", "serve"]
    }
  }
}
```

```sh
# Nightly ingest
0 2 * * * contextix ingest rss https://example.com/feed.xml
```

```
SignalEvent ──causes──▶ SignalEvent
     │                       │
  involves               involves
     ▼                       ▼
  Entity ──influences──▶ Entity
```
- Edge types: `causes`, `caused_by`, `correlates`, `involves`, `influences`, `precedes`, `contradicts`
- Bi-temporal: every edge has `valid_from` / `valid_until`; invalidated edges are kept so you can reconstruct the graph at any point in time
- Confidence: every edge carries a `[0,1]` score + an evidence string
- Entity resolution: fuzzy dedup via string similarity; canonical node + alias list
- Storage: one JSON file at `~/.contextix/graph.json`. Portable, inspectable, git-friendly
Full schema: `src/graph/types.ts`.
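Bi-temporal reconstruction reduces to a filter over edge validity windows. A minimal sketch, using the `valid_from` / `valid_until` field names from the list above (ISO date strings and a half-open window are assumptions; check `src/graph/types.ts` for the real types):

```javascript
// Hypothetical point-in-time view: keep edges whose [valid_from, valid_until)
// window contains t. A null valid_until means the edge is still current.
// ISO-8601 strings compare correctly as plain strings.
function graphAt(edges, t) {
  return edges.filter(
    (e) => e.valid_from <= t && (e.valid_until == null || t < e.valid_until)
  );
}

const edges = [
  { type: "causes", from: "a", to: "b", valid_from: "2024-01-01", valid_until: "2024-06-01" },
  { type: "causes", from: "a", to: "c", valid_from: "2024-03-01", valid_until: null },
];
const april = graphAt(edges, "2024-04-01"); // both edges valid
const july = graphAt(edges, "2024-07-01");  // only the open-ended edge remains
```

Because invalidated edges are kept rather than deleted, this one filter is all it takes to replay the graph as it stood on any date.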
| | Contextix | GraphRAG / LightRAG | mcp-memory | Graphiti |
|---|---|---|---|---|
| Install | `npx contextix` | pip + indexing | MCP only | pip + Neo4j |
| MCP ecosystem ingest | ✅ via skills | ❌ | ❌ | ❌ |
| File / feed ingest | ✅ RSS / md / URL | ❌ docs only | ❌ manual writes | ❌ SDK calls |
| CLI interface | ✅ primary | ❌ Python scripts | ❌ | ❌ Python |
| MCP server mode | ✅ bundled | ❌ | ✅ only | ✅ |
| Local file graph | ✅ `graph.json` | ❌ | ✅ jsonl | ❌ Neo4j |
Contextix is not a RAG system, not a vector database, not a memory store for conversations. It's an agentic CLI that turns MCP servers, feeds, and files into a queryable typed graph.
contextix.io runs contextix on live crypto and AI sources. See what the graph looks like in production before you run it yourself.
See ROADMAP.md. Top priorities:
- More skills — ship reference skills for Notion, Linear, GitHub, Slack, Gmail, Polymarket
- Skill distribution — a `contextix skills install @contextix/crypto-pack` style registry
- Bring-your-own-model — OpenAI, Ollama, local LLM support (currently Claude Haiku 4.5 + regex)
- Graph query depth — PageRank, temporal decay, contradiction detection
- Hosted graph — optional `--hosted` mode pulls a curated crypto/AI graph from contextix.io
See CONTRIBUTING.md. Highest-impact areas:
- New skills — one `.mjs` file per MCP server. See `examples/skills`. No compile step, easy to write.
- New connectors — non-MCP source types (Slack export, Readwise, RSS variants). Each one is a function in `src/ingest/`.
- Extraction prompts — improve agentic entity/relation extraction for the non-MCP paths
- Query algorithms — `src/graph/query.ts` (PageRank, confidence propagation, temporal decay)
- Seed graph — verified events in `data/seed-graph.json`
Star the repo if this is the graph tool you wanted to exist. File issues for MCP servers you'd want a skill for.
MIT.