A collective brain for software fixes.
Every day, developers and AI agents solve the same errors that someone else already fixed yesterday — in another repo, another team, another company. That fix is lost. ShareXP makes it permanent.
ShareXP is a global resolution database. When any connected agent hits a failure, it searches the collective memory for a proven fix before trying to solve it from scratch. When it does solve something new, that fix goes back into the pool — anonymized, verified, and ranked by trust — so the next person who hits the same error gets the answer instantly.
The more people use it, the smarter it gets.
```
Developer A hits an error
  → ShareXP captures the failure context
  → Developer A fixes it
  → ShareXP verifies the fix and publishes it to the global hub
        ↓ anonymized, ranked by trust ↓
Developer B hits the same error (different repo, different company)
  → ShareXP retrieves Developer A's proven fix
  → Agent applies it — problem solved in seconds, not hours
```
Every resolution carries a confidence score that evolves. Successful reuse increases trust. Recurrence or reverts decrease it. The corpus self-corrects over time.
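The outcome loop can be pictured as a simple clamped update. This is an illustrative sketch, not the actual implementation: the deltas match the values documented under the ranking section, but the function shape and the [0, 1] clamping bounds are assumptions.

```typescript
// Illustrative sketch of the confidence-update loop. The deltas mirror the
// documented values (reuse_success +0.12, recurred -0.18, reverted -0.3);
// the function shape and [0, 1] clamping are assumptions.
type Outcome = "reuse_success" | "recurred" | "reverted";

const DELTAS: Record<Outcome, number> = {
  reuse_success: 0.12,
  recurred: -0.18,
  reverted: -0.3,
};

function updateConfidence(current: number, outcome: Outcome): number {
  // Clamp so repeated successes or failures saturate instead of overflowing.
  return Math.min(1, Math.max(0, current + DELTAS[outcome]));
}

// A fix reused successfully twice, then reverted once:
let score = 0.5;
score = updateConfidence(score, "reuse_success"); // ≈ 0.62
score = updateConfidence(score, "reuse_success"); // ≈ 0.74
score = updateConfidence(score, "reverted");      // ≈ 0.44
```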
Dead ends are tracked too — when an approach fails, ShareXP records it so the next agent doesn't waste time repeating it. Multi-step fixes are captured as resolution chains (migration playbooks), and anomaly detection watches for spikes, new failure patterns, TTR regressions, and dead-end surges across the corpus.
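A resolution chain is essentially an ordered playbook. As a rough type sketch — the field names here are assumptions for illustration; the `create_resolution_chain` tool defines the real schema:

```typescript
// Rough type sketch of a resolution chain (multi-step migration playbook).
// Field names are illustrative guesses, not the actual schema.
interface ChainStep {
  order: number;
  description: string;
  completed: boolean;
}

interface ResolutionChain {
  fingerprint: string; // failure signature this playbook applies to
  steps: ChainStep[];
}

// advance_chain_step marks a step done; the chain auto-finishes when all are.
function isChainComplete(chain: ResolutionChain): boolean {
  return chain.steps.every((s) => s.completed);
}
```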
| Tier | Scope | Storage | What it does |
|---|---|---|---|
| Local | Your machine | SQLite | Remembers your own fixes across sessions |
| Shared | Your team | PostgreSQL | Shares fixes across repos within your org |
| Global | Everyone | Central hub API | Community-wide resolution pool — the collective brain |
All three are searched in parallel on every failure. Local fixes rank highest, but when you hit something nobody on your team has seen before, the global pool has your back.
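Conceptually, the three-tier lookup looks like the sketch below. The tier names come from the table above, but the function signatures and the tier boost values are invented for illustration, not the actual API:

```typescript
// Hypothetical sketch of the parallel three-tier lookup. Tier names are from
// the table above; signatures and boost weights are illustrative assumptions.
interface Hit {
  resolutionId: string;
  trust: number;
  tier: "local" | "shared" | "global";
}

const TIER_BOOST = { local: 0.2, shared: 0.1, global: 0 }; // local ranks highest

async function findAcrossTiers(
  fingerprint: string,
  tiers: Array<(fp: string) => Promise<Hit[]>>,
): Promise<Hit[]> {
  // All tiers are queried in parallel; a tier that errors contributes nothing.
  const results = await Promise.allSettled(tiers.map((t) => t(fingerprint)));
  const hits = results.flatMap((r) => (r.status === "fulfilled" ? r.value : []));
  return hits.sort(
    (a, b) => b.trust + TIER_BOOST[b.tier] - (a.trust + TIER_BOOST[a.tier]),
  );
}
```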
Nothing leaves your machine without being scrubbed. ShareXP strips secrets, PII, file paths, and high-entropy strings before publishing to the hub. You can audit what's stored locally with `npm run sharexp:audit`. See `SECURITY.md` for the full threat model.
- Node.js >= 20

```bash
git clone https://github.com/willytop8/ShareXP.git
cd ShareXP
npm install
npm run build
```

Point your instance at the hub to start contributing to and searching the collective brain:

```bash
export ER_GLOBAL_API_URL=https://your-hub.fly.dev
```

Then run ShareXP as an MCP server:
```bash
# stdio mode (for Claude Code, Cursor, etc.)
node dist/index.js

# or HTTP mode
ER_TRANSPORT=streamable-http node dist/index.js
```

Add ShareXP as an MCP server in your Claude Code settings, then copy the hook config:

```bash
cp .claude/settings.example.json .claude/settings.json
```

This registers hooks so failures are captured and fixes are published automatically — zero manual effort.
Any ShareXP instance can be the central hub. Deploy one in HTTP mode:
```bash
ER_TRANSPORT=streamable-http node dist/index.js
```

This exposes open endpoints that any client can use:

- `POST /api/v1/global/publish` — Accept anonymized resolutions
- `POST /api/v1/global/search` — Search by fingerprint and error message

Point your team's instances at it and you have a shared resolution network. Optionally set `ER_GLOBAL_API_KEY` on both hub and clients if you want to restrict access.
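A client-side call to the search endpoint might be built like this. Only the endpoint path comes from the list above; the payload field names and the Bearer auth scheme are assumptions for illustration:

```typescript
// Illustrative request builder for the hub's search endpoint. The endpoint
// path is documented; the payload fields and Bearer scheme are assumptions.
interface SearchRequest {
  fingerprint: string;
  errorMessage: string;
}

function buildSearchRequest(hubUrl: string, req: SearchRequest, apiKey?: string) {
  return {
    url: `${hubUrl}/api/v1/global/search`,
    init: {
      method: "POST",
      headers: {
        "content-type": "application/json",
        // Only send credentials when the hub restricts access via ER_GLOBAL_API_KEY.
        ...(apiKey ? { authorization: `Bearer ${apiKey}` } : {}),
      },
      body: JSON.stringify(req),
    },
  };
}

// Usage: const { url, init } = buildSearchRequest(hub, payload); await fetch(url, init);
```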
| Tool | Description |
|---|---|
| `find_similar_resolutions` | Search local + shared + global for past fixes matching a failure |
| `capture_resolution_candidate` | Record a failure event and a draft resolution |
| `finalize_resolution` | Promote a candidate to a verified resolution with evidence |
| `record_resolution_outcome` | Report reuse success/failure, recurrence, or feedback |
| `record_dead_end` | Capture a failed approach so others don't repeat it |
| `suggest_proactive_resolutions` | Surface known pitfalls based on files being edited |
| `report_ci_outcomes` | Batch verification outcomes from CI |
| `get_dashboard_data` | Failure/resolution graph data for visualization |
| `get_operational_signals` | TTR percentiles, trending fingerprints, slowest resolutions |
| `create_resolution_chain` | Build multi-step migration playbooks with ordered steps |
| `advance_chain_step` | Complete a step in a resolution chain, auto-finishing when done |
| `get_resolution_chains` | List or look up resolution chains by fingerprint |
| `detect_anomalies` | Scan for frequency spikes, new failures, TTR regressions, dead-end surges |
| `list_anomaly_alerts` | List active anomaly alerts, filtered by type or severity |
| `acknowledge_anomaly_alert` | Mark an alert as seen without resolving it |
| `resolve_anomaly_alert` | Close an anomaly alert |
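Over MCP, each of these is invoked as a `tools/call` request. Below is a minimal sketch of the JSON-RPC payload for `find_similar_resolutions`; the tool name is real, but the argument field names are illustrative guesses — consult the Zod schemas in `src/validation/` for the actual shapes:

```typescript
// Sketch of an MCP tools/call request for find_similar_resolutions.
// The method and tool name are real; argument names are assumptions.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "find_similar_resolutions",
    arguments: {
      errorMessage: "TypeError: Cannot read properties of undefined",
    },
  },
};
```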
```bash
# Hook handler (called automatically by Claude Code hooks)
npm run sharexp:hook -- < event.json

# Wrap a command for automatic capture/finalize
npm run sharexp:run -- -- npm test

# Record an outcome manually
npm run sharexp:outcome -- --state-key <key> --outcome-kind reuse_success

# CI batch outcomes
npm run sharexp:ci -- --repo <repo> --sha <sha> --outcomes '[...]'

# Publish to shared corpus
npm run sharexp:publish-shared -- --resolution-id <id> --approve

# Create a PR from a resolution's patch
npm run sharexp:auto-pr -- <resolution-id> --dry-run

# Audit local DB for leaked secrets
npm run sharexp:audit

# Import resolutions from GitHub
npm run sharexp:import-github -- --repo owner/name
```

| Variable | Default | Description |
|---|---|---|
| `ER_GLOBAL_API_URL` | — | URL of the central ShareXP hub |
| `ER_GLOBAL_API_KEY` | — | Optional API key for restricted hubs |
| `ER_DB_PATH` | `data/registry.db` | Local SQLite database path |
| `ER_TRANSPORT` | `stdio` | `stdio` or `streamable-http` |
| `ER_HTTP_PORT` | `8787` | HTTP server port |
| `ER_SHARED_PG_URL` | — | PostgreSQL URL for team-level shared corpus |
| `ER_SHARED_DB_PATH` | — | SQLite path for shared corpus (alternative to PG) |
| `ER_AUTO_PUBLISH_SHARED` | `false` | Auto-publish verified resolutions to shared corpus |
| `ER_REDACT_ON_INGEST` | `false` | Redact detected secrets/PII instead of rejecting |
| `ER_REQUIRE_MANUAL_APPROVAL` | `false` | Require manual approval before sharing redacted records |
| `ER_DASHBOARD_TOKEN` | — | Bearer token for the dashboard HTTP endpoint |
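Putting a few of these together, a team setup with a shared PostgreSQL corpus and a central hub might look like this (the hub URL and connection string are placeholders; every variable name comes from the table above):

```shell
# Example team configuration; URLs and credentials are placeholders.
export ER_GLOBAL_API_URL=https://your-hub.fly.dev
export ER_SHARED_PG_URL=postgres://sharexp:password@db.internal:5432/sharexp
export ER_AUTO_PUBLISH_SHARED=true

# Run as an HTTP MCP server on the default port (8787)
ER_TRANSPORT=streamable-http node dist/index.js
```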
ShareXP uses hybrid search (full-text + vector embeddings) combined with deterministic trust scoring:
- Fingerprint matching — Normalized error signatures for exact and fuzzy matching
- Semantic similarity — Local vector embeddings via Transformers.js (all-MiniLM-L6-v2, fully offline)
- Trust scoring — Verification status, reuse count, confidence, recurrence, and community signals
- Outcome loop — Every reuse updates confidence (`reuse_success` +0.12, `recurred` -0.18, `reverted` -0.3)
- TTR enrichment — Results include typical time-to-resolution and difficulty scores for each fingerprint
- Dead-end awareness — Known failed approaches are surfaced alongside resolutions so agents avoid repeating them
- Resolution chains — Multi-step playbooks are returned when a fingerprint matches a chain pattern
- Pinned resolutions — Golden fixes that always rank first
- Vertical pools — Domain-scoped sharing (`saas-devops`, `data-pipeline`, etc.)
Local and same-repo results rank above cross-repo. Harmful or reverted fixes are automatically filtered.
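Fingerprint matching depends on normalizing away volatile details so the same underlying failure produces the same signature across repos and machines. The rules below are hypothetical, written only to show the idea; ShareXP's actual normalization may differ:

```typescript
// Hypothetical fingerprint normalizer: strips volatile details (paths,
// addresses, numbers) so equivalent failures hash to the same signature.
// These rules are illustrative, not ShareXP's actual implementation.
function normalizeFingerprint(errorMessage: string): string {
  return errorMessage
    .replace(/\/[^\s:]+/g, "<path>")    // absolute file paths
    .replace(/0x[0-9a-f]+/gi, "<addr>") // memory addresses
    .replace(/\b\d+\b/g, "<n>")         // line numbers, ports, counts
    .trim()
    .toLowerCase();
}
```

With rules like these, `ENOENT` on `/home/a/app/config.json` in one repo and on a different path in another repo collapse to one fingerprint, which is what lets Developer B's lookup find Developer A's fix.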
```
src/
├── tools/           MCP tool handlers (16 tools)
├── services/        Business logic (capture, finalize, search, trust, publish,
│                    dead ends, operational signals, chains, anomaly detection)
├── workflows/       Claude Code hook handler and workflow state
├── search/          Hybrid search: FTS5 + vector embeddings
├── ranking/         Deterministic trust-based ranking
├── context/         Environment context capture (OS, runtime, toolchain)
├── privacy/         Secret/PII scanning, redaction, .sharexpignore
├── validation/      Zod input schemas
├── db/              SQLite/PostgreSQL abstraction, migrations (13 migrations)
├── sharing/         Cross-repo identity for shared corpus
└── observability/   Structured JSON logging
scripts/             CLI tools and workflow helpers
tests/               Vitest test suite (125 tests across 20 files)
evaluation/          Ranking evaluation corpus
public/              Failure Explorer dashboard (standalone HTML)
```
```bash
npm run typecheck     # Type-check
npm run build         # Compile TypeScript
npm test              # Run tests
npm run test:watch    # Watch mode
```

MIT