AI memory layer for engineering leaders. Local-first. No backend. No account.
Keepr is a desktop app that turns your team's Slack and GitHub exhaust into cited weekly briefs and 1:1 prep. It runs on your laptop -- your data never touches a middleman server.
Point it at your Slack workspace and a handful of GitHub repos, pick an LLM provider, and Keepr produces an evidence-backed team pulse or 1:1 prep in about a minute. It is a desktop app, not a SaaS. No backend, no account, no analytics.
- Team pulse -- Monday-morning read of what happened across your team last week, evidence-backed and cited
- 1:1 prep -- Context for an upcoming 1:1 with recent work, open threads, and follow-up items
- Weekly engineering update -- Stakeholder-ready summary: shipped, in progress, blocked, upcoming
- Performance evaluation -- Evidence-organized eval with optional rubric mapping (scaffold, needs real-data tuning)
- Promo readiness -- Gap analysis against target level with cited evidence
- Evidence graph -- Interactive force-directed visualization of how evidence connects across sources. Zoom, pan, drag nodes, click for detail cards
- Team heatmap -- Activity grid by member and day with configurable 7/14/28-day range
- Follow-up tracker -- Kanban board for action items extracted from briefs. Open, carried, resolved columns
- Citation sync -- Click any citation to see the source evidence. Side panel with bidirectional highlighting
- Confidence indicators -- Per-section confidence badges based on evidence depth and source diversity
- Local memory -- Observed facts persisted as plain markdown files you can open in Obsidian, grep, or commit to a private repo
- Keyboard-first UI -- Command palette (Cmd+K), citation scroll, session history
- Zero telemetry -- Nothing phones home. Keepr cannot see your sessions.
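As a sketch of how a per-section confidence badge like the one above might be derived from evidence depth and source diversity (the weights and thresholds here are illustrative assumptions, not Keepr's actual algorithm):

```typescript
// Hypothetical confidence scoring: more evidence items (depth) and more
// distinct sources (diversity) push a section from "low" toward "high".
// Weights and cutoffs are illustrative only.
type Confidence = "low" | "medium" | "high";

interface EvidenceItem {
  id: string; // e.g. "ev_3"
  source: "slack" | "github" | "jira" | "linear";
}

function sectionConfidence(evidence: EvidenceItem[]): Confidence {
  const depth = evidence.length;
  const diversity = new Set(evidence.map((e) => e.source)).size;
  const score = depth + 2 * diversity; // diversity weighted higher (assumption)
  if (score >= 8) return "high";
  if (score >= 4) return "medium";
  return "low";
}
```

A section backed by one Slack message scores low; the same claim corroborated across Slack, GitHub, and Jira scores high.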
| Layer | Technology |
|---|---|
| Desktop shell | Tauri 2 (Rust) |
| Frontend | React 19, TypeScript, Tailwind CSS |
| Database | SQLite via tauri-plugin-sql |
| Secrets | macOS Keychain via keyring crate |
| LLM providers | Anthropic, OpenAI, OpenRouter, or any OpenAI-compatible endpoint |
| Data sources | Slack Web API, GitHub REST API, Jira Cloud, Linear GraphQL |
| Build | Vite, Cargo |
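The "any OpenAI-compatible endpoint" row means Keepr only needs the chat-completions request shape. A minimal sketch of building such a request (the helper name, path, and temperature are illustrative assumptions, not Keepr's actual provider code):

```typescript
// Build a chat-completions request body for any OpenAI-compatible endpoint.
// The base URL and model come from whatever the user configured at first run.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(model: string, messages: ChatMessage[]) {
  return {
    url: "/v1/chat/completions", // relative to the configured base URL
    body: { model, messages, temperature: 0.2 },
  };
}

const req = buildChatRequest("gpt-4o-mini", [
  { role: "user", content: "Summarize last week's PR activity." },
]);
```

Swapping providers is then just a matter of changing the base URL, model name, and API key.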
You need Node 20+, Rust (stable), and npm.
```sh
git clone https://github.com/keeprlabs/keepr.git
cd keepr
npm install
npx tauri dev
```

That launches the Tauri window with Vite HMR. First run walks you through:
- Pick an LLM provider (Anthropic, OpenAI, OpenRouter, or custom) and paste an API key
- Create a Slack app from the provided manifest and paste its bot token
- Connect GitHub (PAT is fastest; device flow works once you register an OAuth app)
- Optionally connect Jira and/or Linear
- Add team members (display name, GitHub handle, Slack user ID)
- Pick a memory directory (defaults to `~/Documents/Keepr`)
- Read and acknowledge the privacy posture
Then press Cmd+K and run team pulse, or type a team member's name for 1:1 prep.
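A team-member entry from the setup steps above might map to a record like this (field names and validation rules are illustrative assumptions, not Keepr's actual schema):

```typescript
// Hypothetical shape for a team-member entry; Keepr's real schema may differ.
interface TeamMember {
  displayName: string;
  githubHandle: string; // e.g. "octocat"
  slackUserId: string;  // e.g. "U024BE7LH" -- Slack user IDs start with U or W
}

// Basic sanity checks before the member is used for fetching evidence.
function isValidMember(m: TeamMember): boolean {
  return (
    m.displayName.trim().length > 0 &&
    /^[A-Za-z0-9-]+$/.test(m.githubHandle) &&
    /^[UW][A-Z0-9]+$/.test(m.slackUserId)
  );
}
```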
```sh
npx tauri build --debug --no-bundle
./src-tauri/target/debug/keepr
```

Keepr has a Claude Code plugin that lets you capture follow-ups, check status, and trigger team pulses without leaving your terminal.
```sh
# Install the desktop app first
brew install --cask keeprlabs/tap/keepr

# Then add the plugin marketplace and install
/plugin marketplace add keeprlabs/keepr
/plugin install keepr@keeprlabs-keepr
```

Available skills:

- `/keepr:keepr-add-followup` -- Capture a follow-up for a 1:1 or team conversation
- `/keepr:keepr-status` -- Check config, connected sources, last session
- `/keepr:keepr-open` -- Launch the desktop app
- `/keepr:keepr-pulse` -- Generate a team pulse
Skills also activate contextually -- mention wanting to track something for a 1:1 and Claude will suggest adding a follow-up.
See plugin/README.md for details.
```
fetch -> prune -> Haiku map -> Sonnet reduce -> write memory
```
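The prune/map/reduce stages can be sketched as plain functions (chunk size, function names, and summary format are assumptions; the map and reduce steps are stubbed here rather than making real LLM calls):

```typescript
// Sketch of the prune -> map -> reduce flow with stubbed LLM stages.
interface Evidence { id: string; text: string; timestamp: number }

// prune: drop evidence older than the lookback window
function prune(items: Evidence[], sinceMs: number): Evidence[] {
  return items.filter((e) => e.timestamp >= sinceMs);
}

// map: summarize each chunk (a cheap model like Haiku in the real pipeline);
// stubbed as a string listing the chunk's evidence IDs
function mapChunks(items: Evidence[], chunkSize: number): string[] {
  const out: string[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize);
    out.push(`summary of ${chunk.map((e) => e.id).join(",")}`);
  }
  return out;
}

// reduce: merge the partial summaries into one brief (Sonnet in the real pipeline)
function reduceSummaries(parts: string[]): string {
  return parts.join("\n");
}
```

The cheap-map/strong-reduce split keeps cost down: the expensive model only ever sees pre-digested summaries, not raw evidence.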
- `src-tauri/` -- Thin Rust shell. SQLite bridge, OS keychain bridge (`secrets.rs`), atomic file I/O with lock (`fs_atomic.rs`). No OAuth callback server, no background workers.
- `src/services/` -- TypeScript business logic. DB, secrets, GitHub, Slack, Jira, Linear, LLM providers, the map/reduce pipeline, and the memory layer.
- `src/prompts/` -- Prompt templates as plain markdown files, imported via `?raw`. Tune them by editing the file and reloading.
- `src/components/` and `src/screens/` -- React 19 UI. Command-palette-first navigation, keyboard shortcuts, bidirectional citation scrolling.
- `~/Documents/Keepr/` (or wherever you pointed it) -- Canonical memory. Plain markdown files you can open in Obsidian, grep, or commit to a private repo. Keepr's SQLite is metadata only; the memory itself is files on disk.
Evidence items get stable `ev_N` IDs. The LLM cites by ID only; the app resolves IDs to URLs at render time. Memory files persist observed facts only -- interpretations live in the session file for that run and never get appended to memory.
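Render-time citation resolution might look like this (the function name, citation bracket syntax, and example URL are illustrative assumptions):

```typescript
// Replace [ev_N] citation markers in brief text with markdown links,
// using an id -> URL map built from the local evidence store.
function resolveCitations(text: string, urls: Map<string, string>): string {
  return text.replace(/\[(ev_\d+)\]/g, (match, id: string) => {
    const url = urls.get(id);
    return url ? `[${id}](${url})` : match; // unknown IDs pass through unchanged
  });
}

const rendered = resolveCitations(
  "Shipped the retry fix [ev_3].",
  new Map([["ev_3", "https://github.com/org/repo/pull/42"]])
);
// rendered: "Shipped the retry fix [ev_3](https://github.com/org/repo/pull/42)."
```

Keeping URLs out of the LLM's context both saves tokens and makes a hallucinated link impossible: the model can only emit IDs that the app then checks against real evidence.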
The honest version:
- Keepr operates no servers. There is no backend, no analytics, no telemetry. Keepr cannot see your sessions.
- Your data still leaves your laptop in two specific ways:
- To Slack and GitHub (and optionally Jira/Linear) -- the original sources. You already trust them with this data.
- To whichever LLM provider you configured. Raw Slack message content and PR descriptions flow into their API for synthesis. This is the main remaining trust surface.
- What local-first buys you: no middleman vendor holds your data. The number of parties who see your content is two instead of three. Your team's data is never pooled with other customers'.
- What it does not buy you: it does not eliminate the LLM provider from your trust model. If your company forbids sending Slack messages to Anthropic or OpenAI, Keepr cannot help you today.
Read the full version in PRIVACY.md before you connect a real work Slack.
v0.1.x shipped the foundation: five workflows, four data sources, local memory, demo mode. v0.2.0 added evidence auditability (cards, confidence, timeline, heatmap, graph) and the follow-up tracker. v0.2.1 added the CLI and Claude Code plugin. See ROADMAP.md for the full picture and what's next.
This is a dogfood-first project. Before opening a PR, read CONTRIBUTING.md. The short version: use the app on real data for at least one session before proposing changes to prompts or pipeline behavior.
Please also review our CODE_OF_CONDUCT.md.
If you discover a security vulnerability, please report it responsibly. See SECURITY.md for the process and what's in scope.
MIT. See LICENSE.
Built with Tauri, React, and TypeScript

