chatlens

A lens into your conversations. Track topics, extract action items, and generate intelligence reports from your chat history.

chatlens connects to your messaging platforms (WhatsApp, iMessage, Telegram), categorizes messages into topics using an LLM, and writes structured markdown reports — all locally on your machine.

Features

  • Multi-platform: WhatsApp (via linked device), iMessage (via imessage-exporter), and Telegram (via TDLib).
  • Pluggable LLM: Anthropic (Claude), OpenAI (GPT), or Ollama (fully local).
  • Topic tracking: Define topics with keywords. chatlens categorizes each message and summarizes per topic.
  • Action item extraction: Pulls out decisions, action items, and open questions.
  • Stale item alerts: Flags action items that haven't been resolved after 3+ days.
  • Trend detection: Shows which topics are heating up or going cold.
  • Dry-run mode: Preview analysis without writing files.
  • Structured observability: JSON logs, per-run metrics, cost tracking.
  • Eval framework: Test categorization accuracy and prompt injection resistance.
  • Webhook notifications: POST summaries to Slack, Discord, or any HTTP endpoint.
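
The stale-item check described above amounts to a simple age test. A minimal sketch, assuming a hypothetical `ActionItem` shape (not chatlens's actual type):

```typescript
// Hypothetical sketch of stale-item detection: an action item is "stale"
// when it is unresolved and older than a threshold (3 days by default).
// The ActionItem interface here is illustrative, not chatlens's real schema.
interface ActionItem {
  text: string;
  createdAt: Date;
  resolved: boolean;
}

const DAY_MS = 24 * 60 * 60 * 1000;

function findStaleItems(
  items: ActionItem[],
  now: Date = new Date(),
  thresholdDays = 3,
): ActionItem[] {
  return items.filter(
    (item) =>
      !item.resolved &&
      now.getTime() - item.createdAt.getTime() >= thresholdDays * DAY_MS,
  );
}
```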

Quick Start

# Clone and install
git clone https://github.com/ITeachYouAI/chatlens.git
cd chatlens
npm install

# Interactive setup — creates chatlens.json
npx tsx bin/chatlens.ts init
Welcome to chatlens setup!

? Your name: Tim
? Chat platform: WhatsApp
? Contact name to track: Paul Shell
? Use default topics (General Business, Action Items, Decisions)? Yes
? LLM provider: Anthropic (Claude)
? Output directory: ./reports/paul-shell

Config written to chatlens.json

Next steps:
  1. chatlens server start    (start WhatsApp daemon)
  2. Scan QR code with phone
  3. chatlens list            (verify connection)
  4. chatlens track --dry-run  (preview analysis)
  5. chatlens track           (run full pipeline)

WhatsApp Setup

# Start the WhatsApp daemon (runs headless Chromium locally)
npx tsx bin/chatlens.ts server start
Starting WhatsApp daemon...
WhatsApp daemon started (PID: 12345)
PID file: /Users/you/.chatlens/whatsapp.pid
Check status: chatlens server status

Scan the QR code with WhatsApp on your phone (Settings > Linked Devices > Link a Device).

# Verify connection
npx tsx bin/chatlens.ts server status
Status: Connected
Phone: +1234567890
PID: 12345
# List available chats
npx tsx bin/chatlens.ts list
# Available Chats

- **Paul Shell** [DM] (2026-03-04) — "sounds good, let's sync tomorrow"
  ID: 12345678@c.us
- **Dev Team** [Group] (2026-03-03) — "PR is up for review"
  ID: 98765432@g.us
- **Seth Rubio** [DM] (2026-03-01) — "got it, will check the reports"
  ID: 55512345@c.us

47 chats total

Running the Pipeline

# Preview analysis without writing any files
npx tsx bin/chatlens.ts track --dry-run
Loaded 3 topics for "Paul Shell"
Last run: 2026-03-02T14:00:00.000Z
Found chat: Paul Shell (12345678@c.us)
Fetching messages after 2026-03-02T14:00:00.000Z...
Fetched 42 messages
New messages after dedup: 38
Period: 2026-03-02 to 2026-03-04
Stage 1: Categorizing messages...
Categorized: 3 active topics, 1 emerging

[DRY RUN] Categorization results:
  01_general-business: 15 messages
  02_action-items: 12 messages
  03_decisions: 8 messages
  Emerging: Hiring Strategy
# Run the full pipeline — writes markdown reports
npx tsx bin/chatlens.ts track
Loaded 3 topics for "Paul Shell"
Found chat: Paul Shell (12345678@c.us)
Fetching messages after 2026-03-02T14:00:00.000Z...
Fetched 42 messages
New messages after dedup: 38
Period: 2026-03-02 to 2026-03-04
Stage 1: Categorizing messages...
Categorized: 3 active topics, 1 emerging
Stage 2: Summarizing 3 topics in parallel...
  Summarizing 01_general-business (15 messages)...
  Summarizing 02_action-items (12 messages)...
  Summarizing 03_decisions (8 messages)...
Writing topic files...
  Wrote 01_general-business.md
  Wrote 02_action-items.md
  Wrote 03_decisions.md
  Wrote decisions.md
  Wrote _index.md
State saved.

Done! 38 messages processed across 3 topics.
Reports: ./reports/paul-shell/topics/

--- Run Report ---
Run ID:    2026-03-04T18-30-00-000Z
Adapter:   whatsapp
Provider:  anthropic (claude-haiku-4-5-20251001)
Messages:  42 total, 38 new, 35 categorized, 3 skipped
LLM calls: 4 (1 categorize + 3 summarize)
Tokens:    ~12,400 input / ~2,100 output
Duration:  8.2s
# Track a specific contact
npx tsx bin/chatlens.ts track --contact "Seth"

# Export raw chat to JSON (last 7 days)
npx tsx bin/chatlens.ts export "Paul Shell" --since 7d
Exporting chat: Paul Shell
Fetched 89 messages
Exported to: ./Paul_Shell_2026-03-04.json
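
The `--since 7d` cutoff can be understood as subtracting a duration from the current time. A sketch of that idea, assuming a hypothetical parser (chatlens's actual flag handling may support other units or formats):

```typescript
// Hypothetical sketch of turning a "--since" value like "7d" or "24h" into a
// cutoff Date. This is illustrative; chatlens's real parser may differ.
function sinceToDate(spec: string, now: Date = new Date()): Date {
  const m = /^(\d+)([dh])$/.exec(spec);
  if (!m) throw new Error(`Unsupported --since value: ${spec}`);
  const n = Number(m[1]);
  const ms = m[2] === "d" ? n * 24 * 60 * 60 * 1000 : n * 60 * 60 * 1000;
  return new Date(now.getTime() - ms);
}
```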

iMessage Setup (macOS only)

# Install the exporter
brew install imessage-exporter

# Grant Full Disk Access:
# System Settings > Privacy & Security > Full Disk Access > add Terminal

Set "adapter": "imessage" in your chatlens.json. Use chatId with a phone number for DMs:

{
  "name": "Seth Rubio",
  "chatId": "+12125551234",
  "adapter": "imessage"
}

Telegram Setup

# Get API credentials from https://my.telegram.org/apps
# Then authenticate:
npx tsx bin/chatlens.ts auth telegram
Telegram requires API credentials from https://my.telegram.org/apps

? Telegram API ID: 12345678
? Telegram API Hash: abcdef1234567890

Connecting to Telegram...
? Enter the verification code sent to your Telegram: 54321
Telegram authenticated and session saved!
Session stored at: ~/.chatlens/telegram-session
You can now run: chatlens list

Configuration

Configuration lives in chatlens.json in your project directory. Do not commit this file — it contains contact names and may contain API credentials.

{
  "owner": "Your Name",
  "contacts": [
    {
      "name": "Contact Name",
      "adapter": "whatsapp",
      "topics": [
        {
          "id": "01_product",
          "name": "Product Strategy",
          "description": "Product direction, roadmap, feature decisions",
          "keywords": ["product", "roadmap", "feature", "MVP"]
        }
      ],
      "outputDir": "./reports/contact-name"
    }
  ],
  "llm": {
    "provider": "anthropic",
    "model": "claude-haiku-4-5-20251001",
    "apiKey": "$ANTHROPIC_API_KEY"
  },
  "adapters": {
    "whatsapp": {
      "port": 7700,
      "sessionDir": "~/.chatlens/whatsapp-session"
    }
  }
}

Environment variables in config values (like $ANTHROPIC_API_KEY) are resolved at runtime.
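
One way that resolution could work is a regex substitution against `process.env`. A minimal sketch, assuming chatlens resolves bare `$NAME` references (the real resolver may also support `${NAME}` syntax or fail loudly on missing variables):

```typescript
// Illustrative sketch of resolving "$ANTHROPIC_API_KEY"-style config values
// from the environment at runtime. Not chatlens's actual implementation.
function resolveEnvVars(value: string): string {
  return value.replace(
    /\$([A-Z_][A-Z0-9_]*)/g,
    (match, name) => process.env[name] ?? match, // leave unresolved names as-is
  );
}
```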

See templates/config.example.json for a full example with all adapter options.

Output

After running chatlens track, you'll find:

reports/contact-name/
  topics/
    _index.md          # Dashboard with all topics, action items, trends
    01_product.md      # Per-topic file with chronological updates
    02_finance.md
    decisions.md       # Cross-topic decision log
  .chatlens/
    runs/
      2026-03-04T10-30-00.metrics.json   # Per-run metrics
      2026-03-04T10-30-00.jsonl          # Structured log
  .chatlens-state.json                    # Dedup state (seen message IDs)
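
The dedup state works by remembering message IDs that were already processed. A minimal sketch of that idea (the `Message` shape and field names are assumptions, not chatlens's actual state schema):

```typescript
// Minimal sketch of ID-based deduplication: keep a set of previously seen
// message IDs and filter incoming messages against it. Fields are illustrative.
interface Message {
  id: string;
  text: string;
}

function dedupMessages(messages: Message[], seenIds: Set<string>): Message[] {
  const fresh = messages.filter((m) => !seenIds.has(m.id));
  for (const m of fresh) seenIds.add(m.id); // persisted to state after the run
  return fresh;
}
```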

The _index.md dashboard looks like:

# Paul Shell — Topic Tracker Dashboard
*Last updated: 2026-03-04*

## Stale Action Items
*2 items unresolved for 3+ days*

- [ ] **Product Strategy** (5d): Finalize pricing tiers for launch
- [ ] **Action Items** (3d): Send Seth the updated API docs

## Active Topics

| # | Topic | Messages | Trend | Status |
|---|-------|----------|-------|--------|
| 01 | General Business | 15 |  | Updated |
| 02 | Action Items | 12 |  | Updated |
| 03 | Decisions | 8 |  | Updated |

## Emerging Topics
*These topics were detected but not in the roster.*

- Hiring Strategy
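
Splitting categorizer output into roster topics versus emerging ones can be sketched as a set-membership check (the label-based interface is an assumption about how chatlens represents this, not its actual API):

```typescript
// Hypothetical sketch: any topic label returned by the categorizer that is
// not in the configured roster is reported as "emerging".
function splitTopics(
  labels: string[],
  roster: Set<string>,
): { active: string[]; emerging: string[] } {
  const active: string[] = [];
  const emerging: string[] = [];
  for (const label of new Set(labels)) {
    (roster.has(label) ? active : emerging).push(label);
  }
  return { active, emerging };
}
```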

LLM Providers

Anthropic (default)

{ "provider": "anthropic", "model": "claude-haiku-4-5-20251001", "apiKey": "$ANTHROPIC_API_KEY" }

OpenAI

{ "provider": "openai", "model": "gpt-4o-mini", "apiKey": "$OPENAI_API_KEY" }

Ollama (local, no API key needed)

{ "provider": "ollama", "model": "llama3.1", "apiKey": "" }

Observability

# Show run history
npx tsx bin/chatlens.ts stats
# Run History (last 5)

| Run | Messages | LLM Calls | Tokens | Cost | Duration |
|-----|----------|-----------|--------|------|----------|
| 2026-03-04 18:30 | 38 | 4 | 14,500 | $0.003 | 8.2s |
| 2026-03-02 14:00 | 22 | 3 | 8,200 | $0.002 | 5.1s |
| 2026-02-28 09:00 | 51 | 5 | 19,800 | $0.004 | 11.4s |
# Structured JSON logging for piping to other tools
npx tsx bin/chatlens.ts track --json-logs

Eval Framework

# Test categorization accuracy against fixtures
npx tsx bin/chatlens.ts eval categorize

# Test prompt injection resistance
npx tsx bin/chatlens.ts eval injection

# Save current output as golden baseline
npx tsx bin/chatlens.ts eval snapshot

# Diff current output against golden baseline
npx tsx bin/chatlens.ts eval diff

# Full eval scorecard
npx tsx bin/chatlens.ts eval all

Scheduling

# Run every 2 days at 09:00 via cron (cron's hour field cannot express a
# 48-hour interval; use a day-of-month step instead)
0 9 */2 * * cd /path/to/chatlens && npx tsx bin/chatlens.ts track >> ~/.chatlens/logs/track.log 2>&1

Or use the provided setup scripts:

  • macOS: bash scripts/setup-macos.sh (generates launchd plist)
  • Linux: bash scripts/setup-linux.sh (generates systemd timer)

Webhook Notifications

Add to chatlens.json to POST summaries after each run:

{
  "notifications": {
    "webhook": {
      "url": "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
    }
  }
}

Works with Slack, Discord, or any endpoint that accepts JSON POST.
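
A delivery along these lines could look like the following sketch. The payload fields (`contact`, `messagesProcessed`, `topics`) are hypothetical, not chatlens's documented webhook schema:

```typescript
// Hypothetical sketch of posting a run summary as JSON to a webhook URL.
// Payload shape is illustrative only. Requires Node 18+ for global fetch.
interface RunSummary {
  contact: string;
  messagesProcessed: number;
  topics: string[];
}

function buildWebhookBody(summary: RunSummary): string {
  return JSON.stringify(summary);
}

async function notifyWebhook(url: string, summary: RunSummary): Promise<boolean> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildWebhookBody(summary),
  });
  return res.ok; // caller decides whether a failed notification is fatal
}
```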

All Commands

chatlens init                              Interactive setup wizard
chatlens list                              List available chats
chatlens track                             Run topic tracker pipeline
chatlens track --dry-run                   Preview without writing files
chatlens track --contact "Name"            Track a specific contact
chatlens track --json-logs                 Structured JSON logging
chatlens track --no-notify                 Skip webhook notification
chatlens export "Name" --since 7d          Export raw chat to JSON
chatlens server start                      Start WhatsApp daemon
chatlens server status                     Check daemon health
chatlens server stop                       Stop WhatsApp daemon
chatlens auth telegram                     Authenticate Telegram
chatlens stats                             Show run history
chatlens stats --last 10                   Last N runs
chatlens eval categorize                   Categorization accuracy eval
chatlens eval injection                    Prompt injection resistance
chatlens eval snapshot                     Save golden output
chatlens eval diff                         Diff against golden output
chatlens eval all                          Full eval scorecard
chatlens help                              Show help
chatlens --version                         Show version

Security

chatlens feeds your chat messages to an LLM API. Read SECURITY.md for details on:

  • Prompt injection mitigation
  • Privacy and data handling
  • Blast radius analysis
  • Scraping ethics

TL;DR: Use Ollama for fully local processing. Use --dry-run with untrusted chats. The worst case is corrupted local reports — chatlens can never send messages or access files outside its config.

Development

git clone https://github.com/ITeachYouAI/chatlens.git
cd chatlens
npm install

# Run locally
npx tsx bin/chatlens.ts help

# Type check
npm run typecheck

# Run tests
npm test

# Lint + format
npm run lint:fix && npm run format

Requirements

  • Node.js 20+
  • macOS (primary) or Linux
  • For WhatsApp: Chromium (installed automatically by puppeteer)
  • For iMessage: brew install imessage-exporter + Full Disk Access
  • For Telegram: API credentials from https://my.telegram.org/apps

License

MIT
