
feat: RAG pipeline for privacy-first document Q&A #21

Open
wydrox wants to merge 1 commit into main from feat/rag-pipeline

Conversation


@wydrox wydrox commented Mar 26, 2026

Summary

  • Adds a complete RAG (Retrieval-Augmented Generation) pipeline for chatting with local documents -- the #1 use case for local LLMs
  • New ppmlx/rag.py module with document loader (30+ file types), recursive text chunker, SQLite-backed vector store, and RAG chain
  • Four new CLI commands: ppmlx rag ingest <path>, ppmlx rag chat, ppmlx rag list, ppmlx rag rm <collection>
  • Privacy-first: all embeddings and generation happen locally; no data leaves the machine
  • 37 new tests covering all components with mocked embeddings (no GPU required)

Key Design Decisions

  • Uses SQLite for vector storage (consistent with existing db.py patterns), with cosine similarity computed in pure Python
  • Supports collections so users can organize different document sets
  • Change detection via SHA-256 hashing skips unchanged files on re-ingestion
  • Embedding model is configurable per collection (defaults to embed:all-minilm)
  • No new dependencies beyond what ppmlx already uses
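The "cosine similarity computed in pure Python" decision above is easy to picture. A minimal sketch (the function name and exact shape are illustrative, not taken from ppmlx/rag.py) might look like:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors of floats."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0  # define similarity involving a zero vector as 0
    return dot / (norm_a * norm_b)
```

Because embeddings from small models like all-minilm are only a few hundred dimensions, a pure-Python loop over the stored chunks is typically fast enough to avoid pulling in a dedicated vector-database dependency.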

CLI Usage

# Ingest documents
ppmlx rag ingest ./my-docs --collection myproject

# Chat with your documents
ppmlx rag chat --collection myproject --model llama3

# List collections
ppmlx rag list

# Remove a collection
ppmlx rag rm myproject

Test plan

  • 37 unit tests pass: chunking, vector encoding, cosine similarity, document loading, file discovery, vector store CRUD, ingest pipeline, retrieval, prompt building, CLI integration
  • Full test suite (212 tests) passes with no regressions
  • Code review and cleanup completed (removed unused import, deduplicated test helper, fixed cursor-level row_factory)

🤖 Generated with Claude Code

Implements a complete retrieval-augmented generation pipeline that lets
users ingest documents and chat with them using local models. No data
leaves the machine.

Components:
- Document loader supporting 30+ file types (txt, md, py, pdf, etc.)
- Recursive character text splitter with configurable chunk size/overlap
- SQLite-backed vector store with cosine similarity search
- RAG chain: retrieve top-k chunks, inject into prompt, generate response
- CLI commands: ppmlx rag ingest/chat/list/rm
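The text splitter listed above can be approximated with a simple sliding window. The real splitter is recursive (preferring paragraph and sentence boundaries before falling back to character offsets), so this is only a simplified stand-in with illustrative names:

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into chunks of at most chunk_size characters, each
    sharing `overlap` characters with the previous chunk. Simplified
    sliding-window stand-in for a recursive character splitter."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
        start += chunk_size - overlap
    return chunks
```

The overlap keeps a sentence that straddles a chunk boundary retrievable from at least one of the two neighboring chunks.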

37 new tests covering chunking, vector encoding, cosine similarity,
document loading, vector store CRUD, ingest pipeline with change
detection, retrieval, prompt building, and CLI integration.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

