# rapid-rag

Fast local RAG - search your documents with AI, no cloud needed.
## Installation

```bash
pip install rapid-rag
```

For PDF support:

```bash
pip install "rapid-rag[pdf]"
```

## Quickstart

```python
from rapid_rag import RapidRAG

# Create a RAG instance
rag = RapidRAG("my_documents")

# Add documents
rag.add("doc1", "The quick brown fox jumps over the lazy dog.")
rag.add_file("report.pdf")
rag.add_directory("./docs/")

# Semantic search
results = rag.search("fox jumping")
for r in results:
    print(f"{r['score']:.3f}: {r['content'][:100]}")

# RAG query with LLM (requires Ollama)
answer = rag.query("What does the fox do?", model="qwen2.5:7b")
print(answer["answer"])
```

## CLI

```bash
# Initialize a collection
rapid-rag init my_docs

# Add documents
rapid-rag add ./documents/ -c my_docs -r

# Search
rapid-rag search "query here" -c my_docs

# RAG query (requires Ollama)
rapid-rag query "What is X?" -c my_docs -m qwen2.5:7b

# Show collection info
rapid-rag info -c my_docs
```

## Provenance (TIBET)

Track every operation with cryptographic provenance:
```python
from rapid_rag import RapidRAG, TIBETProvider

# Enable TIBET tracking
tibet = TIBETProvider(actor="my_app")
rag = RapidRAG("docs", tibet=tibet)

# All operations now create provenance tokens
rag.add_file("report.pdf")
results = rag.search("query")
answer = rag.query("Question?")

# Get the provenance chain
tokens = tibet.get_tokens()
for t in tokens:
    print(f"{t.token_type}: {t.erachter}")
    print(f"  ERIN: {t.erin}")          # What happened
    print(f"  ERACHTER: {t.erachter}")  # Why
```

TIBET uses Dutch provenance semantics:
- ERIN: what's IN the action (the content)
- ERAAN: what's attached TO it (references)
- EROMHEEN: the context AROUND it
- ERACHTER: the intent BEHIND it
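Cryptographic provenance of this kind is typically made tamper-evident by hash-chaining each token to its predecessor. The sketch below illustrates that idea only; the `ProvenanceToken` class and its fields are hypothetical and not RapidRAG's actual implementation:

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class ProvenanceToken:
    token_type: str  # e.g. "add", "search", "query"
    erin: str        # what happened (content)
    erachter: str    # intent behind it
    prev_hash: str   # digest of the previous token, chaining the history

    def digest(self) -> str:
        payload = json.dumps([self.token_type, self.erin, self.erachter, self.prev_hash])
        return hashlib.sha256(payload.encode()).hexdigest()

def append_token(chain: list, token_type: str, erin: str, erachter: str) -> None:
    # Each new token commits to the digest of the one before it
    prev = chain[-1].digest() if chain else "0" * 64
    chain.append(ProvenanceToken(token_type, erin, erachter, prev))

def verify(chain: list) -> bool:
    """True only if no token was altered or reordered after the fact."""
    prev = "0" * 64
    for t in chain:
        if t.prev_hash != prev:
            return False
        prev = t.digest()
    return True

chain = []
append_token(chain, "add", "report.pdf ingested", "index document")
append_token(chain, "search", "query: fox", "retrieve context")
print(verify(chain))   # True for an untampered chain
chain[0].erin = "tampered"
print(verify(chain))   # False once any token is modified
```

Because every token commits to the digest of the previous one, editing any earlier entry breaks the chain from that point on.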
## Features

- Local-first: everything runs on your machine
- Fast: ChromaDB + sentence-transformers
- Simple API: add, search, query in 3 lines
- File support: .txt, .md, .pdf
- Chunking: automatic, with overlap
- LLM integration: works with Ollama
- TIBET: cryptographic provenance for all operations
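Overlapping chunking means long documents are split into windows that share some content with their neighbours, so a sentence spanning a boundary remains searchable from either chunk. A rough word-based sketch of the idea (the sizes and splitting strategy here are illustrative, not RapidRAG's exact parameters):

```python
def chunk_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into word windows of `chunk_size`, each sharing
    `overlap` words with the previous chunk."""
    words = text.split()
    step = chunk_size - overlap  # advance by chunk_size minus the overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # last window already covers the tail
    return chunks

# 120 words -> 3 chunks: words 0-49, 40-89, 80-119
chunks = chunk_text("word " * 120, chunk_size=50, overlap=10)
print(len(chunks))  # 3
```

The overlap trades a little index size for recall at chunk boundaries.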
## Requirements

- Python 3.10+
- For LLM queries: Ollama running locally
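Before running LLM queries, you can check that a local Ollama server is up. Ollama's HTTP API listens on port 11434 by default and exposes a `/api/tags` endpoint; this small stdlib-only probe (not part of rapid-rag) returns whether it responds:

```python
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def ollama_available(base_url: str = OLLAMA_URL) -> bool:
    """Return True if an Ollama server answers on its default port."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False  # connection refused, timeout, or no server

if __name__ == "__main__":
    print("Ollama reachable:", ollama_available())
```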
## License

MIT - Humotica