UniverseScripts/local-rag-api


Local RAG API (Open Source Edition)

A strictly local, privacy-first RAG (Retrieval Augmented Generation) backend. It allows you to ingest PDF/TXT documents and chat with them using Ollama and ChromaDB.

Status: core logic (source edition; infrastructure is managed manually)

Features

  • FastAPI: Async endpoints for /ingest and /chat.
  • ChromaDB: Local vector storage.
  • Ollama: Uses local LLMs (Llama 3, Mistral, etc.).
  • Privacy: No data leaves your machine.
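Ingestion for a RAG pipeline like this typically splits each PDF/TXT document into overlapping chunks before embedding them into ChromaDB. A minimal sketch of that step (the function name and defaults are illustrative, not taken from this repo):

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding.

    Overlap keeps sentences that straddle a boundary retrievable
    from at least one chunk.
    """
    step = size - overlap
    chunks = []
    for start in range(0, len(text), step):
        piece = text[start:start + size]
        if piece.strip():
            chunks.append(piece)
    return chunks
```

Real implementations often split on sentence or paragraph boundaries instead of raw character counts, but the overlap idea is the same.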

Prerequisites (Manual Setup)

Since this is the source version, you must manage the infrastructure yourself:

  1. Install Ollama: Download it from ollama.com and run ollama serve.
  2. Install ChromaDB: Either run a local Chroma server or install the Python client (pip install chromadb).
  3. Python 3.10+: Ensure your venv is active.
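Before starting the API you can verify that Ollama is actually up: its root endpoint answers HTTP 200 ("Ollama is running") when the server is listening. A small preflight check, assuming Ollama's default port 11434 (adjust if you changed it):

```python
import urllib.error
import urllib.request


def ollama_running(host: str = "localhost", port: int = 11434,
                   timeout: float = 2.0) -> bool:
    """Return True if an Ollama server responds on its root endpoint."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: Ollama is not reachable.
        return False
```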

Quick Start (Source)

# 1. Install Dependencies
pip install -r requirements.txt

# 2. Pull the chat and embedding models (manual)
ollama pull llama3
ollama pull nomic-embed-text

# 3. Run the API
uvicorn src.main:app --reload
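Once uvicorn is running, the /chat endpoint can be called from Python. A hedged sketch: the {"query": ...} request body and "answer" response field below are assumptions, so check src/main.py for the actual schema.

```python
import json
import urllib.request

API = "http://localhost:8000"  # uvicorn's default bind address


def build_chat_request(question: str) -> urllib.request.Request:
    """Build a POST request for the /chat endpoint.

    The {"query": ...} body is an assumed schema; adjust to match
    the Pydantic model defined in src/main.py.
    """
    return urllib.request.Request(
        f"{API}/chat",
        data=json.dumps({"query": question}).encode(),
        headers={"Content-Type": "application/json"},
    )


def ask(question: str) -> str:
    """Send a question and return the (assumed) 'answer' field."""
    with urllib.request.urlopen(build_chat_request(question)) as resp:
        return json.load(resp).get("answer", "")
```

Ingestion works the same way against /ingest, typically as a multipart file upload of the PDF or TXT document.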

πŸš€ Want the "One-Click" Docker Version?

I maintain a Production-Ready Starter Kit that includes:

βœ… Full Docker Compose (Orchestrates API + Chroma + Ollama).

βœ… Production Dockerfile (Optimized, lightweight).

βœ… Environment Configs (Pre-set for Llama 3).

βœ… One-Command Run (docker-compose up).

πŸ‘‰ Get the Dockerized Starter Kit on Gumroad ($27)
