albertfj114/HomeAIKit

HomeAI Kit

A complete, self-hosted, private AI stack that launches with a single docker compose up -d.

Run your own ChatGPT, RAG pipelines, private search, and AI workflows — entirely on your own hardware. No cloud APIs, no subscriptions, no data leaving your network.

What's Inside

Service      What it does
-----------  ------------------------------------------------------------------
Ollama       Runs LLMs locally (Llama 3, Mistral, Gemma, etc.)
Open WebUI   ChatGPT-like interface for your local models
ChromaDB     Vector database for chatting with your own documents
n8n          Visual workflow automation connecting everything
SearXNG      Private meta-search across Google, Bing, DuckDuckGo, with no tracking
PostgreSQL   Reliable database backend for n8n

Optional extras (one flag to enable):

  • LibreTranslate — ML-powered translation API
  • Redis — Caching layer for API responses
  • ChromaDB Admin — Visual management for your vector database

Quick Start

# 1. Configure
cp .env.example .env
nano .env                    # change POSTGRES_PASSWORD at minimum

# 2. Launch
docker compose up -d

# 3. Pull an AI model
./scripts/pull-models.sh llama3.2:3b    # ~2 GB download, 10-40 min

# 4. Use it
# Open WebUI:  http://localhost:3000
# n8n:         http://localhost:5678
# SearXNG:     http://localhost:8080
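
Once the containers are up, a quick way to confirm the three UIs are reachable is to probe each port with curl. This is a minimal sketch assuming the default ports listed above; any HTTP code other than 000 means the service answered:

```shell
# Probe each service; curl's %{http_code} prints 000 when the port is closed.
for url in http://localhost:3000 http://localhost:5678 http://localhost:8080; do
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url" || true)
  echo "$url -> HTTP $code"
done
```

Freshly started containers can take a minute to pass their health checks, so retry before assuming something is broken.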

Included Workflow Templates

5 pre-built n8n workflows ready to import:

  1. RAG Chat with Documents — Upload a PDF, ask questions, get answers citing your document
  2. Private Web Search — Search the web via SearXNG, get AI-summarized results
  3. Knowledge Base Ingest — Auto-embed documents into ChromaDB for later querying
  4. Web Scrape & Summarize — Give it a URL, get a clean AI summary back
  5. Translation Pipeline — Text in, translated text out (requires LibreTranslate)

See docs/WORKFLOWS.md for import instructions and usage.
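
At their core, several of these workflows reduce to a single HTTP call to Ollama. A hedged sketch of that call, using Ollama's /api/generate endpoint with streaming disabled (the model name and prompt here are placeholders, not values from the templates):

```shell
# Ask the local Ollama instance for a one-shot completion.
# Prints a fallback message if Ollama isn't running yet.
resp=$(curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.2:3b", "prompt": "Summarize: Docker Compose runs multi-container apps.", "stream": false}' \
  || echo "Ollama is not reachable on localhost:11434")
echo "$resp"
```

In n8n the same request is typically made with an HTTP Request node pointed at the Ollama container's hostname instead of localhost.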

Two Ways to Run

Option A: Full Stack (Recommended)

Everything starts with one command; optional services are enabled with --profile extras.

docker compose up -d                          # Core services
docker compose --profile extras up -d         # + LibreTranslate, Redis, ChromaDB Admin

Option B: Pick & Choose

Run only the stacks you need. Each is independent.

docker network create homeai-net              # Shared network (once)
cd stacks/llm && docker compose up -d         # Just Ollama + Open WebUI
cd stacks/rag && docker compose up -d         # Just ChromaDB
cd stacks/automation && docker compose up -d  # Just n8n
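
For the stacks to talk to each other, each per-stack compose file must join the shared network by declaring it external. A hypothetical fragment illustrating the idea (service and file layout are assumptions, not copied from the repo):

```yaml
# In a stack's docker-compose.yml: reuse the pre-created homeai-net
# network instead of letting Compose create an isolated one per stack.
networks:
  homeai-net:
    external: true

services:
  ollama:
    image: ollama/ollama
    networks:
      - homeai-net
```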

See the stacks/ directory for all options.

Synology NAS

Running on a Synology NAS? See synology/SYNOLOGY-GUIDE.md for DSM-specific setup.

Hardware Requirements

Setup                RAM     CPU       Storage
-------------------  ------  --------  --------
Minimum              8 GB    4 cores   20 GB
Recommended          16 GB   4+ cores  50 GB
Full (13B+ models)   32 GB+  8+ cores  100 GB+

See docs/HARDWARE.md for details and Synology model compatibility.

Documentation

  • docs/WORKFLOWS.md: workflow import instructions and usage
  • docs/HARDWARE.md: hardware details and Synology model compatibility
  • synology/SYNOLOGY-GUIDE.md: DSM-specific setup

Support

Questions or issues? Visit http://linkedbits.net and leave a message.
