🧠 Self-Hosted AI-Powered Knowledge Stack

A production-ready, self-hosted AI knowledge platform integrating Obsidian, AnythingLLM, Ollama, and MCP agents to enable intelligent note augmentation, RAG, web search, and contextual AI — all fully local and privacy-preserving.


βš™οΈ Overview

This system is a live deployment running on Unraid, designed to make my Obsidian vault AI-interactive.
AnythingLLM serves as the centralized AI frontend, orchestrating agents and LLM responses.
Ollama provides local inference with open-source models, ensuring data never leaves the server.
Through MCP agents, AnythingLLM connects directly to Obsidian and the web to enhance, summarize, and link knowledge intelligently.
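As a sketch of how a script or external tool could drive this stack, the snippet below builds a chat request against AnythingLLM's developer API (v1 workspace chat endpoint). The host, port, API key, and workspace slug (`obsidian-vault`) are illustrative assumptions, not values from this deployment; a real key is generated in AnythingLLM's settings.

```python
import json
from urllib import request

ANYTHINGLLM_URL = "http://unraid.local:3001"  # assumed host/port for the container
API_KEY = "YOUR_API_KEY"                      # generated in AnythingLLM settings
WORKSPACE = "obsidian-vault"                  # assumed workspace slug

def build_chat_request(message: str, mode: str = "chat") -> request.Request:
    """Build a POST request for AnythingLLM's v1 workspace chat endpoint."""
    url = f"{ANYTHINGLLM_URL}/api/v1/workspace/{WORKSPACE}/chat"
    body = json.dumps({"message": message, "mode": mode}).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Summarize my notes on local LLM inference.")
# Against a live instance: urllib.request.urlopen(req) returns the agent's reply
```

Because AnythingLLM fronts the whole stack, a single authenticated endpoint like this is enough to reach the MCP agents, the vault, and Ollama behind it.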


🧰 Technology Stack

  • Unraid — Host and Docker orchestration
  • Obsidian (Dockerized) — Central Markdown knowledge base
  • AnythingLLM — AI orchestration and frontend interface
  • Ollama — Local LLM runtime (offline model serving)
  • MCP Server / Agents — Bridge connecting AnythingLLM to Obsidian, web tools, and RAG workflows

πŸ” Key Features

  • Fully operational deployment on Unraid integrating Obsidian, AnythingLLM, and Ollama
  • Obsidian MCP Server integration enabling AI-driven note access, summarization, and augmentation
  • Configured web search, web scraping, and RAG workflows to merge live data with internal notes
  • Automated semantic linking and structured documentation using local LLM processing
  • Private by design: all operations and model inference occur locally with no third-party dependencies

πŸ— Architecture Overview

```
User Device (Browser / Mobile)
 │
 ▼
AnythingLLM (UI & Agent Orchestrator)
 │
 ▼
MCP Agents ↔ Obsidian Vault (search, read, augment)
 │
 ▼
Ollama (Local LLM Inference)
 │
 ▼
Web Search / Scraper Tools → RAG Context Integration
```

  • MCP servers registered via anythingllm_mcp_servers.json
  • AnythingLLM automatically starts MCP services when needed
  • Ollama hosts and manages open-source models locally for offline AI capability
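For reference, an `anythingllm_mcp_servers.json` entry generally follows the standard `mcpServers` schema used across MCP clients. The sketch below is illustrative only: the server name, launch command, package, and environment variable are placeholder assumptions, not this deployment's actual configuration.

```json
{
  "mcpServers": {
    "obsidian": {
      "command": "npx",
      "args": ["-y", "example-obsidian-mcp-server"],
      "env": {
        "OBSIDIAN_API_KEY": "YOUR_KEY"
      }
    }
  }
}
```

AnythingLLM reads this file from its storage directory and launches each listed server on demand, which is what makes the "starts MCP services when needed" behavior possible.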

πŸ” Security & Privacy

  • Fully self-contained within local Unraid infrastructure
  • Access routed through the Identity & Access Management (IAM) Suite:
    • Authentik for centralized authentication
    • Vaultwarden for credentials
    • 2FAuth for TOTP MFA
    • Nginx Reverse Proxy with Cloudflare DNS and Wildcard SSL
  • Remote administration via Tailscale VPN (WireGuard-based), with SSH access disabled

🧰 Deployment Notes

  1. All containers deployed via Docker on Unraid
  2. Volumes mapped for persistent data (Obsidian vaults, LLM models, logs)
  3. AnythingLLM configured with custom MCP server JSON entries
  4. Ollama running multiple local LLM models (e.g., Mistral, Llama3, Phi3)
  5. Reverse proxy configured using IAM Suite for secure subdomain access (link)
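The container layout in steps 1–4 can be sketched in Compose style (on Unraid these are typically separate Docker templates rather than one Compose file). Image names are the official ones; the volume paths, port mappings, and AnythingLLM environment variables are assumptions to be checked against the current AnythingLLM and Ollama documentation.

```yaml
# Illustrative sketch only — paths, ports, and env vars are assumptions
services:
  ollama:
    image: ollama/ollama
    volumes:
      - /mnt/user/appdata/ollama:/root/.ollama   # persistent model store
    ports:
      - "11434:11434"

  anythingllm:
    image: mintplexlabs/anythingllm
    environment:
      - LLM_PROVIDER=ollama
      - OLLAMA_BASE_PATH=http://ollama:11434
    volumes:
      - /mnt/user/appdata/anythingllm:/app/server/storage  # MCP JSON lives here
    ports:
      - "3001:3001"
```

Mapping the model store and AnythingLLM storage to the array is what makes model downloads and MCP configuration survive container updates.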

🚀 Use Cases

  • Local AI-assisted research, writing, and note organization
  • RAG-based contextual search across personal documents
  • Homelab deployment showcasing local LLM + automation integration
  • Private AI workspace with full user control over data and access

📦 License

Licensed under the MIT License — feel free to adapt or build upon this setup.


🏷️ Tags

ai • self-hosted • knowledge-management • obsidian • anythingllm • ollama • mcp • rag • docker • privacy • homelab • llm • unraid
