# Local AI

Self-hosted AI content generation service powered by Ollama

Features • Quick Start • API • Architecture • Contributing
Local AI is a self-hosted, multilingual content generation service that runs on your local machine, providing AI-powered Twitter thread generation and OG image creation for blog posts. Built with Next.js and powered by Ollama's qwen2.5:7b model, it offers a beautiful dashboard with language switching (German/English) to monitor generations and system status.
- **Privacy-first:** Your content never leaves your machine
- **Fast:** Local inference with automatic model loading/unloading
- **Transparent:** Full generation history and performance metrics
- **Beautiful:** Modern UI with shadcn/ui v4 and Tailwind CSS v4
- **Production-ready:** PM2 process management with automatic builds

## Features

- **AI Tweet Generation:** Create engaging Twitter threads from blog content
- **OG Image Generation:** AI-powered taglines for social media cards (1200×630)
- **Multilingual Support:** German & English support for both dashboard and API
- **Real-time Dashboard:** Monitor Ollama status, RAM usage, and generation history
- **SQLite Logging:** Complete history of all generations with performance metrics
- **Auto-scaling:** Model automatically loads/unloads based on usage
- **Zero-config Deployment:** Automated build process with portable paths
- **Modern UI:** Built with shadcn/ui components and dark mode
## Quick Start

### Prerequisites

- Node.js 22+
- Ollama installed and running
- macOS, Linux, or Windows WSL
```bash
# 1. Clone the repository
git clone https://github.com/utfcmac/local-ai.git
cd local-ai

# 2. Install dependencies (automatically sets up Git hooks)
npm install

# 3. Pull the Ollama model
ollama pull qwen2.5:7b

# 4. Start Ollama as a daemon
brew services start ollama    # macOS
# OR
systemctl start ollama        # Linux

# 5. Configure API key (see the Configuration section below for details)
# Generate a secure key: openssl rand -hex 32
echo 'LOCAL_AI_API_KEY=your-secret-key-here' > .env.local

# 6. Build the application
npm run build

# 7. Start with PM2 (loads env vars first!)
set -a && source .env.local && set +a
npm run pm2:start
pm2 save
```

The dashboard will be available at:

- German (default): http://localhost:3100/ or http://localhost:3100/de
- English: http://localhost:3100/en

Use the language switcher in the top-right corner to toggle between languages.

> 💡 **Important:** The `LOCAL_AI_API_KEY` must match the key configured in your external systems (e.g., blog platform). See the Configuration section for detailed setup instructions.
For development without PM2:

```bash
npm run dev
```

## API

### GET /api/health

No authentication required. Returns system and Ollama status.

**Response:**
```json
{
  "status": "ok",
  "ollamaRunning": true,
  "modelAvailable": true,
  "modelLoaded": false,
  "models": ["qwen2.5:7b"],
  "stats": {
    "total": 42,
    "success": 40,
    "error": 2,
    "avgDurationMs": 8500
  },
  "systemMemory": {
    "totalGb": 36,
    "freeGb": 12,
    "usedGb": 24
  }
}
```

### POST /api/generate
```
Content-Type: application/json
x-api-key: your-secret-key
```

**Request:**
```json
{
  "title": "My Blog Post Title",
  "excerpt": "A brief description",
  "content": "Full MDX content of your blog post...",
  "blogUrl": "https://example.com/blog/my-post",
  "blogPostId": "optional-uuid",
  "language": "de"
}
```

**Parameters:**

- `language` (optional): `"de"` or `"en"`. Generates prompts in the specified language (default: `"de"`).
**Response:**
```json
{
  "success": true,
  "mainTweet": "Engaging main tweet (max 280 chars)",
  "replyTweet": "Follow-up thought (max 255 chars)",
  "durationMs": 7500,
  "model": "qwen2.5:7b"
}
```

### POST /api/generate-image
```
Content-Type: application/json
x-api-key: your-secret-key
```

**Request:**
```json
{
  "title": "My Blog Post Title",
  "excerpt": "A brief description",
  "content": "Full MDX content...",
  "blogPostId": "optional-uuid",
  "language": "en"
}
```

**Parameters:**

- `language` (optional): `"de"` or `"en"`. Generates the tagline in the specified language (default: `"de"`).
**Response:** PNG image (1200×630) with AI-generated tagline
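Both generation endpoints accept the same `x-api-key` header and a JSON body. Below is a minimal client sketch using only Python's standard library; the base URL assumes the default port 3100, and the API key is a placeholder:

```python
# Hypothetical client for the generation endpoints described above.
# BASE_URL assumes the default port 3100; the API key is a placeholder.
import json
import urllib.request

BASE_URL = "http://localhost:3100"

def build_request(path: str, payload: dict, api_key: str) -> urllib.request.Request:
    """Build an authenticated POST request for /api/generate or /api/generate-image."""
    return urllib.request.Request(
        url=BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )

req = build_request(
    "/api/generate",
    {
        "title": "My Blog Post Title",
        "excerpt": "A brief description",
        "content": "Full MDX content of your blog post...",
        "language": "en",
    },
    api_key="your-secret-key-here",
)
# With the service running, send it like this:
# with urllib.request.urlopen(req) as res:
#     print(json.load(res)["mainTweet"])
```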
## Architecture

```
┌──────────────────────┐        ┌───────────────────────┐
│  Blog Platform       │  HTTP  │ Local AI (Port 3100)  │
│                      │        │                       │
│  ┌────────────────┐  │        │  ┌─────────────────┐  │
│  │  Content API   │──┼───────►│  │ /api/generate   │  │
│  └────────────────┘  │        │  └─────────────────┘  │
│                      │        │                       │
│  ┌────────────────┐  │        │  ┌─────────────────┐  │
│  │   OG Images    │──┼───────►│  │ /api/generate-  │  │
│  └────────────────┘  │        │  │     image       │  │
│                      │        │  └─────────────────┘  │
└──────────────────────┘        │                       │
                                │  ┌─────────────────┐  │
                                │  │  Ollama :11434  │  │
                                │  │   qwen2.5:7b    │  │
                                │  └─────────────────┘  │
                                │                       │
                                │  ┌─────────────────┐  │
                                │  │   SQLite DB     │  │
                                │  │   (History)     │  │
                                │  └─────────────────┘  │
                                └───────────────────────┘
```
| Category | Technology |
|---|---|
| Framework | Next.js 16 (App Router, Turbopack) |
| Language | TypeScript 5.7 |
| UI Library | shadcn/ui v4 |
| Styling | Tailwind CSS v4 |
| Internationalization | next-intl |
| LLM | Ollama (qwen2.5:7b) |
| Database | SQLite (better-sqlite3) |
| Process Manager | PM2 |
| Icons | Lucide React |
The build process is fully automated and portable across different machines:
When you run `npm run build`:

1. **Next.js Build:** Creates the optimized production bundle
2. **Post-build Script:** Automatically copies:
   - Static assets (`.next/static`) to the standalone build
   - Public files to the standalone build
   - Native modules (`better-sqlite3`) to the standalone build
3. **Dynamic Path Resolution:** All paths are resolved at build time from `.next/required-server-files.json`
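As an illustration, the path-resolution step could look like the sketch below. The `appDir` field name is an assumption about the layout of `required-server-files.json`; inspect your own build output before relying on it.

```python
# Sketch of dynamic path resolution: read the install location from the
# Next.js build output instead of hardcoding it.
# NOTE: the "appDir" key is an assumption; verify it against your build.
import json
from pathlib import Path

def resolve_app_dir(build_dir: str = ".next") -> Path:
    meta = json.loads(Path(build_dir, "required-server-files.json").read_text())
    return Path(meta["appDir"])
```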
```bash
# Works on any machine, any path!
git clone <repo> /any/path/you/want
npm install
npm run build      # Fully automated
npm run pm2:start
```

No hardcoded paths: the build system automatically detects and configures all paths.
```bash
# Development
npm run dev                # Start dev server (port 3100)

# Production
npm run build              # Build with automatic asset copying
npm run start              # Start Next.js server
npm run start:standalone   # Start standalone server directly

# PM2 Management
npm run pm2:start          # Start PM2 process
npm run pm2:restart        # Restart PM2 process
npm run pm2:stop           # Stop PM2 process
npm run pm2:logs           # View PM2 logs
```

Access the dashboard at http://localhost:3100 (German) or http://localhost:3100/en (English) to view:
- Language Switcher: Toggle between German (DE) and English (EN) in the top-right corner
- Ollama Status: Running/stopped, model loaded/available
- System Metrics: RAM usage (total/free/used)
- Generation Stats: Success rate, error rate, average duration
- History: Last 20 generations with details and status
All dashboard text automatically updates when switching languages, including status messages, labels, and descriptions.
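The totals, success rate, and average duration shown on the dashboard are simple aggregates over the `generations` table documented below. Here is a sketch of the same aggregation using Python's stdlib `sqlite3` against an in-memory copy trimmed to the relevant columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trimmed-down copy of the generations table (only the columns the stats need).
conn.execute(
    """CREATE TABLE generations (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           title TEXT NOT NULL,
           status TEXT NOT NULL DEFAULT 'pending',
           duration_ms INTEGER
       )"""
)
conn.executemany(
    "INSERT INTO generations (title, status, duration_ms) VALUES (?, ?, ?)",
    [("Post A", "success", 8000), ("Post B", "success", 9000), ("Post C", "error", None)],
)
# One query yields the total count, success count, and average success duration.
total, success, avg_ms = conn.execute(
    """SELECT COUNT(*),
              SUM(status = 'success'),
              AVG(CASE WHEN status = 'success' THEN duration_ms END)
       FROM generations"""
).fetchone()
print(total, success, avg_ms)  # 3 2 8500.0
```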
All generations are logged in `data/generations.db`:

```sql
CREATE TABLE generations (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  blog_post_id TEXT,
  title TEXT NOT NULL,
  type TEXT NOT NULL DEFAULT 'teaser',      -- 'teaser' or 'image'
  main_tweet TEXT,
  reply_tweet TEXT,
  tagline TEXT,
  model TEXT NOT NULL DEFAULT 'qwen2.5:7b',
  duration_ms INTEGER,
  status TEXT NOT NULL DEFAULT 'pending',   -- 'pending', 'success', 'error'
  error TEXT,
  created_at TEXT NOT NULL DEFAULT (datetime('now', 'localtime'))
);
```

## Configuration

Create a `.env.local` file:
```bash
# Required: API key for authentication
# This is YOUR secret key - external systems must send this in the x-api-key header
# Choose a strong, random string (e.g., openssl rand -hex 32)
LOCAL_AI_API_KEY=your-secret-key-here

# Optional: Custom data directory
LOCAL_AI_DATA_DIR=/path/to/data
```

The `LOCAL_AI_API_KEY` is your server-side secret that protects the API endpoints:
- **Generate a secure key:**

  ```bash
  openssl rand -hex 32
  # Example output: a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6...
  ```

- **Set it locally** (`.env.local`):

  ```bash
  LOCAL_AI_API_KEY=a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6
  ```

- **External systems must send it in requests:**

  ```bash
  curl -X POST http://your-server:3100/api/generate \
    -H "x-api-key: a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6" \
    -H "Content-Type: application/json" \
    -d '{"title": "...", "content": "..."}'
  ```

- **Same key on both sides:**
  - Your blog platform needs the same key configured
  - Without the correct key in the header, requests are rejected with `401 Unauthorized`
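On the server side, the check boils down to comparing the `x-api-key` header against `LOCAL_AI_API_KEY`. The real implementation lives in `src/lib/auth.ts`; the following is only a Python sketch of the assumed behavior, using a constant-time comparison:

```python
# Sketch of the API-key check (assumed behavior; see src/lib/auth.ts).
import hmac
import os

def is_authorized(headers: dict) -> bool:
    """Return True when the x-api-key header matches LOCAL_AI_API_KEY."""
    expected = os.environ.get("LOCAL_AI_API_KEY", "")
    provided = headers.get("x-api-key", "")
    # compare_digest avoids leaking key information via timing differences.
    return bool(expected) and hmac.compare_digest(provided, expected)

os.environ["LOCAL_AI_API_KEY"] = "a1b2c3d4"  # demo value only
print(is_authorized({"x-api-key": "a1b2c3d4"}))  # True
print(is_authorized({"x-api-key": "wrong"}))     # False -> 401 Unauthorized
```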
The `ecosystem.config.cjs` is fully dynamic and portable:

```javascript
// Automatically detects paths from Next.js build
const serverConfig = require('./.next/required-server-files.json');
// No hardcoded paths!
```

**Important:** PM2 needs access to environment variables. Make sure to load `.env.local` before starting:

```bash
set -a && source .env.local && set +a
npm run pm2:start
```

Endpoint authentication:

- `/api/generate`: Requires a valid `x-api-key` header
- `/api/generate-image`: Requires a valid `x-api-key` header
- `/api/health`: Public (no authentication), read-only system status
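For illustration, the `set -a && source .env.local && set +a` step amounts to exporting each `KEY=value` pair into the environment that PM2-managed processes inherit. A Python sketch of that behavior (a simplified parser, not a full dotenv implementation):

```python
import os

def load_env_file(path: str) -> None:
    """Export KEY=value pairs from a dotenv-style file into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip("'\"")
```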
- Generate a strong, random API key (minimum 32 characters)
- Never commit `.env.local` to version control (already in `.gitignore`)
- Use the same key on your blog platform and Local AI instance
- Consider using HTTPS if exposing to the internet (e.g., with a reverse proxy)
- Run on trusted networks only (LAN or VPN)
- All processing happens locally; no data is sent to external services
- SQLite database stored in the local `data/` directory
- Full control over your content and generation history
```
local-ai/
├── src/
│   ├── app/
│   │   ├── [locale]/
│   │   │   ├── page.tsx              # Dashboard (i18n)
│   │   │   └── layout.tsx            # Locale layout
│   │   ├── layout.tsx                # Root layout
│   │   ├── globals.css               # Tailwind v4 + shadcn theme
│   │   └── api/
│   │       ├── health/route.ts
│   │       ├── generate/route.ts
│   │       └── generate-image/route.tsx
│   ├── components/
│   │   ├── ui/                       # shadcn/ui components
│   │   └── LanguageSwitcher.tsx      # DE/EN toggle
│   ├── lib/
│   │   ├── auth.ts                   # API key validation
│   │   ├── db.ts                     # SQLite client
│   │   ├── ollama.ts                 # Ollama API wrapper (i18n prompts)
│   │   └── utils.ts                  # Utilities
│   ├── messages/
│   │   ├── de.json                   # German translations
│   │   └── en.json                   # English translations
│   ├── i18n.ts                       # next-intl config
│   └── middleware.ts                 # i18n routing
├── scripts/
│   ├── postbuild.js                  # Automated build copying
│   └── start-standalone.js           # Dynamic server starter
├── data/
│   └── generations.db                # SQLite database
├── docs/
│   └── dashboard-screenshot.png      # README screenshot
├── package.json
├── next.config.ts
├── tailwind.config.ts
├── components.json                   # shadcn/ui config
└── ecosystem.config.cjs              # PM2 config (dynamic)
```
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Follow the Conventional Commits format
- Update `CHANGELOG.md` under `[Unreleased]`
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
See CONTRIBUTING.md for detailed guidelines.
If you're an AI agent (Claude Code, Cursor, GitHub Copilot, etc.), please read:
- AI_DEVELOPMENT.md - Complete workflow guide
- .clinerules - Project-specific rules
Quick checklist:
- Use conventional commit messages
- Update `CHANGELOG.md` with every change
- Add i18n for all UI text (`de.json` + `en.json`)
- Test build before pushing
- Respond to git hook prompts appropriately
This project is licensed under the MIT License - see the LICENSE file for details.
TL;DR: You can do whatever you want with this code! Use it, modify it, distribute it, even commercially. No restrictions.
- Ollama - Local LLM runtime
- shadcn/ui - Beautiful UI components
- Next.js - React framework
- Tailwind CSS - Utility-first CSS
Made with ❤️ for local AI inference
