Local AI image generation powered by Ollama. Generate images from text prompts, use reference images for img2img editing, and browse your generation history — all running on your own hardware.
- Text-to-image with Z-Image Turbo (6B parameter model)
- Image-to-image with FLUX.2 Klein — single and multi-reference editing
- Streaming progress — real-time step-by-step progress via SSE
- Generation history with full metadata (prompt, model, dimensions, seed)
- Drag-and-drop — drag history images into the prompt as references
- Fully local — no cloud APIs, no data leaves your machine
- Runtime: Bun
- API: Hono with SSE streaming
- Frontend: React + Vite + shadcn/ui + Tailwind CSS v4
- Database: PostgreSQL via Drizzle ORM
- Validation: Zod
- Image Gen: Ollama (local inference)
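The history metadata listed above (prompt, model, dimensions, seed) might map to a record type like the following sketch. Field names and types here are illustrative assumptions, not the app's actual Drizzle schema:

```typescript
// Illustrative shape of a generation-history record, based on the
// metadata the README lists -- NOT the app's real Drizzle schema.
interface GenerationRecord {
  id: string;
  prompt: string;
  model: string; // e.g. "x/z-image-turbo"
  width: number;
  height: number;
  seed: number;
  createdAt: Date;
}

// Hypothetical helper: one-line summary for a history listing.
function summarize(r: GenerationRecord): string {
  return `${r.model} ${r.width}x${r.height} seed=${r.seed}: "${r.prompt}"`;
}
```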
- Bun (v1.0+)
- Docker (for PostgreSQL, or bring your own)
- macOS with Apple Silicon (Ollama image generation is currently macOS-only)
Ollama's image generation feature is experimental and uses the MLX framework on Apple Silicon. Getting it working requires the correct installation method.
Install via Homebrew:

```sh
brew install ollama
```

Homebrew builds mlx-c from source for your architecture, which avoids the `libmlxc.dylib` architecture mismatch that can occur with the .app or install script.
Known issue: the Ollama `.app` download and `curl -fsSL https://ollama.com/install.sh | sh` may ship an x86_64 `libmlxc.dylib` even on ARM Macs, causing `failed to initialize MLX: libmlxc.dylib not found` or `incompatible architecture` errors. If you hit this, uninstall and use Homebrew instead.
Homebrew installs `libmlxc.dylib` to `/opt/homebrew/lib/`, but Ollama's runner subprocess may not find it there. Fix by copying it next to the Ollama binary:

```sh
cp /opt/homebrew/lib/libmlxc.dylib /opt/homebrew/Cellar/ollama/$(brew info ollama --json | bun -e "console.log(JSON.parse(require('fs').readFileSync('/dev/stdin','utf8'))[0].versions.stable)")/bin/
```

Then restart Ollama:

```sh
brew services restart ollama  # or `brew services start ollama` if it isn't running yet
```
Pull the models:

```sh
# Text-to-image (12GB, recommended to start)
ollama pull x/z-image-turbo

# Image-to-image editing (6GB, optional)
ollama pull x/flux2-klein
```

Verify it works:

```sh
ollama run x/z-image-turbo "a red circle"
```

You should see a progress bar. If you see an MLX error, revisit the troubleshooting steps above.
If you previously installed Ollama via the .app and then via Homebrew, you may have two `ollama serve` processes running. The CLI connects to Homebrew's instance, but port 11434 may still be bound to the old one:

```sh
# Check for multiple instances
ps aux | grep "ollama serve"

# Kill the stale one (the /Applications/Ollama.app one)
kill <PID>
```
```sh
# Clone and install
git clone https://github.com/oddlantern/imagery.git
cd imagery
bun install

# Start PostgreSQL
cp .env.example .env
bun run db:up
bun run db:migrate

# Start dev servers
bun run dev
```

Open http://localhost:5173.
Run the full stack in containers (still requires Ollama on the host for GPU access):
```sh
cp .env.example .env
docker compose up --build
```

Open http://localhost:3000.
Update `OLLAMA_URL` in `.env` so the containerized API can reach your host Ollama:
| Platform | OLLAMA_URL value |
|---|---|
| macOS | http://host.docker.internal:11434 |
| Windows (WSL2) | http://host.docker.internal:11434 |
| Linux | http://172.17.0.1:11434 (or use --network=host) |
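The table above can be mirrored in a small helper for clarity. This function is purely illustrative (the `Platform` names are made up; the URLs come straight from the table) — in practice you just set `OLLAMA_URL` in `.env`:

```typescript
// Illustrative mapping of host platform to the OLLAMA_URL a container
// should use. Not part of the codebase; values copied from the table.
type Platform = "macos" | "windows-wsl2" | "linux";

function ollamaUrlForDocker(platform: Platform): string {
  switch (platform) {
    case "macos":
    case "windows-wsl2":
      // Docker Desktop exposes the host as host.docker.internal
      return "http://host.docker.internal:11434";
    case "linux":
      // Default docker0 bridge gateway (or run with --network=host)
      return "http://172.17.0.1:11434";
  }
}
```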
```
imagery/
├── apps/
│   ├── api/            # Hono REST API
│   ├── web/            # React + Vite frontend
│   └── shared/         # Shared Zod schemas
├── docker-compose.yml
└── storage/images/     # Generated images on disk
```
| Endpoint | Method | Description |
|---|---|---|
| `/api/generate` | POST | Generate image (SSE stream) |
| `/api/history` | GET | Paginated generation history |
| `/api/images/:file` | GET | Serve generated image |
| `/api/health` | GET | Health check |
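`POST /api/generate` streams progress as SSE, so a client reads `data:` lines off the response body. A minimal consumer sketch follows; the `data:` framing is standard SSE, but the JSON payload shape (`step`/`totalSteps`) is an assumption about this API, and the reader ignores chunk boundaries for brevity:

```typescript
// Parse the data payloads out of a raw SSE chunk. The "data:" line
// framing is standard SSE; the payload shape is an assumption.
// (Real SSE chunks can split mid-line; buffering is omitted here.)
function parseSseData(chunk: string): unknown[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => JSON.parse(line.slice(5).trim()));
}

// Hypothetical usage against the /api/generate endpoint from the table.
async function generate(prompt: string): Promise<void> {
  const res = await fetch("http://localhost:3000/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const evt of parseSseData(value)) console.log(evt);
  }
}
```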
| Command | Description |
|---|---|
| `bun run dev` | Start API + frontend dev servers |
| `bun run dev:api` | Start API only |
| `bun run dev:web` | Start frontend only |
| `bun run db:up` | Start PostgreSQL container |
| `bun run db:down` | Stop PostgreSQL container |
| `bun run db:migrate` | Run database migrations |
| `bun run test` | Run all tests |
MIT