An open-source background coding agent. Designed to understand, reason about, and contribute to existing codebases.
Sets up isolated execution environments for AI agents to work on GitHub repositories, with tools to understand code, edit files, run terminal commands, and search the codebase.
- GitHub OAuth sign-in via Better Auth
- GitHub App integration for repository access and installations
- Secure token management with automatic refresh
- Support for both personal access tokens (dev) and GitHub App (production)
- GitHub repository integration with branch management
- Pull request generation with AI-authored commits
- Real-time task status tracking and progress updates
- Automatic workspace setup and cleanup on Micro-VMs
- Kata QEMU containers for hardware-level isolation
- Multi-provider LLM support (Anthropic, OpenAI, OpenRouter)
- OpenRouter models: Grok Code Fast, Kimi K2, Codestral, Devstral 2 (FREE), DeepSeek R1, Qwen3 Coder
- Streaming chat interface with real-time responses
- Tool execution with file operations, terminal commands, and code search
- Memory system for repository-specific knowledge retention
- Semantic code search via Pinecone
- Morph SDK integration for fast code editing
- Custom rules for code generation
- Convex is the primary datastore for tasks/messages/todos/memories/tool calls.
- Phase 2 added Convex-native chat streaming (streamChat, streamChatWithTools) plus presence/activity + tool-call tracking tables; hooks are available via useConvexChatStreaming and usePresence (see the sketch after this list). The task UI now calls startStreamWithTools when Convex streaming is enabled.
- Sidecar supports Convex-native mode (file changes, tool logs, terminal output, workspace status) via USE_CONVEX_NATIVE=true and CONVEX_URL.
- Hybrid fallback remains: legacy Socket.IO sidecar events and REST are still available while UI wiring catches up. Use NEXT_PUBLIC_USE_CONVEX_REALTIME=true (frontend) to opt into Convex streaming.
- Provider routing now prefers OpenRouter (first-party), with Anthropic/OpenAI fallbacks and abortable cancellation; presence cleanup runs via convex/crons.ts.
- Streaming tools (file/terminal/search/memory/todos) execute through the tool API and are tracked in the agentTools table. Some OpenRouter models (notably Kimi) may emit tool-input-* parts without streaming args; streamChatWithTools includes CLI-friendly fallbacks to recover args and still execute tools reliably.
- Enhanced task creation with automatic Convex streaming integration, background initialization, and comprehensive error handling.
- Multi-provider API key management with secure server-side resolution for Anthropic, OpenAI, and OpenRouter.
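For orientation, here is a minimal sketch of how those hooks might be wired into a task view. The import path, hook signatures, and argument shapes are assumptions for illustration, not the actual frontend API:

```tsx
// Hypothetical wiring of useConvexChatStreaming / usePresence in a task view.
// Import path and return shapes are assumptions, not the repo's real exports.
"use client";
import { useConvexChatStreaming, usePresence } from "@/hooks/convex"; // assumed path

export function TaskChat({ taskId }: { taskId: string }) {
  // Subscribe to the Convex-backed message stream for this task.
  const { messages, startStreamWithTools, isStreaming } =
    useConvexChatStreaming(taskId);
  // Report activity so the presence cleanup cron can expire stale sessions.
  usePresence(taskId);

  return (
    <button
      disabled={isStreaming}
      onClick={() => startStreamWithTools({ prompt: "Summarize this repo" })}
    >
      {isStreaming ? "Streaming…" : `Send (${messages.length} messages)`}
    </button>
  );
}
```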
- Set frontend env: NEXT_PUBLIC_CONVEX_URL=<your convex>, NEXT_PUBLIC_USE_CONVEX_REALTIME=true, NEXT_PUBLIC_API_URL=http://localhost:4000.
- Start services: npx convex dev, npm run dev --filter=server, then restart the frontend with npm run dev --filter=frontend (or npm run dev:app).
- Create a new task in the UI (ensures Prisma + Convex rows), send a message, and watch the stream.
- If you see “Could not find public function…”, run npx convex dev --until-success --once to regenerate/deploy functions.
- Deploy (prod URL): npx convex deploy --url <your_convex_url> -y
- Run: npx convex run streaming:streamChatWithTools --deployment-name <deployment> '<json args>'
- Recommended “cheap + reliable” OpenRouter model for tool-calling verification: moonshotai/kimi-k2-thinking
- If you need a deterministic workspace on the tool server for CLI tests, create a scratch task and set workspacePath to /app (present in the Railway tool container), then run file/terminal tools against that task (see the sketch below).
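If you script that scratch-task setup, a hedged sketch follows. The workspacePath field is named in the docs above, but treat the exact Prisma model and field names as assumptions:

```ts
// One-off script: point a scratch task's workspace at /app for CLI tool tests.
// Assumes the Prisma schema has a Task model with a workspacePath column
// (the model/field names here are assumptions based on the docs above).
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function main() {
  const taskId = process.argv[2];
  if (!taskId) throw new Error("usage: tsx set-scratch-workspace.ts <taskId>");
  await prisma.task.update({
    where: { id: taskId },
    data: { workspacePath: "/app" }, // /app exists in the Railway tool container
  });
}

main().finally(() => prisma.$disconnect());
```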
Opulent Code supports two execution modes through an abstraction layer:
- Direct filesystem execution on the host machine
- Hardware-isolated execution in Kata QEMU containers
- True VM isolation via QEMU hypervisor
- Kubernetes orchestration with bare metal nodes
Mode selection is controlled by the NODE_ENV and AGENT_MODE environment variables.
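The exact wiring lives in the server app; below is a minimal sketch of what such a selection layer can look like, assuming illustrative class and function names that are not the repo's actual exports:

```ts
// Illustrative executor selection based on NODE_ENV / AGENT_MODE.
interface Executor {
  exec(cmd: string): Promise<string>;
}

class LocalExecutor implements Executor {
  async exec(cmd: string) {
    // Direct filesystem execution on the host machine.
    return `local: ${cmd}`;
  }
}

class RemoteExecutor implements Executor {
  async exec(cmd: string) {
    // Forward to the sidecar inside a Kata QEMU container.
    return `remote: ${cmd}`;
  }
}

export function createExecutor(): Executor {
  // AGENT_MODE wins if set; otherwise fall back on NODE_ENV.
  const mode =
    process.env.AGENT_MODE ??
    (process.env.NODE_ENV === "production" ? "remote" : "local");
  return mode === "remote" ? new RemoteExecutor() : new LocalExecutor();
}
```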
Opulent Code is deployed with:
- Backend: Railway (Node.js server + PostgreSQL)
- Frontend: Railway or Vercel (Next.js)
- Frontend: https://shadow-frontend-production-373f.up.railway.app
- Backend: https://shadow-clean-production.up.railway.app
# Login to Railway
railway login
# Deploy backend
railway up
# Or use automated script
./auto-deploy.sh
Prerequisites:
- Railway CLI: npm install -g @railway/cli
- Vercel CLI: npm install -g vercel
- Frontend (apps/frontend/) - Next.js application with real-time chat interface, terminal emulator, file explorer, and task management
- Server (apps/server/) - Node.js orchestrator handling LLM integration, WebSocket communication, task initialization, and API endpoints
- Sidecar (apps/sidecar/) - Express.js service providing REST APIs for file operations within isolated containers
- Website (apps/website/) - Marketing and landing page
- Database (packages/db/) - Prisma schema and PostgreSQL client with comprehensive data models
- Types (packages/types/) - Shared TypeScript type definitions for the entire platform
- Command Security (packages/command-security/) - Security utilities for command validation and sanitization
- ESLint Config (packages/eslint-config/) - Shared linting rules
- TypeScript Config (packages/typescript-config/) - Shared TypeScript configurations
- Node.js 22
- PostgreSQL
- Clone the repository and install dependencies:
git clone <repository-url>
cd shadow
npm install
- Set up environment variables:
# Copy example environment files
cp apps/server/.env.template apps/server/.env
cp apps/frontend/.env.template apps/frontend/.env
cp packages/db/.env.template packages/db/.env
- Configure the database:
# Create local PostgreSQL database
psql -U postgres -c "CREATE DATABASE shadow_dev;"
# Update packages/db/.env with your database URL
DATABASE_URL="postgres://postgres:@127.0.0.1:5432/shadow_dev"
# Generate Prisma client and push schema
npm run generate
npm run db:push
- Start development servers:
# Start frontend + backend together (recommended for local dev)
npm run dev:app
# Or start specific services
npm run dev --filter=frontend
npm run dev --filter=server
npm run dev --filter=sidecar
Set variables in the following files:
- Frontend: apps/frontend/.env.local
- Server: apps/server/.env
- Database: packages/db/.env
NEXT_PUBLIC_SERVER_URL=http://localhost:4000
NEXT_PUBLIC_BYPASS_AUTH=false
# Better Auth
BETTER_AUTH_SECRET=<generate-with-openssl-rand-hex-32>
# GitHub OAuth App (create at github.com/settings/developers)
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
# GitHub App (for repo installations)
GITHUB_APP_ID=
GITHUB_PRIVATE_KEY=<base64-encoded-pem>
GITHUB_APP_SLUG=opulent-code
# Database
DATABASE_URL=postgresql://...
DATABASE_URL=postgresql://...
NODE_ENV=development
AGENT_MODE=local
API_PORT=4000
WORKSPACE_DIR=/path/to/workspace
# GitHub credentials (must match frontend)
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
GITHUB_APP_ID=
GITHUB_PRIVATE_KEY=
GITHUB_APP_SLUG=opulent-code
# Optional integrations
PINECONE_API_KEY=
PINECONE_INDEX_NAME=opulentcode
MORPH_API_KEY=
DATABASE_URL=postgresql://...
DIRECT_URL=postgresql://...
Set NEXT_PUBLIC_BYPASS_AUTH=true in the frontend to skip GitHub OAuth during local development.
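Since NEXT_PUBLIC_* variables are inlined by Next.js at build time, the bypass can be checked anywhere in the frontend. A minimal sketch of such a guard (the helper name is an assumption, not the app's actual auth code):

```ts
// Illustrative guard honoring NEXT_PUBLIC_BYPASS_AUTH during local development.
// Next.js inlines NEXT_PUBLIC_* vars at build time, so this also works client-side.
const bypassAuth =
  process.env.NODE_ENV !== "production" &&
  process.env.NEXT_PUBLIC_BYPASS_AUTH === "true";

export function isAuthenticated(session: unknown): boolean {
  if (bypassAuth) return true; // treat everyone as signed in during local dev
  return session != null;
}
```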
# Lint all packages and apps
npm run lint
# Format code with Prettier
npm run format
# Type checking
npm run check-types
See agents.md for the current agent flow, Convex chat integration, workspace initialization, and requirements for the real-time terminal/file editor.
# Generate Prisma client from schema
npm run generate
# Push schema changes to database (for development)
npm run db:push
# Reset database and push schema (destructive)
npm run db:push:reset
# Open Prisma Studio GUI
npm run db:studio
# Run migrations in development
npm run db:migrate:dev
# Build all packages and apps
npm run build
# Build specific app
npm run build --filter=frontend
npm run build --filter=server
npm run build --filter=sidecar
Shadow provides a comprehensive set of tools for AI agents (one is sketched after the list below):
- read_file - Read file contents with line range support
- edit_file - Write and modify files
- search_replace - Precise string replacement
- delete_file - Safe file deletion
- list_dir - Directory exploration
- grep_search - Pattern matching with regex
- file_search - Fuzzy filename search
- semantic_search - AI-powered semantic code search
- run_terminal_cmd - Command execution with real-time output
  - Command validation and security checks
- todo_write - Structured task management
- add_memory - Repository-specific knowledge storage
- list_memories - Retrieve stored knowledge
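Tool lists like this are typically exposed to the model as typed tool definitions. A minimal sketch of a read_file-style tool using the Vercel AI SDK's tool() helper and zod (v4-style parameters field shown; newer SDK versions rename it. The execute body is illustrative, not the repo's implementation):

```ts
import { tool } from "ai";
import { z } from "zod";
import { readFile } from "node:fs/promises";

// Illustrative read_file tool: line-range support mirrors the description above.
export const readFileTool = tool({
  description: "Read file contents with optional line range",
  parameters: z.object({
    path: z.string(),
    startLine: z.number().int().optional(),
    endLine: z.number().int().optional(),
  }),
  execute: async ({ path, startLine, endLine }) => {
    const lines = (await readFile(path, "utf8")).split("\n");
    // Lines are 1-indexed; slice's exclusive end makes endLine inclusive here.
    return lines
      .slice((startLine ?? 1) - 1, endLine ?? lines.length)
      .join("\n");
  },
});
```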
- TypeScript throughout with strict type checking
- Shared configurations via packages
- Clean separation between execution modes
- WebSocket event compatibility across frontend/backend (see the sketch after this list)
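For the WebSocket compatibility point, one common approach is to keep a single typed event map in packages/types and hand it to Socket.IO on both sides; a minimal sketch with assumed event names:

```ts
// Illustrative shared WebSocket event maps (packages/types would be the
// natural home; these event names are assumptions, not the repo's actual events).
export interface ServerToClientEvents {
  "task:status": (payload: { taskId: string; status: "running" | "done" }) => void;
  "terminal:output": (payload: { taskId: string; chunk: string }) => void;
}

export interface ClientToServerEvents {
  "task:subscribe": (taskId: string) => void;
}

// Socket.IO accepts these maps as type parameters on both client and server,
// so a renamed or retyped event breaks compilation instead of failing at runtime.
```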
- Command validation in all execution modes
- Path traversal protection
- Workspace boundary enforcement (see the sketch after this list)
- Container isolation in remote mode
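As one concrete example, path traversal protection and workspace boundary enforcement usually reduce to resolving every user-supplied path against the workspace root and rejecting escapes. A minimal sketch (the function name is illustrative):

```ts
import path from "node:path";

// Illustrative workspace boundary check: reject any path that escapes
// the task's workspace root via ".." segments or absolute paths.
export function resolveInWorkspace(workspaceRoot: string, userPath: string): string {
  const root = path.resolve(workspaceRoot);
  const resolved = path.resolve(root, userPath);
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error(`Path escapes workspace: ${userPath}`);
  }
  return resolved;
}
```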
- Always test both local and remote modes for production features
- Keep initialization steps mode-aware and properly abstracted
- Maintain WebSocket event compatibility across frontend/backend changes
- Remote mode requires Amazon Linux 2023 nodes for Kata Containers compatibility
- Fork the repository
- Create a feature branch
- Make your changes with proper TypeScript types
- Test in both local and remote modes
- Submit a pull request
We're excited to see what you build with Shadow!
Ishaan Dey — X