Run AI agents in disposable sandboxes with full privacy
A local-first web application for running AI agents in disposable sandboxes, built on a custom agent execution engine written from the ground up.
🚀 Quick Start • 📖 Docs • 🐳 Docker • ☁️ Deploy
If you find AgentDocks useful, give it a star to show your support and help others discover it!
- 5-Step Onboarding: Welcome → Choose AI Provider → Configure API Keys → Select Sandbox → Confirm
- Chat Interface: Interact with AI agents through a familiar chat-like UI
- Multiple AI Providers: Support for Anthropic, OpenRouter, and Ollama
- Flexible Sandboxes: Run agents in E2B (cloud) or Docker (local) environments
- Local-First: Your data stays on your machine
- Real-Time Streaming: Watch your agents work step-by-step
- Custom Agent Engine: Fully self-contained, zero external agent framework dependencies
The core agent loop executes tasks with the following flow:
- Receive user query + optional files
- Create a sandbox (E2B cloud OR local Docker container)
- Define tools the AI can use: `bash`, `write`, `read`, `edit`, `glob`, `grep`
- Send the query + tool definitions to the AI provider
- Execute tool calls in sandbox and collect results
- Stream every step to frontend as SSE events
- Repeat until task is complete
- Cleanup and destroy sandbox
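In code, the loop above might look like the following Python sketch. All names here (`run_agent`, `provider.chat`, `sandbox.execute`) are illustrative, not the actual AgentDocks API:

```python
# Minimal sketch of the core agent loop described above.
# Names are illustrative -- not the real engine's interfaces.

def run_agent(provider, sandbox, query, tools, max_steps=20):
    """Ask the model, execute any tool calls in the sandbox, feed results back."""
    messages = [{"role": "user", "content": query}]
    for _ in range(max_steps):
        reply = provider.chat(messages, tools)        # send query + tool definitions
        if not reply.get("tool_calls"):               # no tool calls -> task is done
            return reply["text"]
        for call in reply["tool_calls"]:              # execute each tool in the sandbox
            result = sandbox.execute(call["name"], call["args"])
            messages.append({"role": "tool",
                             "name": call["name"],
                             "content": result})      # collect results for the next turn
    return "max steps reached"
```

In the real engine each iteration also streams SSE events to the frontend, and the sandbox is destroyed in a final cleanup step.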
- Next.js - React framework with App Router
- React - UI components
- Tailwind CSS - Styling
- TypeScript - Type safety
- FastAPI - Python web framework
- Custom Agent Engine - Built from scratch
- Anthropic SDK - Claude API integration
- httpx - HTTP client for OpenRouter/Ollama
- E2B Code Interpreter - Cloud sandbox execution
- Docker SDK - Local container management
- Anthropic - Claude models (Opus, Sonnet, Haiku)
- OpenRouter - Access to 300+ models through one API
- Ollama - Run open-source models locally
- E2B - Fast, secure cloud-based code execution
- Docker - Run sandboxes locally on your machine
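Each provider can sit behind a small common interface so the engine stays provider-agnostic. An illustrative sketch for Ollama (a hypothetical class, not the actual providers.py; the request body follows Ollama's `/api/chat` endpoint):

```python
# Hypothetical provider wrapper -- illustrative only, not the real providers.py.

class OllamaProvider:
    """Talks to a local Ollama server over HTTP (default Ollama port)."""

    def __init__(self, model: str, base_url: str = "http://localhost:11434"):
        self.model = model
        self.base_url = base_url

    def chat_payload(self, messages: list[dict]) -> dict:
        # Request body for Ollama's /api/chat endpoint.
        return {"model": self.model, "messages": messages, "stream": False}
```

An Anthropic or OpenRouter provider would expose the same surface while building its own request format underneath.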
```
agentdocks/
├── frontend/                  # Next.js application
│   ├── src/
│   │   ├── app/               # App router pages
│   │   ├── components/        # React components
│   │   ├── lib/               # Utilities and helpers
│   │   ├── hooks/             # Custom React hooks
│   │   └── types/             # TypeScript types
│   └── package.json
├── backend/                   # FastAPI application
│   ├── app/
│   │   ├── main.py            # FastAPI entry point
│   │   └── api/               # API routes
│   │       ├── agent.py       # Agent execution endpoints
│   │       ├── config.py      # Configuration endpoints
│   │       └── health.py      # Health check endpoints
│   ├── core/                  # AgentDocks Engine
│   │   ├── agent_runner.py    # Main agent loop
│   │   ├── providers.py       # AI provider abstraction
│   │   ├── sandbox.py         # Sandbox abstraction
│   │   ├── tools.py           # Tool definitions
│   │   ├── stream.py          # SSE streaming helpers
│   │   └── system_prompt.py   # Agent system prompt
│   ├── models/                # Pydantic models
│   │   └── schemas.py
│   └── requirements.txt
└── README.md
```
Choose your preferred installation method:
```bash
curl -fsSL https://raw.githubusercontent.com/LoFiTerminal/AgentDocks/main/scripts/install.sh | bash
```

Then run:

```bash
agentdocks
```

```bash
# Using docker-compose (recommended)
git clone https://github.com/LoFiTerminal/AgentDocks.git
cd AgentDocks
docker-compose up -d

# Or using Docker directly
docker run -d \
  -p 3000:3000 -p 8000:8000 \
  -v ~/.agentdocks:/root/.agentdocks \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ghcr.io/lofiterminal/agentdocks:latest
```

Open http://localhost:3000 in your browser.
Prerequisites:
- Node.js 18+ and npm
- Python 3.10+
- Docker (for local sandbox mode)
```bash
# Clone repository
git clone https://github.com/LoFiTerminal/AgentDocks.git
cd AgentDocks

# Install dependencies
make install

# Start both servers
make dev
```

Alternative (without make):

Frontend:

```bash
cd frontend
npm install
npm run dev
```

Backend:

```bash
cd backend
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
uvicorn app.main:app --reload
```

- Anthropic: Requires an API key from console.anthropic.com
- OpenRouter: Requires API key from openrouter.ai
- Ollama: Requires local Ollama installation
- E2B: Cloud-based sandboxes (requires E2B API key from e2b.dev)
- Docker: Local Docker containers (requires Docker daemon)
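Because both backends are hidden behind one interface, the agent loop never needs to know which sandbox it is running in. An illustrative sketch of such an abstraction (not the actual sandbox.py; `LocalEchoSandbox` is a hypothetical stand-in for the real E2B/Docker wrappers):

```python
# Illustrative sandbox abstraction -- not the real backend/core/sandbox.py.
from abc import ABC, abstractmethod

class Sandbox(ABC):
    """Common interface so the agent loop is backend-agnostic."""

    @abstractmethod
    def run_command(self, cmd: str) -> str: ...

    @abstractmethod
    def destroy(self) -> None: ...

class LocalEchoSandbox(Sandbox):
    """Hypothetical stand-in; real subclasses would wrap E2B or the Docker SDK."""

    def run_command(self, cmd: str) -> str:
        return f"ran: {cmd}"

    def destroy(self) -> None:
        pass  # real implementations kill the container / E2B session here

def create_sandbox(kind: str) -> Sandbox:
    # Real code would return an E2B-backed or Docker-backed sandbox here.
    if kind in ("e2b", "docker"):
        return LocalEchoSandbox()
    raise ValueError(f"unknown sandbox kind: {kind}")
```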
The AI agent has access to these tools in the sandbox:
- bash: Execute shell commands
- write: Create or overwrite files
- read: Read file contents
- edit: Edit files with string replacement
- glob: List files matching a pattern
- grep: Search file contents for patterns
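When sent to the provider, these tools become JSON-schema tool definitions. A hedged sketch of what two of them might look like in Anthropic's tool format (the actual schemas in tools.py may differ):

```python
# Hedged sketch of tool declarations in the JSON-schema style the
# Anthropic API accepts; the real definitions in tools.py may differ.

TOOLS = [
    {
        "name": "bash",
        "description": "Execute a shell command in the sandbox",
        "input_schema": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
    {
        "name": "write",
        "description": "Create or overwrite a file in the sandbox",
        "input_schema": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "content": {"type": "string"},
            },
            "required": ["path", "content"],
        },
    },
    # read, edit, glob, and grep follow the same shape
]
```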
- `POST /api/config` - Save onboarding configuration
- `GET /api/config` - Get current configuration
- `POST /api/agent/run` - Run an agent task (SSE stream)
- `POST /api/agent/run-with-files` - Run agent with file uploads (SSE stream)
- `GET /` - API status
- `GET /health` - Health check

The agent streams these events during execution:

- `status` - Status updates ("Creating sandbox...", etc.)
- `tool_use` - AI is using a tool
- `tool_result` - Tool execution result
- `text` - AI text response
- `file` - File was created/modified
- `error` - Error occurred
- `done` - Task complete, sandbox destroyed
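On the wire, each of these events is one Server-Sent Events frame. A minimal sketch of how such frames could be produced (the exact payload shape used by stream.py is an assumption):

```python
# Illustrative SSE framing helper; the payload shape stream.py actually
# emits may differ.
import json

def sse_event(event_type: str, data: dict) -> str:
    """Format one agent event as a Server-Sent Events frame."""
    payload = {"type": event_type, **data}
    return f"data: {json.dumps(payload)}\n\n"
```

The frontend can consume these frames with the browser's standard `EventSource`/fetch-streaming APIs and render each step as it arrives.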
```bash
make dev        # Start both servers in development mode
make build      # Production build
make docker     # Build Docker image
make docker-run # Run Docker container
make clean      # Remove build artifacts
make lint       # Run linters
make test       # Run tests
make stop       # Stop all services
```

```bash
# Run frontend
cd frontend && npm run dev

# Run backend
cd backend && source venv/bin/activate && uvicorn app.main:app --reload
```

The easiest way to deploy AgentDocks:

```bash
docker-compose up -d
```

Configuration is persisted in `~/.agentdocks/`.
Frontend (Vercel):
- Fork the repository on GitHub
- Import to Vercel
- Set environment variables:
  - `NEXT_PUBLIC_API_URL` - Your backend API URL
  - `NEXT_PUBLIC_SITE_URL` - Your frontend URL
- Deploy
Backend (Railway/Fly.io/Any Docker host):
- Deploy from Docker image or source
- Set environment variables:
  - `SITE_URL` - Your frontend URL
  - `ANTHROPIC_API_KEY`, `OPENROUTER_API_KEY` (optional)
  - `E2B_API_KEY` (optional, for cloud sandboxes)
- Expose port 8000
Shared Runs Cloud API (Optional):
See cloud-api/README.md for deploying the shared runs service to Railway with Cloudflare R2 storage.
```bash
# One-liner install users
curl -fsSL https://raw.githubusercontent.com/LoFiTerminal/AgentDocks/main/scripts/uninstall.sh | bash

# Manual uninstall
rm -rf ~/agentdocks
rm /usr/local/bin/agentdocks   # or ~/.local/bin/agentdocks
rm -rf ~/.agentdocks           # (optional) removes config and shared runs
```

MIT
Contributions are welcome! This is a fully open-source project with a custom-built agent engine.
We'd love your help making AgentDocks better. Here's how to contribute:
- Check existing issues: Browse GitHub Issues to see if your idea or bug is already reported
- Fork the repository: github.com/LoFiTerminal/AgentDocks
- Create your feature branch: `git checkout -b feature/amazing-feature`
- Commit your changes: `git commit -m 'Add amazing feature'`
- Push to the branch: `git push origin feature/amazing-feature`
- Open a Pull Request: Submit your PR with a clear description of changes
- 🐛 Report bugs: Found an issue? Open a bug report
- 💡 Suggest features: Have an idea? Open a feature request
- 📝 Improve docs: Documentation improvements are always appreciated
- 🧪 Add tests: Help us increase code coverage
- 🎨 UI/UX improvements: Make the interface even better
- 🔧 Code contributions: Fix bugs, add features, optimize performance
Built by the AgentDocks team as a fully self-contained, privacy-first agent execution platform.
Repository: github.com/LoFiTerminal/AgentDocks