When you start a new chat, a dedicated Docker container is automatically created just for that conversation. Your code runs in complete isolation - safe, secure, and reproducible.
| Languages | Package Managers | Build Tools |
|---|---|---|
| 🐍 Python 3.11 | 📦 pip / pip3 | 🔧 gcc / g++ |
| 🟢 Node.js 18 | 📦 npm / yarn | 🔧 make / cmake |
| 🦀 Rust (rustc) | 📦 cargo | 🔧 git |
| 💎 Ruby | 📦 gem | 🔧 curl / wget |
| 🐹 Go | 📦 go mod | 🔧 vim / nano |
| ☕ Java (OpenJDK) | 📦 maven | 🔧 jq / yq |
| Feature | Description |
|---|---|
| 💻 Full Bash Shell | Complete Linux environment |
| 🌐 Internet Access | Download packages, fetch data |
| 📁 Persistent Files | Files persist during conversation |
| 📦 Exportable | Save entire container as portable image |
| Resource | Range | Default |
|---|---|---|
| CPU Cores | 1-16 | 2 |
| Memory | 512MB - 32GB | 8GB |
| Storage | 1GB - 100GB | 10GB |
| Timeout | 0-3600s | 30s (0 = unlimited) |
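For reference, these limits map onto standard Docker flags. A hedged illustration of what the defaults correspond to (the app applies them for you; `--storage-opt` additionally depends on your storage driver):

```bash
# Roughly how the default limits (2 cores, 8 GB RAM, 10 GB disk) translate
# into a plain docker run - illustrative only; the app configures this.
# Note: --storage-opt size= only works with storage drivers that support it.
docker run -it \
  --cpus 2 \
  --memory 8g \
  --storage-opt size=10g \
  ai-code-executor:latest bash
```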
Full terminal access directly in your browser - no SSH needed, no port forwarding, just click and type.
```
root@container:~$ python --version
Python 3.11.9
root@container:~$ node --version
v18.19.0
root@container:~$ pip install pandas numpy matplotlib
Successfully installed pandas-2.2.0 numpy-1.26.4 matplotlib-3.8.3
root@container:~$ ls -la
total 16
drwxr-xr-x 1 root root 4096 Jan 15 10:30 .
-rw-r--r-- 1 root root 2048 Jan 15 10:30 script.py
-rw-r--r-- 1 root root 8192 Jan 15 10:30 data.csv
```

Terminal Features:
- 🪟 Multi-tab support - One terminal per conversation
- ↔️ Drag & resize - Floating window you can move around
- 🎨 Full color support - Syntax highlighting, colored output
- ⌨️ Keyboard shortcuts - Ctrl+C, Ctrl+D, arrow keys, tab completion
- 📜 Scrollback history - Review previous commands
- 🔄 Persistent session - Terminal stays open while chatting
Take your work anywhere - Export any conversation's container as a portable Docker image.
Built something cool? Export it and run it on another machine, share it with colleagues, or keep it as a backup.
- Click the 🐳 button on any conversation
- Confirm the export
- Download the `.tar` file from the Images panel
```bash
# Load the exported image
docker load < my-project_2025-01-15_143052.tar

# Run it
docker run -it my-project_2025-01-15_143052:latest bash

# You're back in your exact environment!
root@container:/workspace$ ls
script.py  data.csv  results/
```

| Feature | Description |
|---|---|
| 🐳 One-Click Export | Export button on every conversation |
| 📁 Image Manager | View, download, delete exported images |
| ⚙️ Custom Path | Configure export location in Settings |
| 📦 Full Environment | Includes all files, packages, and state |
Set a custom export path in Settings → Docker → Image Export Path
Or via environment variable: `DOCKER_EXPORT_PATH=./docker_images_exported`
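Conceptually, a one-click export snapshots the conversation's container and saves it as a tarball. A rough sketch of the equivalent manual commands (the app automates all of this; `<container-id>` is a placeholder):

```bash
# Conceptual equivalent of one-click export - the app does this for you.
# <container-id> stands in for the conversation's container.
docker commit <container-id> my-project_2025-01-15_143052:latest
docker save -o ./docker_images_exported/my-project_2025-01-15_143052.tar \
  my-project_2025-01-15_143052:latest
```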
Run AI completely locally - no API keys, no costs, no data leaving your machine.
```bash
# Install Ollama (one command)
curl -fsSL https://ollama.ai/install.sh | sh   # Linux
brew install ollama                            # macOS

# Pull models
ollama pull llama3          # General purpose (8B)
ollama pull llama3:70b      # More powerful (70B)
ollama pull codellama       # Optimized for code
ollama pull deepseek-coder  # Code specialist
ollama pull mistral         # Fast & efficient

# Models auto-detected in AI Code Executor!
```

Ollama Features:
- ✅ Auto-detection - Models appear in dropdown automatically
- ✅ No API key needed - Just install and go
- ✅ Offline capable - Works without internet
- ✅ Privacy first - Your code never leaves your machine
- ✅ Custom models - Import any GGUF model (see the sketch below)
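You can sanity-check both of those from a shell. Listing models uses the standard Ollama API; importing a GGUF file uses Ollama's normal Modelfile workflow (the file and model names below are placeholders):

```bash
# List installed models via the standard Ollama API -
# these are the models that appear in the dropdown
curl http://localhost:11434/api/tags

# Import a custom GGUF model (my-model.gguf and my-custom-model
# are placeholder names)
cat > Modelfile <<'EOF'
FROM ./my-model.gguf
EOF
ollama create my-custom-model -f Modelfile
```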
Talk instead of type - Whisper transcribes your voice to text in real time.
| Option | Description |
|---|---|
| 🖥️ Local Whisper | Runs on your machine, requires Python + openai-whisper, works offline |
| 🚀 Remote GPU Server | Point to your Whisper server for faster GPU-accelerated transcription |
Configuration: Settings → Features → Whisper Server URL
Example: `http://192.168.1.100:9000`
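If you point the app at a remote server, you can smoke-test it from the command line first. The exact endpoint depends on the Whisper server you run; this sketch assumes a typical ASR web service that accepts multipart audio uploads (the `/asr` path and `audio_file` field are assumptions, not this project's documented API):

```bash
# Hypothetical smoke test against a remote Whisper server.
# The /asr path, audio_file field, and recording.wav are all placeholders -
# check the API of the Whisper server you actually run.
curl -F "audio_file=@recording.wav" "http://192.168.1.100:9000/asr"
```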
When your code fails, AI Code Executor doesn't just show you the error - it automatically fixes it.
You: "Create a stock analysis dashboard"
| Step | Action | Result |
|---|---|---|
| 1 | 🤖 AI generates code | Code created |
| 2 | ⚡ Executing in Docker | ❌ ModuleNotFoundError: No module named 'pandas' |
| 3 | 🔧 Auto-fix 1/10 | Installing pandas... |
| 4 | ⚡ Re-executing | ❌ ModuleNotFoundError: No module named 'yfinance' |
| 5 | 🔧 Auto-fix 2/10 | Installing yfinance... |
| 6 | ⚡ Re-executing | ❌ ModuleNotFoundError: No module named 'plotly' |
| 7 | 🔧 Auto-fix 3/10 | Installing plotly... |
| 8 | ⚡ Re-executing | ✅ SUCCESS! Dashboard displayed |
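The loop behind this is straightforward: execute, capture the error, apply a fix, and retry up to the attempt limit. A minimal bash sketch of the idea (the real logic lives in `backend/main.py` and asks the AI for a fix rather than pattern-matching; this sketch only handles the common missing-module case):

```bash
# Minimal sketch of an auto-fix retry loop - illustrative only.
MAX_ATTEMPTS=10
for attempt in $(seq 1 "$MAX_ATTEMPTS"); do
  if output=$(python script.py 2>&1); then
    echo "Success on attempt $attempt"
    break
  fi
  # Pull the missing module name out of a ModuleNotFoundError and install it
  module=$(printf '%s\n' "$output" | sed -n "s/.*No module named '\([^']*\)'.*/\1/p" | head -n1)
  [ -n "$module" ] && pip install "$module"
done
```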
| Setting | Description | Range | Default |
|---|---|---|---|
| 🔧 Max Attempts | How many times to retry fixing errors | 1-20 | 10 |
| ⏱️ Execution Timeout | Maximum time per code execution | 0-3600s | 30s (0 = unlimited) |
Location: Settings → Features
Full control over how AI analyzes and fixes errors:
```
The code execution failed with the following error:

{errors}

Analyze the error carefully and provide ONLY the fixed code.
Do not explain - just provide working code.
If dependencies are missing, install them first with pip/npm.
```

💡 Use the `{errors}` placeholder - it gets replaced with the actual error output.
Location: Settings → Prompts → Auto-Fix Prompt Template
Shape how AI writes code for you:
Default prompt (optimized for code execution):
```
You are a professional coder who provides complete, executable code solutions.
Present only code, no explanatory text. Present code blocks in execution order.
If dependencies are needed, install them first using a bash script.
```
Example customizations you can add:
- "Always use Python 3.11 features"
- "Prefer async/await patterns"
- "Include comprehensive error handling"
- "Add logging to all functions"
- "Use type hints everywhere"
- "Write unit tests for all code"
Location: Settings → Prompts → System Prompt
| Provider | Key Format | Get Your Key |
|---|---|---|
| 🟣 Anthropic (Claude) | `sk-ant-api03-...` | console.anthropic.com |
| 🟢 OpenAI (GPT) | `sk-...` | platform.openai.com |
| 🔵 Google (Gemini) | `AIza...` | makersuite.google.com |
| ⚫ Ollama | Auto-detected | ollama.ai |
| 🎤 Whisper (Optional) | Server URL | Self-hosted |
Location: Settings → API Keys
| Setting | Range | Default | Description |
|---|---|---|---|
| CPU Cores | 1-16 | 2 | CPU cores per container |
| Memory | 512m-32g | 8g | RAM limit per container |
| Storage | 1g-100g | 10g | Disk space per container |
| Network | On/Off | On | Allow internet access |
Actions:
- 🗑️ Stop All Containers - Stop all running containers
- 🧹 Cleanup Unused - Remove stopped containers
Location: Settings → Docker
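These two actions map roughly onto standard Docker CLI commands (illustrative; the app runs the equivalents for you):

```bash
# Stop All Containers - stop everything currently running
docker stop $(docker ps -q)

# Cleanup Unused - remove stopped containers
docker container prune -f
```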
```bash
# ═══════════════════════════════════════════════════════════════════════════
# 🔑 API KEYS
# ═══════════════════════════════════════════════════════════════════════════
ANTHROPIC_API_KEY=sk-ant-... # Claude models
OPENAI_API_KEY=sk-... # GPT models
GEMINI_API_KEY=AIza... # Gemini models
# ═══════════════════════════════════════════════════════════════════════════
# 🦙 OLLAMA (Local AI)
# ═══════════════════════════════════════════════════════════════════════════
OLLAMA_HOST=http://localhost:11434 # Local or remote Ollama server
# ═══════════════════════════════════════════════════════════════════════════
# 🎤 WHISPER (Voice Input)
# ═══════════════════════════════════════════════════════════════════════════
WHISPER_SERVER_URL= # Remote Whisper GPU server (optional)
# ═══════════════════════════════════════════════════════════════════════════
# ⚡ EXECUTION SETTINGS
# ═══════════════════════════════════════════════════════════════════════════
DOCKER_EXECUTION_TIMEOUT=30 # Seconds (0 = unlimited)
AUTO_FIX_MAX_ATTEMPTS=10 # Retry attempts (1-20)
# ═══════════════════════════════════════════════════════════════════════════
# 🐳 DOCKER RESOURCE LIMITS
# ═══════════════════════════════════════════════════════════════════════════
DOCKER_CPU_CORES=2 # 1-16 cores
DOCKER_MEMORY_LIMIT=8g # 512m-32g RAM
DOCKER_STORAGE_LIMIT=10g # 1g-100g disk
DOCKER_EXPORT_PATH=./docker_images_exported # Where exported images are saved
# ═══════════════════════════════════════════════════════════════════════════
# 📝 PROMPTS (Customize AI behavior)
# ═══════════════════════════════════════════════════════════════════════════
SYSTEM_PROMPT=You are a professional coder...
AUTO_FIX_PROMPT=The code failed with:\n\n{errors}\n\nProvide fixed code only.
# ═══════════════════════════════════════════════════════════════════════════
# 🌐 SERVER
# ═══════════════════════════════════════════════════════════════════════════
HOST=0.0.0.0
PORT=8000
```

| File | Size | Actions |
|---|---|---|
| 📄 script.py | 2.4 KB | 👁️ View · ⬇️ Download |
| 📄 data.csv | 156 KB | 👁️ View · ⬇️ Download |
| 📄 requirements.txt | 0.3 KB | 👁️ View · ⬇️ Download |
| 📄 output.json | 12 KB | 👁️ View · ⬇️ Download |
| 📁 results/ | — | → Browse |
| 📄 chart.png | 89 KB | 👁️ View · ⬇️ Download |
Actions: 📤 Upload Files · 📥 Download All as ZIP
Features:
- 📤 Drag & drop upload into containers
- 👁️ Syntax-highlighted file preview
- ⬇️ Download individual files
- 📥 Bulk download as ZIP
- ↩️ Send output to AI input (one-click)
- 🔒 Large file protection (>1MB shows warning)
Mobile Features:
- 📱 Touch-optimized interface
- 🍔 Collapsible sidebar
- ⌨️ Keyboard-aware input area
- 🎤 Voice input support
```bash
# Clone
git clone https://github.com/Ark0N/AI-Code-Executor.git
# or
wget https://github.com/Ark0N/AI-Code-Executor/archive/refs/heads/main.zip
unzip main.zip

cd AI-Code-Executor

# Install (auto-detects OS & container runtime)
chmod +x INSTALL.sh && ./INSTALL.sh

# Start
./start.sh
```

🌐 Open http://localhost:8000
Run AI Code Executor entirely in Docker - no local Python installation required!
```bash
# Clone the repository
git clone https://github.com/Ark0N/AI-Code-Executor.git
cd AI-Code-Executor

# Create .env file with your API keys
cp .env.example .env
# Edit .env and add your API keys (ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.)

# Start with Docker Compose
docker compose up -d

# View logs
docker compose logs -f

# Open http://localhost:8000
```

| File | Purpose |
|---|---|
| `Dockerfile` | Sandbox container image (where user code runs) |
| `Dockerfile.app` | Main application container |
| `docker-compose.yml` | Complete deployment configuration |
| `.dockerignore` | Files excluded from build context |
The application uses Docker-in-Docker (via socket mounting):
```
┌─────────────────────────────────────────────────────────────┐
│                        Host Machine                         │
│  ┌───────────────────────────────────────────────────────┐  │
│  │              AI Code Executor Container               │  │
│  │ - FastAPI Backend                                     │  │
│  │ - Web Frontend                                        │  │
│  │ - Docker CLI (talks to host Docker)                   │  │
│  └───────────────────────────────────────────────────────┘  │
│                              │                              │
│                              │  /var/run/docker.sock        │
│                              ▼                              │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                     Docker Daemon                     │  │
│  │  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐    │  │
│  │  │ Sandbox #1  │  │ Sandbox #2  │  │ Sandbox #N  │    │  │
│  │  │  (Conv. 1)  │  │  (Conv. 2)  │  │  (Conv. N)  │    │  │
│  │  └─────────────┘  └─────────────┘  └─────────────┘    │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
```
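The essential trick is mounting the host's Docker socket into the app container, so sandboxes are created as siblings on the host daemon rather than nested inside it. A bare-bones equivalent of what docker-compose.yml sets up (illustrative; the compose file is the canonical configuration):

```bash
# Illustrative: the socket mount that makes sibling containers possible.
# docker-compose.yml is the real config; this is the minimal equivalent.
docker run -d \
  -p 8000:8000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ai-code-executor-app:latest
```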
Create a `.env` file in the project root:
```bash
# Required - At least one AI provider
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=AIza...

# Optional - Server settings
PORT=8000

# Optional - Docker resource limits for code execution
DOCKER_CPU_CORES=2
DOCKER_MEMORY_LIMIT=8g
DOCKER_STORAGE_LIMIT=10g
DOCKER_EXECUTION_TIMEOUT=30

# Optional - Ollama (if running on host)
OLLAMA_HOST=http://host.docker.internal:11434

# Optional - Remote Whisper server
WHISPER_SERVER_URL=
```

```bash
# Start in background
docker compose up -d
# Start with build (after code changes)
docker compose up -d --build
# View logs
docker compose logs -f
# View logs for specific service
docker compose logs -f ai-code-executor
# Stop containers
docker compose down
# Stop and remove volumes (deletes data!)
docker compose down -v
# Restart
docker compose restart
# Check status
docker compose ps
# Shell into running container
docker exec -it ai-code-executor bash
# Build sandbox image manually
docker build -t ai-code-executor:latest .
# Build app image manually
docker build -f Dockerfile.app -t ai-code-executor-app:latest .
```

If you're running Ollama on your host machine:
macOS / Windows (Docker Desktop):
```bash
# Ollama is accessible at host.docker.internal automatically
OLLAMA_HOST=http://host.docker.internal:11434
```

Linux:
```bash
# The docker-compose.yml includes extra_hosts for this
OLLAMA_HOST=http://host.docker.internal:11434

# Or use your machine's IP
OLLAMA_HOST=http://192.168.1.100:11434
```

Docker Compose creates named volumes for persistent data:
| Volume | Purpose | Path in Container |
|---|---|---|
| `ai-executor-data` | Database (conversations, settings) | `/app/data` |
| `ai-executor-exports` | Exported Docker images | `/app/docker_images_exported` |
To backup your data:
```bash
# Backup database
docker run --rm -v ai-executor-data:/data -v $(pwd):/backup alpine \
  tar cvf /backup/ai-executor-backup.tar /data

# Restore database
docker run --rm -v ai-executor-data:/data -v $(pwd):/backup alpine \
  tar xvf /backup/ai-executor-backup.tar -C /
```

Permission denied on Docker socket:
```bash
# On Linux, add your user to the docker group
sudo usermod -aG docker $USER
newgrp docker

# Or run with sudo
sudo docker compose up -d
```

Sandbox image not building:
```bash
# Build manually
docker build -t ai-code-executor:latest .

# Check if image exists
docker images | grep ai-code-executor
```

Container can't reach Ollama:
```bash
# Verify Ollama is running
curl http://localhost:11434/api/tags

# Test from inside container
docker exec -it ai-code-executor curl http://host.docker.internal:11434/api/tags
```

Port already in use:
```bash
# Change port in .env
PORT=8001

# Or stop conflicting service
sudo lsof -i :8000
```

| Platform | Status | Notes |
|---|---|---|
| Ubuntu / Debian | ✅ | apt |
| Fedora / RHEL | ✅ | dnf |
| Arch / Manjaro | ✅ | pacman |
| macOS Intel | ✅ | Homebrew |
| macOS Apple Silicon | ✅ | M1/M2/M3/M4 |
| Windows WSL2 | ✅ | Ubuntu recommended |
| Runtime | Status |
|---|---|
| Docker Desktop | ✅ Recommended |
| Docker Engine | ✅ |
| Podman | ✅ |
| Colima | ✅ |
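INSTALL.sh auto-detects which of these runtimes is available. A hedged sketch of that kind of detection (illustrative; see INSTALL.sh for the actual logic):

```bash
# Sketch of container-runtime detection - not INSTALL.sh itself.
if command -v docker >/dev/null 2>&1; then
  RUNTIME=docker
elif command -v podman >/dev/null 2>&1; then
  RUNTIME=podman
else
  echo "No supported container runtime found" >&2
  exit 1
fi
echo "Detected runtime: $RUNTIME"
```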
```bash
# Install Homebrew (if needed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install Docker
brew install --cask docker                   # Docker Desktop
# OR
brew install colima docker && colima start   # Colima (lightweight)
```

```bash
# Install Docker
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker
```

- ✅ Isolated Containers - Each chat runs in a separate Docker container
- ✅ Resource Limits - CPU, memory, storage caps prevent abuse
- ✅ API Key Encryption - Keys stored encrypted in database
- ✅ No Host Access - Code cannot escape container sandbox
- ✅ Auto Cleanup - Containers removed when done
- ✅ Network Control - Optional internet access restriction (see the sketch below)
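On the Docker side, the network restriction corresponds to running the sandbox without a network. A hedged illustration (`script.py` is a placeholder; the app toggles this via Settings → Docker → Network):

```bash
# Illustrative: Network "Off" roughly corresponds to --network none.
# script.py is a placeholder for whatever the sandbox executes.
docker run --rm --network none ai-code-executor:latest python script.py
```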
```
AI-Code-Executor/
├── backend/
│   ├── main.py              # FastAPI app, auto-fix logic, endpoints
│   ├── code_executor.py     # Docker container management
│   ├── anthropic_client.py  # Claude API integration
│   ├── openai_client.py     # GPT API integration
│   ├── gemini_client.py     # Gemini API integration
│   ├── ollama_client.py     # Local Ollama integration
│   ├── whisper_client.py    # Local Whisper voice input
│   ├── whisper_remote.py    # Remote Whisper GPU server
│   └── database.py          # SQLite async ORM
│
├── frontend/
│   ├── index.html           # Main UI
│   ├── app.js               # Application logic
│   └── style.css            # Styling
│
├── whisper/                 # Standalone Whisper server
├── docs/                    # Documentation
├── scripts/                 # Utility scripts
│
├── Dockerfile               # Sandbox container (code execution)
├── Dockerfile.app           # Application container
├── docker-compose.yml       # Docker Compose deployment
├── .dockerignore            # Docker build exclusions
├── INSTALL.sh               # Universal installer
├── start.sh                 # Start server
├── requirements.txt         # Python dependencies
└── .env.example             # Configuration template
```
- Fork the repository
- Create feature branch
- Make changes
- Submit pull request
See CONTRIBUTING.md
MIT License - see LICENSE
