ClawBrowser is a local-first browser-automation layer for the OpenClaw/OpenKrab ecosystem. It provides safe, containerized web automation without visible browser windows, featuring domain isolation, content validation, and learning integration.
- Container Isolation: Docker/Podman sandbox with no host filesystem access (containers/browser.Dockerfile).
- Domain Security: Whitelist/blacklist validation with network restrictions (safety_guard.py).
- Smart Automation: Natural language commands → Playwright actions → structured results (run_task.py).
- Content Validation: HTML/JS scanning, file quarantine, size limits.
- Learning Integration: ClawSelfImprove pattern learning + ClawMemory snapshot storage.
- Thai Optimization: Enhanced support for Thai websites and language commands.
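The domain whitelist/blacklist validation in safety_guard.py can be sketched roughly as follows. The function name, rule format, and example domains here are illustrative assumptions, not the actual safety_guard.py API:

```python
from urllib.parse import urlparse

# Hypothetical whitelist/blacklist rules; the real safety_guard.py
# configuration format may differ.
ALLOWED = {"set.or.th", "wikipedia.org"}
BLOCKED = {"malware.example"}

def is_domain_allowed(url: str) -> bool:
    """Return True if the URL's host (or a parent domain) is whitelisted
    and not blacklisted."""
    host = urlparse(url).hostname or ""
    # Consider the host itself and every parent domain,
    # e.g. www.set.or.th also matches set.or.th.
    parts = host.split(".")
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    if candidates & BLOCKED:
        return False
    return bool(candidates & ALLOWED)
```

Checking parent domains means a whitelist entry like `set.or.th` covers subdomains such as `www.set.or.th` without wildcards.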
- browser/tasks
- containers/browser.Dockerfile
- safety_guard.py
- run_task.py
- ClawSelfImprove: Action pattern learning from execution logs (learn_from_run.py).
- ClawMemory: Screenshot and content storage for semantic search (store_memory.py).
- Privacy First: 100% local execution, no cloud browser services required.
- Optional Stealth Mode: Anti-detection techniques for challenging websites.
```mermaid
flowchart TD
    A([Natural Language Command]) --> B{Parse Command}
    B -->|Thai/English| C[Action Generation]
    B -->|YAML Task| D[Task Validation]
    C --> E[Safety Guard Check]
    D --> E
    E -->|Domain Blocked| F[Security Violation]
    E -->|Safe| G[Docker Container Launch]
    G --> H[Playwright Browser Session]
    H --> I{Action Type}
    I -->|Navigate| J[URL Load]
    I -->|Interact| K[Click/Type/Scroll]
    I -->|Extract| L[Text/HTML/Table]
    I -->|Capture| M[Screenshot]
    I -->|Download| N[File Validation]
    J --> O[Content Scan]
    K --> O
    L --> O
    M --> O
    N --> O
    O --> P[Result Processing]
    P --> Q[ClawMemory Storage]
    P --> R[ClawSelfImprove Logging]
    P --> S[Structured Output]
    F --> T[Error Response]
```
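The Action Type branch in the flow above amounts to a dispatcher that maps parsed actions onto a Playwright-style page object. The sketch below is an assumption about how run_task.py might do this (the action schema mirrors the YAML examples in this README), not its actual implementation:

```python
def run_action(page, action: dict):
    """Dispatch one parsed action to a Playwright-like page object."""
    kind = action["type"]
    if kind == "navigate":
        return page.goto(action["url"])
    if kind == "click":
        return page.click(action["selector"])
    if kind == "type":
        return page.fill(action["selector"], action["text"])
    if kind == "extract_text":
        return page.inner_text(action["selector"])
    if kind == "screenshot":
        return page.screenshot(path=action["filename"])
    raise ValueError(f"Unsupported action type: {kind}")
```

Keeping each action a plain dict with a `type` key lets the same dispatcher serve both natural-language-generated actions and YAML task files.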
- Python 3.8+
- Docker or Podman
- 2GB RAM minimum
- Internet connection
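The checkable prerequisites can be verified with a short script before installing. This helper is illustrative only and not part of the repository:

```python
import shutil
import sys

def check_prerequisites() -> list:
    """Return a list of missing prerequisites for ClawBrowser
    (illustrative check only; RAM and connectivity are not verified here)."""
    missing = []
    if sys.version_info < (3, 8):
        missing.append("Python 3.8+")
    # Either container runtime is acceptable.
    if not (shutil.which("docker") or shutil.which("podman")):
        missing.append("Docker or Podman")
    return missing
```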
```bash
# Clone repository
git clone https://github.com/openkrab/ClawBrowser.git
cd ClawBrowser

# Install Python dependencies
pip install -r requirements.txt

# Build browser container
docker build -t claw-browser -f containers/browser.Dockerfile containers/

# Install Playwright browsers
python -m playwright install

# Test installation
python scripts/run_task.py --test safety
```

```bash
# One-click install
clawflow install claw-browser

# Verify installation
clawflow list | grep claw-browser
```

Add to your OpenClaw configuration:
```yaml
plugins:
  - name: claw-browser
    path: ~/.openclaw/skills/claw-browser
    enabled: true
    config:
      container_runtime: docker
      default_timeout: 30
      safety_level: high
```

```bash
export CLAW_BROWSER_CONFIG=~/.openclaw/skills/claw-browser/config.yaml
export CLAW_BROWSER_DOCKER_IMAGE=claw-browser:latest
export PLAYWRIGHT_BROWSERS_PATH=~/.cache/ms-playwright
```
```bash
# Natural language commands
python scripts/run_task.py "ไปที่ set.or.th แล้ว screenshot ตารางหุ้น"  # "Go to set.or.th, then screenshot the stock table"
python scripts/run_task.py "scrape wikipedia.org for AI content"

# YAML task execution
python scripts/run_task.py --task examples/trading-check.yml
python scripts/run_task.py --task examples/login-form.yml

# Dry run mode (validation only)
python scripts/run_task.py --dry-run "navigate to example.com"
```

```bash
# Test safety guardrails
python scripts/run_task.py --test safety

# Test command parsing
python scripts/run_task.py --test parsing

# Test container availability
python scripts/run_task.py --test container
```

```bash
# Store browser snapshot in ClawMemory
python scripts/store_memory.py --store results.json

# Search browser history
python scripts/store_memory.py --search "stock prices"

# Process learning from execution logs
python scripts/learn_from_run.py --recent 24
```

```bash
# Automatic storage after browser task
python scripts/run_task.py "scrape news site" --store-memory

# Search historical browser sessions
python scripts/store_memory.py --search "what did I browse about SET50 last week?"
```

```bash
# Analyze execution patterns
python scripts/learn_from_run.py --analyze execution_log.json

# Update local learning database
python scripts/learn_from_run.py --recent 48 --promote

# View learning insights
python scripts/learn_from_run.py --stats
```
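Pattern learning of this kind can be sketched as mining frequently repeated action sequences from execution logs. The log schema below (a list of runs, each a list of action dicts with a `"type"` key) is an assumption, not the actual learn_from_run.py format:

```python
from collections import Counter

def frequent_bigrams(runs, min_count=2):
    """Count adjacent action-type pairs across runs; a toy version of the
    pattern mining learn_from_run.py might perform."""
    counts = Counter()
    for run in runs:
        types = [action["type"] for action in run]
        # Each (previous, next) pair of action types is one observation.
        counts.update(zip(types, types[1:]))
    return {pair: n for pair, n in counts.items() if n >= min_count}
```

Pairs that recur across many runs (e.g. `navigate` followed by `wait_for_selector`) are candidates for promotion into reusable action templates.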
```yaml
# ClawFlow.yaml integration
schedules:
  - name: "Daily market check"
    cron: "0 9 * * 1-5"  # Weekdays 9 AM
    command: "cd ~/.openclaw/skills/claw-browser && python scripts/run_task.py --task examples/trading-check.yml"
    description: "Monitor stock market daily"
  - name: "Weekly learning update"
    cron: "0 2 * * 1"  # Mondays 2 AM
    command: "cd ~/.openclaw/skills/claw-browser && python scripts/learn_from_run.py --recent 168 --promote"
    description: "Process week's browser learning"
```

```yaml
task: "Check SET50 stock prices"
description: "Monitor Thailand stock market index"
url: "https://www.set.or.th/th/market/product/stock/quote"
actions:
  - type: "navigate"
    url: "https://www.set.or.th/th/market/product/stock/quote"
  - type: "wait_for_selector"
    selector: ".stock-table"
  - type: "screenshot"
    filename: "set50_prices.png"
  - type: "extract_table"
    selector: ".stock-table"
```

```yaml
task: "Bank login and balance check"
description: "Automated banking login with security confirmation"
url: "https://demo.testfire.net/login.jsp"
safety:
  confirm_login: true
  allowed_domains: ["*.testfire.net"]
actions:
  - type: "navigate"
    url: "https://demo.testfire.net/login.jsp"
  - type: "type"
    text: "admin"
    selector: "#uid"
  - type: "type"
    text: "admin"
    selector: "#passw"
  - type: "click"
    selector: "input[type='submit']"
  - type: "extract_text"
    selector: ".balance"
    name: "account_balance"
```

```yaml
task: "Academic paper extraction"
description: "Extract research paper content for analysis"
url: "https://arxiv.org/abs/2301.12345"
actions:
  - type: "navigate"
    url: "https://arxiv.org/abs/2301.12345"
  - type: "extract_text"
    selector: ".abstract"
    name: "paper_abstract"
  - type: "extract_text"
    selector: ".authors"
    name: "paper_authors"
  - type: "download"
    url: "https://arxiv.org/pdf/2301.12345.pdf"
    filename: "research_paper.pdf"
```

- File-based storage (logs/, screenshots/, downloads/)
- SQLite index for search (logs/browser_index.db)
- No external dependencies
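The SQLite search index can be sketched with the standard library's FTS5 support. The table layout here is illustrative; logs/browser_index.db may use a different schema:

```python
import sqlite3

def build_index(db_path=":memory:"):
    """Create a minimal full-text index over browser-session snapshots."""
    conn = sqlite3.connect(db_path)
    # FTS5 virtual table: every column is full-text searchable.
    conn.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS snapshots USING fts5(url, content)"
    )
    return conn

def search(conn, query):
    """Return URLs of snapshots whose text matches the FTS query."""
    rows = conn.execute(
        "SELECT url FROM snapshots WHERE snapshots MATCH ?", (query,)
    )
    return [row[0] for row in rows]
```

FTS5 gives keyword search with zero extra dependencies, which is why the local mode needs nothing beyond Python's bundled sqlite3.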
- Chroma vector database integration
- Sentence-transformers for embeddings
- Semantic search capabilities
```yaml
# Enable vector mode in config.yaml
memory:
  vector_enabled: true
  vector_db_path: "./memory/vectors"
  embedding_model: "sentence-transformers/all-MiniLM-L6-v2"
```

```
claw-browser/
├── config.yaml              # Safety and behavior settings
├── requirements.txt         # Python dependencies
├── ClawFlow.yaml            # Installation and scheduling
├── containers/
│   └── browser.Dockerfile   # Container image (180MB)
├── scripts/
│   ├── run_task.py          # Main CLI entrypoint
│   ├── safety_guard.py      # Security validation
│   ├── learn_from_run.py    # ClawSelfImprove integration
│   └── store_memory.py      # ClawMemory integration
├── templates/
│   └── action_prompt.md     # AI prompt templates
├── examples/
│   ├── trading-check.yml    # Stock monitoring
│   ├── login-form.yml       # Authentication
│   └── file-download.yml    # File operations
├── logs/                    # Execution logs
├── screenshots/             # Captured images
├── downloads/               # Downloaded files
└── temp/                    # Temporary files
```
```bash
# Test safety guard functionality
python -m pytest tests/test_safety_guard.py -v

# Test command parsing
python -m pytest tests/test_parsing.py -v

# Test container operations
python -m pytest tests/test_container.py -v
```

```bash
# Full workflow test
python -m pytest tests/test_integration.py -v

# End-to-end browser test
python -m pytest tests/test_e2e.py -v
```

```bash
# Dry run all examples
for example in examples/*.yml; do
  echo "Testing $example"
  python scripts/run_task.py --task "$example" --dry-run
done

# Safety validation test
python scripts/run_task.py --test safety
python scripts/run_task.py --test parsing
```
```bash
# Clone for development
git clone https://github.com/openkrab/ClawBrowser.git
cd ClawBrowser

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Linux/Mac
# or
venv\Scripts\activate     # Windows

# Install dev dependencies
pip install -r requirements-dev.txt

# Run tests
python -m pytest tests/ -v

# Format code
black scripts/ tests/
isort scripts/ tests/
```

- Safety First: All changes must maintain security guardrails
- Container Isolation: Never break container boundaries
- Thai Support: Test with Thai websites and commands
- Integration: Maintain compatibility with ClawMemory/ClawSelfImprove
1. Fork the repository
2. Create feature branch (`git checkout -b feature/amazing-feature`)
3. Add tests for new functionality
4. Ensure all tests pass
5. Update documentation if needed
6. Commit changes (`git commit -m 'Add amazing feature'`)
7. Push to branch (`git push origin feature/amazing-feature`)
8. Open Pull Request
ClawBrowser - Safe browser automation without windows 🦞🌐
MIT License - see LICENSE file for details.
- Version: 1.0.0
- Status: MVP Ready
- Compatibility: Python 3.8+, Docker/Podman
- Safety: Container isolation, domain validation, content scanning