AI-guided browser automation for posting and replying on X and Facebook with persistent per-avatar browser profiles.
‼️ Note: This is an experimental project intended to highlight current platform limitations. It is not meant to encourage users to violate any platform's Terms of Service (ToS) or other agreements.
- Post personal updates on X using AI-guided browser actions.
- Reply to X posts using AI-guided browser actions.
- Post personal updates on Facebook.
- Comment on Facebook posts.
- Parse visible posts from a Facebook profile or page.
```
client / n8n -> REST API -> profile registry -> Playwright or Dolphin -> X/Facebook
```
- No browser stays open permanently. A browser launches per request, does the job, and closes.
- AI vision (Anthropic / OpenAI / Ollama / OpenRouter) drives browser actions.
- Each avatar + platform pair has its own persistent cookie/profile directory.
- Node.js 20+
- npm
- One supported LLM provider (Anthropic/OpenAI/OpenRouter) or local Ollama
```bash
npm install
npx playwright install chromium
cp .env.example .env
```

Set at least:
```
API_SECRET=change-me-to-a-strong-random-secret
LLM_PROVIDER=anthropic
LLM_API_KEY=your-anthropic-or-openai-api-key-here
DATA_DIR=/absolute/path/to/puppeteer/data
```

For OpenRouter:
```
LLM_PROVIDER=openai
LLM_API_KEY=your-openrouter-api-key-here
LLM_BASE_URL=https://openrouter.ai/api/v1
LLM_MODEL_PRIMARY=anthropic/claude-sonnet-4
LLM_MODEL_FALLBACK=anthropic/claude-sonnet-4
```

For Ollama:
```
LLM_PROVIDER=ollama
LLM_BASE_URL=http://localhost:11434/v1
LLM_MODEL_PRIMARY=qwen2.5vl:7b
LLM_MODEL_FALLBACK=qwen2.5vl:7b
```

Then start the service:

```bash
npm start
```

The API runs on http://localhost:3001 by default.
Import cookies once for each profile (x-alice, facebook-bob, etc.):
```bash
curl -X POST http://localhost:3001/profiles/x-alice/cookies \
  -H "x-api-key: change-me-to-a-strong-random-secret" \
  -H "Content-Type: application/json" \
  -d '{
    "cookies": [
      {
        "name": "session-cookie-name",
        "value": "session-cookie-value",
        "domain": ".x.com",
        "path": "/"
      }
    ],
    "proxy": "http://username:password@host:port"
  }'
```

The `proxy` field is optional but recommended for account stability.
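If you export cookies with a browser extension, the export usually carries more fields than the API payload above needs. A minimal sketch of the conversion, assuming a Cookie-Editor-style export (an array of objects with at least `name`, `value`, `domain`, and `path` — verify against your actual export file):

```javascript
// Sketch: convert a browser cookie export into the payload shape used by
// POST /profiles/:id/cookies. Export field names are assumptions based on
// Cookie-Editor's typical JSON output; adjust to match your export.
function buildCookiePayload(exportedCookies, proxy) {
  const cookies = exportedCookies.map((c) => ({
    name: c.name,
    value: c.value,
    domain: c.domain,
    path: c.path || "/", // default path when the export omits it
  }));
  // proxy is optional -- include it only when provided
  return proxy ? { cookies, proxy } : { cookies };
}

// Hypothetical usage (file name is illustrative):
// const fs = require("fs");
// const exported = JSON.parse(fs.readFileSync("x-alice-cookies.json", "utf8"));
// const payload = buildCookiePayload(exported, "http://username:password@host:port");
// ...then POST `payload` to /profiles/x-alice/cookies as in the curl example above.
```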
- Post to X: `POST /post` with `platform: "x"`
- Reply on X: `POST /reply` with `platform: "x"`
- Post to Facebook: `POST /post` with `platform: "facebook"`
- Comment on Facebook: `POST /reply` with `platform: "facebook"`
- Parse visible Facebook posts: `POST /scrape` with `platform: "facebook"`
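A sketch of calling these endpoints from Node. Only `platform` is confirmed by this document; the other body fields (`avatar`, `text`) are assumptions — see API_REFERENCES.md for the authoritative payloads:

```javascript
// Sketch: build authenticated requests for the endpoints above.
// API_URL/API_SECRET fall back to the defaults used elsewhere in this README.
const API = process.env.API_URL || "http://localhost:3001";
const KEY = process.env.API_SECRET || "change-me-to-a-strong-random-secret";

function buildRequest(path, body) {
  return {
    url: `${API}${path}`,
    options: {
      method: "POST",
      headers: { "x-api-key": KEY, "Content-Type": "application/json" },
      body: JSON.stringify(body),
    },
  };
}

// Example: post to X (fields besides `platform` are assumed, not documented here)
const req = buildRequest("/post", { platform: "x", avatar: "alice", text: "hello" });
// await fetch(req.url, req.options);  // uncomment against a running service
```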
New avatars need session import before automation can run reliably.
X and Facebook often challenge scripted login flows. The stable path is:
- Log in manually in normal Chrome.
- Export cookies.
- Import cookies into this service.
Platforms track session origin. If login and automation come from different IPs, accounts can be challenged. Use the same residential proxy for login and automation when possible.
- Install Cookie-Editor or export from browser devtools.
- Log in normally to the target account.
- Export cookies as JSON.
- Import with `POST /profiles/:id/cookies`.
Cookie fields needed per platform are documented in API_REFERENCES.md.
1. Start service -> `npm start` (local) or `docker compose up` (VPS)
2. Add avatar -> manual login in a normal browser
3. Import cookies -> `POST /profiles/:id/cookies`
4. Check readiness -> `GET /profiles`
5. Post or reply -> `POST /post` or `POST /reply`
6. Session expires -> re-export and re-import cookies
7. Retire avatar -> `DELETE /profiles/:id`
Dolphin Anty provides anti-detect browser fingerprints.
If a profile has a `dolphinProfileId`, the service connects to Dolphin via CDP instead of launching plain Playwright.
Use Dolphin only when plain Playwright sessions are repeatedly flagged. If no Dolphin profile is configured, built-in Playwright is used automatically.
- Dolphin Anty app installed and running
- Active Dolphin account
- API token from https://dolphin-anty.com/panel

Set:

```
DOLPHIN_API_URL=http://localhost:3001
DOLPHIN_API_TOKEN=your-dolphin-api-token-here
```

Dolphin profile creation is done in the Dolphin desktop app. Assign one Dolphin profile per avatar to avoid fingerprint reuse.
Attach `dolphinProfileId` in `data/registry.json`:
```json
{
  "profiles": {
    "x-alice": {
      "id": "x-alice",
      "platform": "x",
      "avatar": "alice",
      "dolphinProfileId": "123456",
      "status": "needs_login"
    }
  }
}
```

All endpoints and payloads are documented in API_REFERENCES.md.
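The Dolphin-or-Playwright fallback rule described above can be sketched as a small decision function over the registry. The function name is illustrative, not part of the service's API:

```javascript
// Sketch: per profile, attach to Dolphin Anty via CDP when a
// dolphinProfileId is present, otherwise fall back to built-in
// Playwright -- mirroring the rule stated above.
function launchMode(registry, profileId) {
  const profile = registry.profiles[profileId];
  if (!profile) throw new Error(`unknown profile: ${profileId}`);
  return profile.dolphinProfileId ? "dolphin-cdp" : "playwright";
}
```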
```bash
cp .env.example .env
# Edit .env (keep DATA_DIR=/app/data in Docker)
docker compose up --build
```

Stop:

```bash
docker compose down
```

Data is stored in the Docker volume-backed /app/data and survives restarts.
Use `Dockerfile.runpod` for GPU deployments with local Ollama models.
Build and push:
```bash
docker build --platform linux/amd64 -f Dockerfile.runpod -t yourdockerhubuser/avatar-worker:latest .
docker login
docker push yourdockerhubuser/avatar-worker:latest
```

RunPod request body example:
```json
{
  "cloudType": "COMMUNITY",
  "gpuCount": 1,
  "gpuTypeIds": [
    "NVIDIA GeForce RTX 3090",
    "NVIDIA GeForce RTX 4090",
    "NVIDIA GeForce RTX 3090 Ti",
    "NVIDIA RTX A5000",
    "NVIDIA RTX A6000"
  ],
  "imageName": "yourdockerhubuser/avatar-worker:latest",
  "dataCenterIds": [
    "EU-RO-1",
    "EU-SE-1",
    "EUR-IS-1",
    "EU-CZ-1",
    "EUR-IS-2",
    "EUR-IS-3",
    "EUR-NO-1",
    "EU-FR-1"
  ],
  "containerDiskInGb": 30,
  "volumeInGb": 0,
  "ports": ["3001/http", "11434/http"],
  "name": "Avatar-Worker-REST",
  "env": {
    "SSH_PRIVATE_KEY": "{{ $vars.GithubAuthToken}}",
    "API_SECRET": "change-me-to-a-strong-random-secret",
    "LLM_PROVIDER": "ollama",
    "LLM_BASE_URL": "http://localhost:11434/v1",
    "LLM_MODEL_PRIMARY": "qwen3-vl:8b",
    "LLM_MODEL_FALLBACK": "qwen3-vl:8b",
    "PORT": "3001",
    "DATA_DIR": "/app/data"
  },
  "dockerStartCmd": ["/entrypoint.sh"]
}
```

Runtime startup (`runpod-entrypoint.sh`):
- Writes SSH key from `SSH_PRIVATE_KEY`
- Clones repo into `/app`
- Writes `/app/.env` from env vars
- Starts Xvfb
- Starts Ollama
- Starts API with `npm run start`
- Valid platform values: `x` and `facebook`
- Keep profile IDs aligned: `x-{avatar}` and `facebook-{avatar}`
- Facebook posting uses the top-feed composer (not the messenger/edit UI)
- The Facebook `image_url` flow attaches media before typing text
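The profile-ID convention above can be sketched as a tiny helper; the function name is illustrative:

```javascript
// Sketch: profile IDs follow `{platform}-{avatar}`, and platform must be
// one of the two valid values noted above.
const PLATFORMS = ["x", "facebook"];

function profileId(platform, avatar) {
  if (!PLATFORMS.includes(platform)) {
    throw new Error(`invalid platform: ${platform}`);
  }
  return `${platform}-${avatar}`;
}
```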
Per avatar + platform pair:

| Limit | Default | Env var |
|---|---|---|
| Minimum interval between posts | 60 seconds | `RATE_LIMIT_MIN_INTERVAL` |
| Maximum posts per day | 20 | `RATE_LIMIT_DAILY_MAX` |
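A minimal sketch of the two limits above: a minimum interval between posts plus a daily cap, tracked per avatar. This is in-memory and purely illustrative of the rule — the actual service maintains its own counters:

```javascript
// Sketch: enforce min-interval and daily-max limits per avatar+platform.
// Defaults mirror the table above and can be overridden via env vars.
const MIN_INTERVAL_S = Number(process.env.RATE_LIMIT_MIN_INTERVAL || 60);
const DAILY_MAX = Number(process.env.RATE_LIMIT_DAILY_MAX || 20);

function canPost(state, nowMs) {
  const today = new Date(nowMs).toISOString().slice(0, 10);
  if (state.day !== today) {
    // new UTC day: reset the daily counter
    state.day = today;
    state.count = 0;
  }
  if (state.count >= DAILY_MAX) return false;
  if (state.lastMs && nowMs - state.lastMs < MIN_INTERVAL_S * 1000) return false;
  return true;
}

function recordPost(state, nowMs) {
  state.lastMs = nowMs;
  state.count = (state.count || 0) + 1;
}
```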
| Variable | Required | Default | Description |
|---|---|---|---|
| `API_SECRET` | Yes | — | API key for protected requests |
| `LLM_PROVIDER` | Yes | — | `anthropic`, `openai`, or `ollama` |
| `LLM_API_KEY` | Yes* | — | Cloud LLM API key. Optional when `LLM_PROVIDER=ollama` |
| `LLM_MODEL_PRIMARY` | No* | `claude-haiku-4-5-20251001` | Fast model. Required for ollama |
| `LLM_MODEL_FALLBACK` | No | `claude-sonnet-4-20250514` | Fallback model |
| `LLM_BASE_URL` | No* | — | Base URL. Required for ollama |
| `LLM_FALLBACK_PROVIDER` | No | same as primary | Optional fallback provider |
| `LLM_FALLBACK_API_KEY` | No | — | API key for fallback provider |
| `PORT` | No | `3001` | API port |
| `DATA_DIR` | No | repo-root `data/` | Persistent storage root. Docker examples use `/app/data` |
| `MAX_BROWSER_TIMEOUT` | No | `600` | Max browser runtime per request (seconds). `.env.example` recommends 120 |
| `MAX_AI_RETRIES` | No | `3` | AI retries per action |
| `RATE_LIMIT_MIN_INTERVAL` | No | `60` | Min seconds between posts |
| `RATE_LIMIT_DAILY_MAX` | No | `20` | Max daily posts per avatar |
| `DOLPHIN_API_URL` | No | `http://localhost:3001` | Dolphin local API URL |
| `DOLPHIN_API_TOKEN` | No | — | Dolphin API bearer token |
| `SAVE_DEBUG_SCREENSHOTS` | No | `true` | Save failure screenshots to `/data/debug/` |
| `NODE_ENV` | No | `production` | Runtime mode |
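The conditional requirements in the table (cloud providers need `LLM_API_KEY`; ollama needs `LLM_BASE_URL` and `LLM_MODEL_PRIMARY` instead) can be sketched as a startup check. The function is illustrative, not part of the service:

```javascript
// Sketch: validate the conditional env-var requirements from the table above.
// Returns a list of human-readable errors; empty means the config is usable.
function validateEnv(env) {
  const errors = [];
  if (!env.API_SECRET) errors.push("API_SECRET is required");
  if (!env.LLM_PROVIDER) errors.push("LLM_PROVIDER is required");
  if (env.LLM_PROVIDER === "ollama") {
    // Local Ollama needs an explicit endpoint and model, but no API key
    if (!env.LLM_BASE_URL) errors.push("LLM_BASE_URL is required for ollama");
    if (!env.LLM_MODEL_PRIMARY) errors.push("LLM_MODEL_PRIMARY is required for ollama");
  } else if (!env.LLM_API_KEY) {
    errors.push("LLM_API_KEY is required for cloud providers");
  }
  return errors;
}
```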
See CONTRIBUTING.md.
See SECURITY.md.
MIT. See LICENSE.
