# AutoMeta

AI-powered social media automation stack with multi-provider LLM support.

AutoMeta is a containerized automation platform that generates content with various LLM providers and posts it to social media platforms using browser automation.
## Architecture

```
            ┌─────────────────┐
            │  Figma Make UI  │ ← Frontend (in development)
            └────────┬────────┘
                     │
       ┌─────────────┴────────────┐
       │                          │
┌──────▼───────┐         ┌────────▼───────┐
│ LLM Gateway  │◄────────┤   MCP Server   │
│              │         │ (Orchestrator) │
│ • Groq       │         └────────┬───────┘
│ • Gemini     │                  │
│ • OpenRouter │         ┌────────▼────────┐
│ • LM Studio  │         │ Puppeteer Runner│
└──────────────┘         │                 │
                         │ • Twitter       │
                         │ • LinkedIn      │
                         └─────────────────┘
```
### LLM Gateway

- **Multi-provider support:** Groq, Gemini, OpenRouter, LM Studio
- **Auto-fallback:** automatically switches providers if one fails
- **Priority routing:** local-first, then cloud providers
- **Health monitoring:** real-time provider status checks
### Puppeteer Runner

- **Browser automation:** posts to social platforms
- **Remote debugging:** Chrome debugging on port 9222
- **Platform support:** Twitter and LinkedIn (extensible)
- **MCP integration:** job status reporting

### MCP Server

- **Workflow orchestration:** define multi-step automation workflows
- **Tool coordination:** manages the LLM and Puppeteer services
- **Scheduling:** cron-based and webhook triggers
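As a sketch, a workflow definition for the orchestrator might look like the following. The field names here are illustrative assumptions; the actual schema is whatever `src/mcp/workflows.json` defines and may differ.

```json
{
  "name": "daily-tech-post",
  "trigger": { "type": "cron", "schedule": "0 9 * * *" },
  "steps": [
    { "tool": "llm-gateway", "action": "generate", "params": { "platform": "twitter" } },
    { "tool": "puppeteer-runner", "action": "post", "params": { "platforms": ["twitter"] } }
  ]
}
```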
## Prerequisites

- Docker and Docker Compose
- API keys for your chosen LLM providers (optional)
- Social media credentials (for posting)
## Quick Start

```bash
# Clone the repository
git clone https://github.com/yourusername/AutoMeta.git
cd AutoMeta

# Configure environment
cp .env.example .env
# Edit .env with your API keys

# Start the services
cd docker
docker-compose up -d

# Check LLM Gateway
curl http://localhost:8000/health

# Check Puppeteer Runner
curl http://localhost:3000/health

# Check MCP Server
curl http://localhost:3003/health
```

## Configuration

Create a `.env` file in the project root:
```env
# Groq Configuration
GROQ_API_KEY=your_groq_api_key
GROQ_MODEL=llama-3.1-70b-versatile

# Gemini Configuration
GEMINI_API_KEY=your_gemini_api_key
GEMINI_MODEL=gemini-1.5-flash

# OpenRouter Configuration
OPENROUTER_API_KEY=your_openrouter_api_key
OPENROUTER_MODEL=anthropic/claude-3.5-sonnet

# LM Studio (local)
LMSTUDIO_URL=http://host.docker.internal:1234/v1
LMSTUDIO_MODEL=local-model

# Other
LOG_LEVEL=info
```

### Provider Priority

By default, providers are tried in this order:
1. LM Studio (local, free)
2. Groq (fast, cheap)
3. Gemini (good quality)
4. OpenRouter (most flexible)
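The fallback behaviour can be sketched as follows. This is an illustrative stand-in, not the gateway's actual code: the provider callables (`lmstudio`, `groq`) are fakes, and only the priority-order iteration mirrors what `src/llm/gateway.py` does.

```python
# Illustrative sketch of priority-ordered provider fallback.
# The real logic lives in src/llm/gateway.py; the provider
# callables below are hypothetical stand-ins.

PROVIDER_PRIORITY = ["lmstudio", "groq", "gemini", "openrouter"]

def generate_with_fallback(providers: dict, prompt: str) -> tuple:
    """Try each configured provider in priority order; return (name, text)."""
    errors = {}
    for name in PROVIDER_PRIORITY:
        provider = providers.get(name)
        if provider is None:
            continue  # provider not configured; skip it
        try:
            return name, provider(prompt)
        except Exception as exc:  # fall through to the next provider
            errors[name] = str(exc)
    raise RuntimeError(f"All providers failed: {errors}")

# Example: LM Studio is "down", so the call falls back to Groq.
def lmstudio(prompt):
    raise ConnectionError("local server unreachable")

def groq(prompt):
    return f"[groq] {prompt}"

name, text = generate_with_fallback({"lmstudio": lmstudio, "groq": groq}, "hi")
# name == "groq": the first provider raised, the second succeeded.
```

If no provider succeeds, the caller gets a single error summarising every attempt, which is roughly how the gateway surfaces total outages.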
## Usage

### Generate Content

```bash
curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Write about the future of AI",
    "platform": "twitter",
    "max_tokens": 280
  }'
```

### Run a Posting Job

```bash
curl -X POST http://localhost:3000/run \
  -H "Content-Type: application/json" \
  -d '{
    "jobId": "test-001",
    "platforms": ["twitter"],
    "prompt": "Share an interesting tech fact",
    "credentials": {
      "twitter": {
        "username": "your_username",
        "password": "your_password"
      }
    }
  }'
```

## Project Structure

```
AutoMeta/
├── docker/
│   ├── docker-compose.yml    # Service orchestration
│   ├── puppeteer/            # Puppeteer container config
│   └── llm-gateway/          # LLM gateway container config
├── src/
│   ├── automation/
│   │   └── poster.js         # Puppeteer automation script
│   ├── llm/
│   │   └── gateway.py        # Multi-provider LLM gateway
│   └── mcp/
│       ├── workflows.json    # Workflow definitions
│       └── tools.json        # Tool configurations
└── figma-make/               # Frontend UI (coming soon)
```
## Extending

### Adding an LLM Provider

1. Create a provider class in `src/llm/gateway.py`
2. Implement `generate()` and `check_health()` methods
3. Register it in the `providers` dict and `PROVIDER_PRIORITY`
4. Add its environment variables to `docker-compose.yml`
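A hypothetical provider following these steps might look like this. Only `generate()` and `check_health()` come from the steps above; the class name, constructor, and `EXAMPLE_API_KEY` variable are assumptions for illustration.

```python
# Hypothetical sketch of a new provider for src/llm/gateway.py.
# A real provider would call its vendor's HTTP API instead of
# returning canned text.
import os

class ExampleProvider:
    """Stand-in provider demonstrating the required interface."""

    def __init__(self) -> None:
        # Hypothetical env var; wire the real one through docker-compose.yml.
        self.api_key = os.environ.get("EXAMPLE_API_KEY", "")

    def check_health(self) -> bool:
        # A real provider would ping its API; here, configured == healthy.
        return bool(self.api_key)

    def generate(self, prompt: str, max_tokens: int = 280) -> str:
        if not self.check_health():
            raise RuntimeError("EXAMPLE_API_KEY is not set")
        return f"generated for: {prompt}"[:max_tokens]

# Registration would then look roughly like:
#   providers["example"] = ExampleProvider()
#   PROVIDER_PRIORITY.append("example")
```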
### Adding a Social Platform

1. Add the platform's credentials to the config
2. Implement a posting function in `src/automation/poster.js`
3. Add platform-specific selectors (update them as platforms change)
4. Update the platform context in the LLM gateway
## Remote Debugging

Connect to Chrome DevTools at `chrome://inspect` or http://localhost:9222.
## Security Notes

- **Never commit credentials:** use `.env` files (gitignored)
- **Rotate API keys** regularly
- **Use environment-specific configs** for production
- **Platform selectors change:** update automation scripts as needed
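In the Python services, one way to keep secrets out of the codebase is to read them from the environment at runtime and fail loudly when they are missing. `require_env` is a hypothetical helper, while `GROQ_API_KEY` matches the variable documented in the configuration section.

```python
# Read secrets from the environment rather than hardcoding them.
import os

def require_env(name: str) -> str:
    """Fetch a required secret, failing loudly if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return value

# Usage (variable name as in the configuration section):
#   groq_key = require_env("GROQ_API_KEY")
```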
## Health Checks

All services expose `/health` endpoints:

- LLM Gateway: http://localhost:8000/health
- Puppeteer: http://localhost:3000/health
- MCP Server: http://localhost:3003/health
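A small script can roll these endpoints into one status report. The URLs match the list above; the injectable `fetch` parameter is an illustrative design choice so the function can be exercised without the stack running.

```python
# Poll each service's /health endpoint and summarise status,
# using only the standard library.
import urllib.request

SERVICES = {
    "llm-gateway": "http://localhost:8000/health",
    "puppeteer": "http://localhost:3000/health",
    "mcp-server": "http://localhost:3003/health",
}

def default_fetch(url: str) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False

def check_services(fetch=default_fetch) -> dict:
    """Return {service_name: healthy?} for every service."""
    return {name: fetch(url) for name, url in SERVICES.items()}
```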
## Troubleshooting

### LLM Gateway

```bash
# Check provider status
curl http://localhost:8000/health

# View logs
docker logs autometa-llm-gateway
```

### Puppeteer Runner

```bash
# Check service health
curl http://localhost:3000/health

# Connect to remote debugging
open chrome://inspect

# View logs
docker logs autometa-puppeteer
```

### Provider Failures

If your preferred provider fails, the gateway automatically tries the next available provider in priority order.
## Roadmap

- Add more social platforms (Facebook, Instagram, TikTok)
- Implement scheduling UI
- Add analytics and reporting
- Content moderation hooks
- Multi-account support
- A/B testing capabilities
- Figma Make frontend implementation
## Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch
3. Add tests if applicable
4. Submit a pull request
## License

MIT License. See the LICENSE file for details.
## Support

- Issues: https://github.com/yourusername/AutoMeta/issues
- Docs: coming soon
## Built With
- Puppeteer for browser automation
- FastAPI for the LLM gateway
- Docker for containerization
- MCP for orchestration
*Building it back to front: backend ready, frontend coming from Figma!*