```powershell
cd form_bricks_task
python -m venv venv
.\venv\Scripts\activate
pip install -r requirements.txt
python setup_windows.py
```
### 2️⃣ Configure Environment
```powershell
# Copy environment template
copy .env.example .env
# Edit .env and add your LLM API key
notepad .env
```

Add one of these to `.env`:

```bash
# Option 1: OpenAI
OPENAI_API_KEY=sk-your-key-here

# Option 2: Anthropic Claude (recommended for best results)
ANTHROPIC_API_KEY=sk-ant-your-key-here

# Option 3: Local Ollama
OLLAMA_BASE_URL=http://localhost:11434
```

### 3️⃣ Start Formbricks

```powershell
python main.py formbricks up
```

This will:
- Download and start PostgreSQL, Redis, and Formbricks containers
- Auto-generate security secrets
- Wait for services to be ready
- Display next steps
### 4️⃣ Complete Formbricks Setup

- Open http://localhost:3000 in your browser
- Complete the setup wizard (create admin account)
- Navigate to Settings > API Keys
- Click Create API Key for the production environment
- Copy the API key immediately (you won't see it again!)
- Add to `.env`: `FORMBRICKS_API_KEY=your-api-key-here`
- Get your environment ID from the URL (e.g., `clxxx...`) and add to `.env`: `FORMBRICKS_ENVIRONMENT_ID=your-environment-id`
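The `.env` values above are plain `KEY=value` pairs. As a minimal sketch of how they might be read (illustrative only; the project may rely on python-dotenv or similar instead):

```python
def parse_env_file(text: str) -> dict:
    """Parse KEY=value pairs from .env text, skipping blanks and comments.

    Minimal sketch; a real project would likely use python-dotenv.
    """
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = "# comment\nFORMBRICKS_API_KEY=your-api-key-here\nWEBAPP_URL=http://localhost:3000\n"
print(parse_env_file(sample)["WEBAPP_URL"])  # http://localhost:3000
```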
### 5️⃣ Generate Data

```powershell
python main.py formbricks generate
```

This will create:
- 5 unique surveys with varied question types
- 10 users (5 Managers, 5 Owners)
- At least 1 response per survey (5+ total responses)
All data is saved to `data/generated/` as JSON files.
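The exact JSON shape depends on the generator's prompt. As a rough illustration (field names here are assumptions, not the actual schema; check the files in `data/generated/`), the output can be inspected like this:

```python
import json

# Hypothetical excerpt of data/generated/surveys.json; real field names
# come from the LLM generator and may differ.
surveys_json = """
{
  "surveys": [
    {
      "name": "Product Feedback",
      "questions": [
        {"type": "openText", "headline": "What do you like most?"},
        {"type": "rating", "headline": "How satisfied are you?", "range": 5}
      ]
    }
  ]
}
"""

data = json.loads(surveys_json)

def question_count(data: dict) -> int:
    """Total questions across all generated surveys."""
    return sum(len(s["questions"]) for s in data["surveys"])

print(question_count(data))  # 2
```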
### 6️⃣ Seed Data

```powershell
python main.py formbricks seed
```

This will:
- Upload users via Client API
- Create surveys via Management API
- Submit responses via Client API
- Display progress with colored output
### 7️⃣ Verify

Open http://localhost:3000 and explore:

- **Surveys**: View all created surveys
- **Responses**: See response data and analytics
- **People**: Browse created users
- **Settings**: Manage team members and permissions
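Beyond clicking through the UI, the seeded data can also be checked over the Management API. A sketch using only the standard library (the `x-api-key` header and `/api/v1/management` path follow the Formbricks API docs, but verify them against the version you run):

```python
import urllib.request

def management_request(path: str, api_key: str,
                       base_url: str = "http://localhost:3000") -> urllib.request.Request:
    """Build an authenticated Formbricks Management API request.

    Header name and path prefix are taken from the Formbricks docs;
    confirm them for your version before relying on this.
    """
    return urllib.request.Request(
        f"{base_url}/api/v1/management{path}",
        headers={"x-api-key": api_key},
    )

req = management_request("/surveys", "your-api-key-here")
print(req.full_url)  # http://localhost:3000/api/v1/management/surveys
# To actually fetch (requires a running instance):
#   with urllib.request.urlopen(req) as resp: print(resp.read())
```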
## Teardown

```powershell
# Stop containers (keep data)
python main.py formbricks down

# Stop and remove all data
python main.py formbricks down
# Then answer 'y' when prompted
```

## Project Structure

```
formbricks-challenge/
├── main.py                 # CLI entry point
├── setup_windows.py        # Windows setup script
├── requirements.txt        # Python dependencies
├── .env                    # Configuration (create from .env.example)
│
├── commands/               # CLI commands
│   ├── up.py               # Start Formbricks
│   ├── down.py             # Stop Formbricks
│   ├── generate.py         # Generate data with LLM
│   └── seed.py             # Seed via APIs
│
├── utils/                  # Utilities
│   ├── api_client.py       # Formbricks API wrapper
│   └── llm_client.py       # Multi-provider LLM client
│
├── data/
│   └── generated/          # Generated JSON files
│       ├── surveys.json
│       ├── users.json
│       └── responses.json
│
└── docker/
    └── docker-compose.yml  # Formbricks Docker setup
```
## Features

### `formbricks up`

- Validates Docker installation
- Auto-generates security secrets
- Starts PostgreSQL, Redis, and Formbricks
- Waits for services with health checks
- Displays clear setup instructions
### `formbricks generate`

- Detects available LLM provider (OpenAI/Claude/Ollama)
- Generates 5 diverse, realistic surveys:
- Product Feedback
- Employee Satisfaction
- Customer Service Experience
- Event Feedback
- User Research
- Creates 10 users with varied attributes
- Generates realistic survey responses
- Saves as structured JSON
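Provider detection can be as simple as checking which key is set. The sketch below assumes a fixed preference order (Claude first, since the README recommends it); the actual `utils/llm_client.py` may choose differently:

```python
def detect_llm_provider(env: dict):
    """Return the first configured LLM provider, or None.

    The preference order here is an assumption mirroring the three
    .env options; the real llm_client.py may differ.
    """
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("OLLAMA_BASE_URL"):
        return "ollama"
    return None

print(detect_llm_provider({"OPENAI_API_KEY": "sk-your-key-here"}))  # openai
```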
### `formbricks seed`

- Tests API connection
- Validates environment configuration
- Seeds users via Client API
- Creates surveys via Management API
- Submits responses with proper question mapping
- Provides detailed progress tracking
- Includes error handling and retry logic
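The retry logic can be sketched as a small wrapper with exponential backoff. This is a simplified illustration of the behavior described above, not the actual `utils/api_client.py` code, which may retry a different set of errors:

```python
import time

def with_retries(fn, attempts: int = 3, backoff: float = 0.5):
    """Call fn(), retrying transient failures with exponential backoff.

    Simplified sketch; the real api_client.py may differ in which
    errors it treats as retryable.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(backoff * (2 ** attempt))

calls = {"n": 0}
def flaky():
    """Simulated endpoint that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky, backoff=0.01))  # ok
```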
### `formbricks down`

- Gracefully stops all containers
- Optional data removal
- Clean teardown
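Under the hood, teardown presumably shells out to Docker Compose. A hypothetical sketch of how `commands/down.py` might build that command (the `-v` flag is what removes named volumes, i.e. all Formbricks data):

```python
def compose_down_cmd(remove_volumes: bool = False) -> list:
    """Build the docker compose teardown command.

    Illustrative guess at what commands/down.py runs, not its actual code.
    """
    cmd = ["docker", "compose", "-f", "docker/docker-compose.yml", "down"]
    if remove_volumes:
        cmd.append("-v")  # also remove volumes (deletes all data)
    return cmd

print(compose_down_cmd(remove_volumes=True))
# ['docker', 'compose', '-f', 'docker/docker-compose.yml', 'down', '-v']
# To execute for real: subprocess.run(compose_down_cmd(True), check=True)
```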
## Environment Variables

| Variable | Required | Description |
|---|---|---|
| `WEBAPP_URL` | No | Formbricks URL (default: http://localhost:3000) |
| `OPENAI_API_KEY` | One of | OpenAI API key |
| `ANTHROPIC_API_KEY` | One of | Anthropic Claude API key |
| `OLLAMA_BASE_URL` | One of | Ollama endpoint (default: http://localhost:11434) |
| `FORMBRICKS_API_KEY` | Yes* | Formbricks API key (from Settings) |
| `FORMBRICKS_ENVIRONMENT_ID` | Yes* | Environment ID |

*Required after initial setup
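The "One of" and "Yes*" rules in the table translate into a small validation check. The helper below is hypothetical; it shows the intent of the rules, not the project's actual code:

```python
LLM_KEYS = ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "OLLAMA_BASE_URL")

def validate_env(env: dict, after_setup: bool = True) -> list:
    """Return a list of configuration problems per the table's rules.

    Hypothetical helper for illustration only.
    """
    problems = []
    # "One of": at least one LLM provider variable must be set
    if not any(env.get(k) for k in LLM_KEYS):
        problems.append("Set one of: " + ", ".join(LLM_KEYS))
    # "Yes*": required only after the initial Formbricks setup
    if after_setup:
        for key in ("FORMBRICKS_API_KEY", "FORMBRICKS_ENVIRONMENT_ID"):
            if not env.get(key):
                problems.append(f"Missing {key}")
    return problems

print(validate_env({"OPENAI_API_KEY": "sk-x",
                    "FORMBRICKS_API_KEY": "k",
                    "FORMBRICKS_ENVIRONMENT_ID": "clxxx"}))  # []
```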
## Troubleshooting

**Docker not found**

```powershell
# Install Docker Desktop for Windows
# https://www.docker.com/products/docker-desktop
```

**Port 3000 already in use**

```powershell
# Stop the service using port 3000, or modify docker-compose.yml
# Change "3000:3000" to "3001:3000" in the ports section
```

**API errors**

- Verify Formbricks is running: `docker compose -f docker/docker-compose.yml ps`
- Check API key in `.env`
- Verify environment ID in `.env`
- View logs: `docker compose -f docker/docker-compose.yml logs -f`
**LLM errors**

- OpenAI: Verify API key and quota
- Claude: Verify API key
- Ollama: Ensure Ollama is running locally
## Generated Data

### Surveys

Each survey includes:
- Unique, descriptive name
- 3-5 varied questions:
- Open text questions
- Rating scales (1-5)
- Multiple choice (single/multi)
- Professional question text
- Proper end screens
### Users

- 5 Managers with varied attributes
- 5 Owners with varied attributes
- Realistic names, emails
- Department, location, title attributes
### Responses

- At least 1 per survey
- Realistic, coherent answers
- Properly mapped to question types
- Marked as finished
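Mapping each generated answer onto the right question is the key step before posting via the Client API. A sketch of that mapping (field names are assumptions, not the exact Formbricks schema):

```python
def build_response_payload(survey: dict, answers: list) -> dict:
    """Pair each generated answer with its question's id and mark the
    response finished, roughly mirroring a Client API response body.

    Field names here are illustrative only.
    """
    data = {
        question["id"]: answer
        for question, answer in zip(survey["questions"], answers)
    }
    return {"data": data, "finished": True}

survey = {"questions": [{"id": "q1", "type": "openText"},
                        {"id": "q2", "type": "rating"}]}
payload = build_response_payload(survey, ["Love the dashboards", 5])
print(payload["data"]["q2"])  # 5
```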
## Requirements Checklist

- ✅ Locally run Formbricks via `python main.py formbricks up`
- ✅ Stop Formbricks via `python main.py formbricks down`
- ✅ Generate data via `python main.py formbricks generate`
- ✅ Seed via APIs only using `python main.py formbricks seed`
- ✅ 5 unique surveys with realistic questions
- ✅ At least 1 response per survey
- ✅ 10 users with Manager/Owner access
- ✅ Clean, well-structured code
- ✅ No direct database manipulation
This implementation demonstrates:
- **Modular architecture**: Separated concerns (CLI, API, LLM, Docker)
- **Error handling**: Comprehensive try/except with helpful messages
- **Type hints**: Clear function signatures
- **Documentation**: Docstrings and comments
- **User experience**: Colored output, progress tracking, clear instructions
- **Cross-platform**: Windows-compatible paths and commands
- **Configuration**: Environment-based, not hardcoded
- **API best practices**: Rate limiting, retries, proper headers