# Formbricks Challenge

### 1️⃣ Setup Project

```powershell
# Navigate to project root
cd form_bricks_task

# Create and activate virtual environment
python -m venv venv
.\venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run initial Windows configuration
python setup_windows.py
```
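The exact contents of `requirements.txt` are not shown in this README; a plausible dependency set for a tool like this (assumed, not verbatim) might look like:

```text
click            # CLI framework for main.py
requests         # HTTP calls to the Formbricks APIs
python-dotenv    # load configuration from .env
openai           # Option 1: OpenAI provider
anthropic        # Option 2: Claude provider
colorama         # colored output on Windows consoles
```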


### 2️⃣ Configure Environment

```powershell
# Copy environment template
copy .env.example .env

# Edit .env and add your LLM API key
notepad .env
```

Add one of these to `.env`:

```ini
# Option 1: OpenAI
OPENAI_API_KEY=sk-your-key-here

# Option 2: Anthropic Claude (recommended for best results)
ANTHROPIC_API_KEY=sk-ant-your-key-here

# Option 3: Local Ollama
OLLAMA_BASE_URL=http://localhost:11434
```

### 3️⃣ Start Formbricks

```powershell
python main.py formbricks up
```

This will:

  • Download and start PostgreSQL, Redis, and Formbricks containers
  • Auto-generate security secrets
  • Wait for services to be ready
  • Display next steps

### 4️⃣ Complete Formbricks Setup

  1. Open http://localhost:3000 in your browser
  2. Complete the setup wizard (create an admin account)
  3. Navigate to Settings > API Keys
  4. Click Create API Key for the production environment
  5. Copy the API key immediately (you won't see it again!)
  6. Add it to `.env`:
     `FORMBRICKS_API_KEY=your-api-key-here`
  7. Get your environment ID from the URL (e.g., `clxxx...`) and add it to `.env`:
     `FORMBRICKS_ENVIRONMENT_ID=your-environment-id`
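The API key from step 6 is sent as a request header on every Management API call. A minimal sketch of how `utils/api_client.py` might build those headers (the `x-api-key` header name follows Formbricks' API convention; the function name is an assumption):

```python
import os

def management_headers(api_key=None):
    """Build request headers for Formbricks Management API calls."""
    key = api_key or os.environ.get("FORMBRICKS_API_KEY", "")
    if not key:
        raise ValueError("FORMBRICKS_API_KEY is not set")
    return {"x-api-key": key, "Content-Type": "application/json"}

print(management_headers("your-api-key-here")["x-api-key"])  # your-api-key-here
```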

### 5️⃣ Generate Data

```powershell
python main.py formbricks generate
```

This will create:

  • 5 unique surveys with varied question types
  • 10 users (5 Managers, 5 Owners)
  • At least 1 response per survey (5+ total responses)

All data is saved to `data/generated/` as JSON files.
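The generated files' exact schema isn't shown in this README; a `surveys.json` entry might look roughly like this (field names are illustrative, not the tool's actual schema):

```json
[
  {
    "name": "Product Feedback",
    "questions": [
      {"type": "openText", "headline": "What do you like most about the product?"},
      {"type": "rating", "headline": "How satisfied are you overall?", "range": 5},
      {"type": "multipleChoiceSingle", "headline": "How often do you use it?",
       "choices": ["Daily", "Weekly", "Monthly"]}
    ]
  }
]
```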

### 6️⃣ Seed Database

```powershell
python main.py formbricks seed
```

This will:

  • Upload users via Client API
  • Create surveys via Management API
  • Submit responses via Client API
  • Display progress with colored output

### 7️⃣ Verify & Explore

Open http://localhost:3000 and explore:

  • Surveys - View all created surveys
  • Responses - See response data and analytics
  • People - Browse created users
  • Settings - Manage team members and permissions

### 8️⃣ Stop Formbricks

```powershell
# Stop containers (keep data)
python main.py formbricks down

# Stop and remove all data:
python main.py formbricks down
# Then answer 'y' when prompted
```

## 📁 Project Structure

```
formbricks-challenge/
├── main.py                    # CLI entry point
├── setup_windows.py           # Windows setup script
├── requirements.txt           # Python dependencies
├── .env                       # Configuration (create from .env.example)
│
├── commands/                  # CLI commands
│   ├── up.py                  # Start Formbricks
│   ├── down.py                # Stop Formbricks
│   ├── generate.py            # Generate data with LLM
│   └── seed.py                # Seed via APIs
│
├── utils/                     # Utilities
│   ├── api_client.py          # Formbricks API wrapper
│   └── llm_client.py          # Multi-provider LLM client
│
├── data/
│   └── generated/             # Generated JSON files
│       ├── surveys.json
│       ├── users.json
│       └── responses.json
│
└── docker/
    └── docker-compose.yml     # Formbricks Docker setup
```

## 🎨 Features

### Command: up

  • Validates Docker installation
  • Auto-generates security secrets
  • Starts PostgreSQL, Redis, and Formbricks
  • Waits for services with health checks
  • Displays clear setup instructions

### Command: generate

  • Detects available LLM provider (OpenAI/Claude/Ollama)
  • Generates 5 diverse, realistic surveys:
    • Product Feedback
    • Employee Satisfaction
    • Customer Service Experience
    • Event Feedback
    • User Research
  • Creates 10 users with varied attributes
  • Generates realistic survey responses
  • Saves as structured JSON
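Provider detection like this is usually a first-match-wins check over environment variables. A minimal sketch (the precedence order and function name are assumptions, not the project's actual code):

```python
import os

def detect_llm_provider(env=None):
    """Return the first configured LLM provider, or None if none is set."""
    env = env if env is not None else os.environ
    if env.get("ANTHROPIC_API_KEY"):
        return "claude"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("OLLAMA_BASE_URL"):
        return "ollama"
    return None

print(detect_llm_provider({"OPENAI_API_KEY": "sk-..."}))  # openai
```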

### Command: seed

  • Tests API connection
  • Validates environment configuration
  • Seeds users via Client API
  • Creates surveys via Management API
  • Submits responses with proper question mapping
  • Provides detailed progress tracking
  • Includes error handling and retry logic
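Retry logic for transient API failures is commonly a loop with exponential backoff. A minimal sketch of the idea (function names and parameters are assumptions, not the project's actual code):

```python
import time

def with_retries(func, attempts=3, backoff=1.0):
    """Call func(), retrying on exception with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(backoff * 2 ** (attempt - 1))

# Example: a stand-in API call that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API error")
    return "ok"

print(with_retries(flaky, attempts=3, backoff=0))  # ok
```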

### Command: down

  • Gracefully stops all containers
  • Optional data removal
  • Clean teardown

## 🔧 Configuration

### Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| `WEBAPP_URL` | No | Formbricks URL (default: `http://localhost:3000`) |
| `OPENAI_API_KEY` | One of | OpenAI API key |
| `ANTHROPIC_API_KEY` | One of | Anthropic Claude API key |
| `OLLAMA_BASE_URL` | One of | Ollama endpoint (default: `http://localhost:11434`) |
| `FORMBRICKS_API_KEY` | Yes* | Formbricks API key (from Settings) |
| `FORMBRICKS_ENVIRONMENT_ID` | Yes* | Environment ID |

*Required after initial setup

## 🐛 Troubleshooting

### Docker not found

Install Docker Desktop for Windows: https://www.docker.com/products/docker-desktop

### Port 3000 already in use

Stop the service using port 3000, or change `"3000:3000"` to `"3001:3000"` in the `ports` section of `docker/docker-compose.yml`.

### API connection fails

  1. Verify Formbricks is running: `docker compose -f docker/docker-compose.yml ps`
  2. Check the API key in `.env`
  3. Verify the environment ID in `.env`
  4. View logs: `docker compose -f docker/docker-compose.yml logs -f`

### LLM generation fails

  • OpenAI: Verify API key and quota
  • Claude: Verify API key
  • Ollama: Ensure Ollama is running locally

## 📊 Data Seeding Details

### Surveys (5 total)

Each survey includes:

  • Unique, descriptive name
  • 3-5 varied questions:
    • Open text questions
    • Rating scales (1-5)
    • Multiple choice (single/multi)
  • Professional question text
  • Proper end screens

### Users (10 total)

  • 5 Managers with varied attributes
  • 5 Owners with varied attributes
  • Realistic names, emails
  • Department, location, title attributes

### Responses (5+ total)

  • At least 1 per survey
  • Realistic, coherent answers
  • Properly mapped to question types
  • Marked as finished
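A seeded response payload submitted to the Client API might look roughly like this (keys and IDs are illustrative; `finished: true` reflects the "marked as finished" point above):

```json
{
  "surveyId": "your-survey-id",
  "finished": true,
  "data": {
    "question-id-1": "The onboarding flow was smooth.",
    "question-id-2": 4,
    "question-id-3": "Weekly"
  }
}
```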

## 🏆 Challenge Requirements

  • ✅ Run Formbricks locally via `python main.py formbricks up`
  • ✅ Stop Formbricks via `python main.py formbricks down`
  • ✅ Generate data via `python main.py formbricks generate`
  • ✅ Seed via APIs only using `python main.py formbricks seed`
  • ✅ 5 unique surveys with realistic questions
  • ✅ At least 1 response per survey
  • ✅ 10 users with Manager/Owner access
  • ✅ Clean, well-structured code
  • ✅ No direct database manipulation

## 🎓 Code Quality

This implementation demonstrates:

  • Modular architecture - Separated concerns (CLI, API, LLM, Docker)
  • Error handling - Comprehensive try/except with helpful messages
  • Type hints - Clear function signatures
  • Documentation - Docstrings and comments
  • User experience - Colored output, progress tracking, clear instructions
  • Cross-platform - Windows-compatible paths and commands
  • Configuration - Environment-based, not hardcoded
  • API best practices - Rate limiting, retries, proper headers
