A comprehensive web application for generating feature plans, user stories, and engineering tasks using OpenAI's GPT models.
AI-Powered Feature Generation: Uses OpenAI's GPT-4 Turbo to generate:
- User Stories with acceptance criteria
- Engineering tasks grouped by category (Frontend, Backend, Database, Infrastructure)
- Risks and mitigation strategies
Task Management:
- Edit and reorder engineering tasks
- View last 5 feature plans
- Export results as markdown
System Health Monitoring:
- Real-time backend status check
- Database connection verification
- LLM service connectivity test
Production-Ready:
- Docker containerization
- Environment variable configuration
- Comprehensive error handling
- Structured logging
- Input validation
Backend:
- Framework: FastAPI (Python)
- Database: SQLite with SQLAlchemy ORM
- API: RESTful with Pydantic validation
- LLM: OpenAI Chat Completion API
- Web Server: Uvicorn

Frontend:
- Framework: React 18 with Vite
- HTTP Client: Axios
- Styling: CSS3
- Build Tool: Vite

Infrastructure:
- Containerization: Docker & Docker Compose
- Configuration: Environment variables (.env)
Prerequisites:
- Python 3.11+
- Node.js 18+
- OpenAI API key
- Docker (optional)
Backend setup:

```bash
# Navigate to backend directory
cd backend

# Create virtual environment
python -m venv venv

# Activate virtual environment
# Windows
venv\Scripts\activate
# macOS/Linux
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Create .env file
cp ../.env.example ../.env
# Edit .env and add your OpenAI API key
```

Frontend setup:

```bash
# Navigate to frontend directory
cd frontend

# Install dependencies
npm install

# Create .env file
cp .env.example .env
```

Terminal 1 - Backend:

```bash
cd backend
source venv/bin/activate  # or venv\Scripts\activate on Windows
python -m uvicorn app.main:app --reload --port 8000
```

Terminal 2 - Frontend:

```bash
cd frontend
npm run dev
```

Access the app at http://localhost:5173
```bash
# Create .env file with your OpenAI API key
cp .env.example .env
# Edit .env and add OPENAI_API_KEY

# Build and run with Docker Compose
docker-compose up --build

# Or build individual containers
docker build -t tasks-generator-backend .
docker build -t tasks-generator-frontend ./frontend -f ./frontend/Dockerfile
```

Access the app at http://localhost:3000
- `POST /api/features/generate` - Generate a new feature plan
  - Input: `goal` (string), `users` (array), `constraints` (array)
  - Returns: complete feature plan with stories, tasks, and risks
- `GET /api/features/recent?limit=5` - Get the last N feature plans
- `GET /api/features/{planId}` - Get a specific feature plan
- `PUT /api/features/{planId}/tasks` - Update engineering tasks
- `GET /api/features/{planId}/export` - Export as markdown
- `GET /api/health/status` - System health check
- `GET /api/health/ping` - Simple ping endpoint
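A minimal client for the generate endpoint might look like the sketch below, using only the standard library. The payload field names follow the input description above; the helper names themselves are illustrative, not part of the project:

```python
import json
import urllib.request

API_BASE = "http://localhost:8000/api"  # assumes a local backend on port 8000


def build_generate_payload(goal: str, users: list, constraints: list) -> dict:
    """Assemble the JSON body expected by POST /api/features/generate."""
    return {"goal": goal, "users": list(users), "constraints": list(constraints)}


def generate_plan(goal: str, users: list, constraints: list) -> dict:
    """POST the payload and return the parsed feature plan."""
    body = json.dumps(build_generate_payload(goal, users, constraints)).encode()
    req = urllib.request.Request(
        f"{API_BASE}/features/generate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With the backend running, `generate_plan("Add CSV export to reports", ["analyst"], ["ship this quarter"])` would return the same plan structure the UI displays.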
```
Task_Generators/
├── backend/
│   ├── app/
│   │   ├── __init__.py
│   │   ├── main.py              # FastAPI entry point
│   │   ├── config.py            # Settings & env vars
│   │   ├── database.py          # Database setup
│   │   ├── models.py            # SQLAlchemy models
│   │   ├── schemas.py           # Pydantic schemas
│   │   ├── routes/
│   │   │   ├── __init__.py
│   │   │   ├── features.py      # Feature endpoints
│   │   │   └── health.py        # Health check endpoints
│   │   ├── services/
│   │   │   ├── __init__.py
│   │   │   └── feature_service.py  # Business logic
│   │   └── utils/
│   │       ├── __init__.py
│   │       ├── logger.py        # Logging setup
│   │       ├── llm.py           # OpenAI integration
│   │       └── validators.py    # Input validation
│   └── requirements.txt
├── frontend/
│   ├── src/
│   │   ├── pages/
│   │   │   ├── Home.jsx         # Main page
│   │   │   └── Home.css
│   │   ├── components/
│   │   │   ├── FeatureForm.jsx  # Input form
│   │   │   ├── FeatureForm.css
│   │   │   ├── PlanView.jsx     # Plan display & edit
│   │   │   ├── PlanView.css
│   │   │   ├── Health.jsx       # Health status
│   │   │   ├── Health.css
│   │   │   ├── RecentPlans.jsx  # Recent plans list
│   │   │   └── RecentPlans.css
│   │   ├── services/
│   │   │   └── api.js           # API client
│   │   ├── App.jsx
│   │   ├── App.css
│   │   ├── main.jsx
│   │   └── index.css
│   ├── index.html
│   ├── package.json
│   ├── vite.config.js
│   ├── Dockerfile
│   └── .env.example
├── Dockerfile
├── docker-compose.yml
└── .env.example
```
Backend (`.env`):

```
OPENAI_API_KEY=sk-your-api-key-here
OPENAI_MODEL=gpt-4-turbo
DATABASE_URL=sqlite:///./tasks_generator.db
DEBUG=False
LOG_LEVEL=INFO
ALLOWED_ORIGINS=http://localhost:5173,http://localhost:3000
```

Frontend (`frontend/.env`):

```
VITE_API_BASE_URL=http://localhost:8000/api
```

- Clean, responsive design - Works on desktop and tablet
- Real-time form validation - Immediate user feedback
- Color-coded task priorities - Quick visual scanning
- Dark mode health indicator - System status at a glance
- Markdown export - Share plans easily
- Recent plans sidebar - Quick access to previous work
- Input validation on all endpoints
- Error handling with meaningful messages
- Structured logging
- Environment variables for secrets
- Database migrations ready
- Health check endpoints
- CORS configuration
- Docker support
- Pydantic schema validation
- SQLAlchemy ORM with proper sessions
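The input-validation layer could enforce the documented limits (goal capped at 500 characters, a non-empty user list) along these lines. This is a sketch; the function name and error messages are illustrative, not taken from the actual `validators.py`:

```python
def validate_feature_request(goal: str, users: list, constraints: list) -> list:
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    if not goal or not goal.strip():
        errors.append("goal must not be empty")
    elif len(goal) > 500:
        errors.append("goal must be at most 500 characters")
    if not users:
        errors.append("at least one user type is required")
    if any(not c.strip() for c in constraints):
        errors.append("constraints must not contain blank entries")
    return errors
```

Returning all errors at once (rather than raising on the first) lets the API respond with a single 400 listing every problem, which matches the "meaningful messages" goal above.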
The system monitors three critical components:
- Backend Service - Application is running
- Database Connection - SQLite/database is accessible
- LLM Service - OpenAI API is reachable
Status is shown in the UI with real-time updates every 30 seconds.
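A health endpoint that aggregates the three checks could be sketched as below; the check callables stand in for the real database and OpenAI probes, and the response shape is an assumption rather than the project's actual schema:

```python
def aggregate_health(checks: dict) -> dict:
    """Run each named health check and summarize overall status.

    Overall status is "ok" only if every component check passes;
    a failing or crashing check marks that component "down".
    """
    components = {}
    for name, check in checks.items():
        try:
            components[name] = "ok" if check() else "down"
        except Exception:
            components[name] = "down"
    overall = "ok" if all(v == "ok" for v in components.values()) else "degraded"
    return {"status": overall, "components": components}
```

Catching exceptions per component keeps one unreachable dependency (e.g. the OpenAI API) from turning the whole health endpoint into a 500.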
- id: Integer (primary key)
- goal: String (500 chars max)
- users: JSON array
- constraints: JSON array
- user_stories: JSON array with acceptance criteria
- engineering_tasks: JSON object grouped by category
- risks: JSON array with mitigations
- created_at: DateTime
- updated_at: DateTime
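The SQLAlchemy model maps to roughly the table below. This stdlib `sqlite3` sketch only illustrates the shape (JSON fields stored as TEXT, table and column names assumed); it is not the project's actual DDL:

```python
import json
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS feature_plans (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    goal TEXT NOT NULL CHECK (length(goal) <= 500),
    users TEXT NOT NULL,              -- JSON array
    constraints TEXT NOT NULL,        -- JSON array
    user_stories TEXT NOT NULL,       -- JSON array
    engineering_tasks TEXT NOT NULL,  -- JSON object keyed by category
    risks TEXT NOT NULL,              -- JSON array
    created_at TEXT DEFAULT CURRENT_TIMESTAMP,
    updated_at TEXT DEFAULT CURRENT_TIMESTAMP
);
"""


def insert_plan(conn: sqlite3.Connection, goal: str, users: list, constraints: list) -> int:
    """Insert a minimal plan row, serializing list fields to JSON text."""
    cur = conn.execute(
        "INSERT INTO feature_plans (goal, users, constraints, user_stories, engineering_tasks, risks) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (goal, json.dumps(users), json.dumps(constraints), "[]", "{}", "[]"),
    )
    return cur.lastrowid
```

Serializing the array/object fields to JSON text is what lets a single SQLite row hold the whole generated plan without extra join tables.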
The system uses OpenAI's Chat Completion API with:
- Model: GPT-4 Turbo (configurable)
- Role: Senior Product Manager
- Output: Strict JSON format with validation
- Retry Logic: Up to 3 attempts for JSON parsing failures
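The retry behavior (up to 3 attempts when the model returns unparsable JSON) can be sketched as follows; `call_model` stands in for the real OpenAI client call, and the exception name is illustrative:

```python
import json


class LLMFormatError(RuntimeError):
    """Raised when the model never returns valid JSON."""


def generate_json(call_model, prompt: str, max_attempts: int = 3) -> dict:
    """Call the model and parse its reply as JSON, retrying on parse failures."""
    last_error = None
    for _attempt in range(max_attempts):
        raw = call_model(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as exc:
            last_error = exc  # malformed reply: retry with the same prompt
    raise LLMFormatError(f"no valid JSON after {max_attempts} attempts: {last_error}")
```

Failing loudly after the final attempt lets the route layer translate the error into a clear 5xx instead of returning a half-parsed plan.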
Generated plans are exportable as markdown with sections:
- Feature Goal
- User Stories (with acceptance criteria)
- Engineering Tasks (grouped by category)
- Risks (with severity and mitigation)
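Rendering a plan to markdown might look like this sketch. The section order follows the list above, but the dict field names (`title`, `acceptance_criteria`, `description`, `severity`, `mitigation`) are assumptions about the plan structure, not confirmed by the source:

```python
def plan_to_markdown(plan: dict) -> str:
    """Render a feature plan dict into the documented markdown sections."""
    lines = ["# Feature Goal", "", plan["goal"], "", "## User Stories", ""]
    for story in plan.get("user_stories", []):
        lines.append(f"- {story['title']}")
        for criterion in story.get("acceptance_criteria", []):
            lines.append(f"  - {criterion}")
    lines += ["", "## Engineering Tasks", ""]
    for category, tasks in plan.get("engineering_tasks", {}).items():
        lines.append(f"### {category}")
        lines += [f"- {task}" for task in tasks]
    lines += ["", "## Risks", ""]
    for risk in plan.get("risks", []):
        lines.append(
            f"- {risk['description']} (severity: {risk['severity']}) "
            f"- mitigation: {risk['mitigation']}"
        )
    return "\n".join(lines)
```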
- Validation errors return 400 with detailed messages
- Not found errors return 404
- Server errors return 500 with logging
- LLM failures gracefully handled with retries
- Database connection errors are monitored
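The mapping from failure type to HTTP response can be sketched with plain exceptions; the exception names here are illustrative (in the real routes, FastAPI's `HTTPException` plays this role):

```python
class ValidationFailure(Exception):
    """Bad input from the client."""


class NotFound(Exception):
    """Requested plan does not exist."""


def to_http_response(exc: Exception) -> tuple:
    """Map an exception to a (status_code, body) pair per the rules above."""
    if isinstance(exc, ValidationFailure):
        return 400, {"detail": str(exc)}
    if isinstance(exc, NotFound):
        return 404, {"detail": str(exc)}
    # anything else is logged and reported as a generic server error
    return 500, {"detail": "internal server error"}
```

Hiding the details of unexpected errors behind a generic 500 (while logging the real exception) keeps internals out of API responses.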
This project is provided as-is for production use.
For issues or questions, refer to the inline code documentation.