Aanya18/Task_Genrated

Tasks Generator - Production-Ready Full-Stack Web App

A full-stack web application for generating feature plans, user stories, and engineering tasks with OpenAI's Chat Completion API.

🎯 Features

  • AI-Powered Feature Generation: Uses OpenAI's GPT-4 Turbo to generate:
    • User stories with acceptance criteria
    • Engineering tasks grouped by category (Frontend, Backend, Database, Infrastructure)
    • Risks and mitigation strategies
  • Task Management:
    • Edit and reorder engineering tasks
    • View the last 5 feature plans
    • Export results as markdown
  • System Health Monitoring:
    • Real-time backend status check
    • Database connection verification
    • LLM service connectivity test
  • Production-Ready:
    • Docker containerization
    • Environment variable configuration
    • Comprehensive error handling
    • Structured logging
    • Input validation

📋 Tech Stack

Backend

  • Framework: FastAPI (Python)
  • Database: SQLite with SQLAlchemy ORM
  • API: RESTful with Pydantic validation
  • LLM: OpenAI Chat Completion API
  • Web Server: Uvicorn

Frontend

  • Framework: React 18 with Vite
  • HTTP Client: Axios
  • Styling: CSS3
  • Build Tool: Vite

Deployment

  • Containerization: Docker & Docker Compose
  • Configuration: Environment variables (.env)

🚀 Quick Start

Prerequisites

  • Python 3.11+
  • Node.js 18+
  • OpenAI API key
  • Docker (optional)

Local Development Setup

1. Clone and Setup Backend

# Navigate to backend directory
cd backend

# Create virtual environment
python -m venv venv

# Activate virtual environment
# Windows
venv\Scripts\activate
# macOS/Linux
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Create .env file
cp ../.env.example ../.env
# Edit .env and add your OpenAI API key

2. Setup Frontend

# Navigate to frontend directory
cd frontend

# Install dependencies
npm install

# Create .env file
cp .env.example .env

3. Run Locally

Terminal 1 - Backend:

cd backend
source venv/bin/activate  # or venv\Scripts\activate on Windows
python -m uvicorn app.main:app --reload --port 8000

Terminal 2 - Frontend:

cd frontend
npm run dev

Access the app at http://localhost:5173

Docker Deployment

# Create .env file with your OpenAI API key
cp .env.example .env
# Edit .env and add OPENAI_API_KEY

# Build and run with Docker Compose
docker-compose up --build

# Or build individual containers
docker build -t tasks-generator-backend .
docker build -t tasks-generator-frontend ./frontend -f ./frontend/Dockerfile

Access the app at http://localhost:3000

📚 API Endpoints

Feature Generation

  • POST /api/features/generate - Generate new feature plan
    • Input: goal (string), users (array), constraints (array)
    • Returns: Complete feature plan with stories, tasks, and risks
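
A minimal sketch of a request body for this endpoint; the helper name and example values are illustrative, not part of the codebase:

```python
import json

def build_generate_request(goal, users, constraints):
    """Build the JSON body expected by POST /api/features/generate."""
    if not goal or len(goal) > 500:  # goal is capped at 500 characters (see Data Models)
        raise ValueError("goal must be 1-500 characters")
    return {"goal": goal, "users": list(users), "constraints": list(constraints)}

payload = build_generate_request(
    goal="Let users export reports as PDF",
    users=["analyst", "manager"],
    constraints=["must work offline"],
)
# Send as the POST body with Content-Type: application/json
body = json.dumps(payload)
```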

Feature Retrieval

  • GET /api/features/recent?limit=5 - Get last N feature plans
  • GET /api/features/{planId} - Get specific feature plan
  • PUT /api/features/{planId}/tasks - Update engineering tasks
  • GET /api/features/{planId}/export - Export as markdown
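
Assuming the backend runs on its default local port from the Quick Start section (8000), the retrieval URLs can be assembled like this; the helper names are illustrative:

```python
BASE_URL = "http://localhost:8000/api"  # assumed local default, adjust per deployment

def recent_plans_url(limit=5):
    """URL for the last N feature plans."""
    return f"{BASE_URL}/features/recent?limit={limit}"

def plan_url(plan_id):
    """URL for one feature plan by id."""
    return f"{BASE_URL}/features/{plan_id}"

def export_url(plan_id):
    """URL for the markdown export of a plan."""
    return f"{BASE_URL}/features/{plan_id}/export"
```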

Health & Status

  • GET /api/health/status - System health check
  • GET /api/health/ping - Simple ping endpoint

πŸ“ Project Structure

Task_Generators/
├── backend/
│   ├── app/
│   │   ├── __init__.py
│   │   ├── main.py                 # FastAPI entry point
│   │   ├── config.py               # Settings & env vars
│   │   ├── database.py             # Database setup
│   │   ├── models.py               # SQLAlchemy models
│   │   ├── schemas.py              # Pydantic schemas
│   │   ├── routes/
│   │   │   ├── __init__.py
│   │   │   ├── features.py         # Feature endpoints
│   │   │   └── health.py           # Health check endpoints
│   │   ├── services/
│   │   │   ├── __init__.py
│   │   │   └── feature_service.py  # Business logic
│   │   └── utils/
│   │       ├── __init__.py
│   │       ├── logger.py           # Logging setup
│   │       ├── llm.py              # OpenAI integration
│   │       └── validators.py       # Input validation
│   └── requirements.txt
├── frontend/
│   ├── src/
│   │   ├── pages/
│   │   │   ├── Home.jsx            # Main page
│   │   │   └── Home.css
│   │   ├── components/
│   │   │   ├── FeatureForm.jsx     # Input form
│   │   │   ├── FeatureForm.css
│   │   │   ├── PlanView.jsx        # Plan display & edit
│   │   │   ├── PlanView.css
│   │   │   ├── Health.jsx          # Health status
│   │   │   ├── Health.css
│   │   │   ├── RecentPlans.jsx     # Recent plans list
│   │   │   └── RecentPlans.css
│   │   ├── services/
│   │   │   └── api.js              # API client
│   │   ├── App.jsx
│   │   ├── App.css
│   │   ├── main.jsx
│   │   └── index.css
│   ├── index.html
│   ├── package.json
│   ├── vite.config.js
│   ├── Dockerfile
│   └── .env.example
├── Dockerfile
├── docker-compose.yml
└── .env.example

🔑 Environment Variables

Backend (.env)

OPENAI_API_KEY=sk-your-api-key-here
OPENAI_MODEL=gpt-4-turbo
DATABASE_URL=sqlite:///./tasks_generator.db
DEBUG=False
LOG_LEVEL=INFO
ALLOWED_ORIGINS=http://localhost:5173,http://localhost:3000
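
How config.py consumes these variables is sketched below with plain os.getenv; the real module may use a different mechanism (e.g. pydantic-settings), so treat the class shape as an assumption:

```python
import os

class Settings:
    """Illustrative settings loader mirroring the backend .env keys."""
    def __init__(self, env=os.environ):
        self.openai_api_key = env.get("OPENAI_API_KEY", "")
        self.openai_model = env.get("OPENAI_MODEL", "gpt-4-turbo")
        self.database_url = env.get("DATABASE_URL", "sqlite:///./tasks_generator.db")
        self.debug = env.get("DEBUG", "False").lower() == "true"
        self.log_level = env.get("LOG_LEVEL", "INFO")
        # Comma-separated origins -> Python list for the CORS middleware
        self.allowed_origins = [
            o.strip() for o in env.get("ALLOWED_ORIGINS", "").split(",") if o.strip()
        ]

settings = Settings({"ALLOWED_ORIGINS": "http://localhost:5173,http://localhost:3000"})
```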

Frontend (.env)

VITE_API_BASE_URL=http://localhost:8000/api

🎨 UI Features

  • Clean, responsive design - Works on desktop and tablet
  • Real-time form validation - Immediate user feedback
  • Color-coded task priorities - Quick visual scanning
  • Dark mode health indicator - System status at a glance
  • Markdown export - Share plans easily
  • Recent plans sidebar - Quick access to previous work

πŸ›‘οΈ Production Checklist

  • Input validation on all endpoints
  • Error handling with meaningful messages
  • Structured logging
  • Environment variables for secrets
  • Database migrations ready
  • Health check endpoints
  • CORS configuration
  • Docker support
  • Pydantic schema validation
  • SQLAlchemy ORM with proper sessions

🚦 Health Check System

The system monitors three critical components:

  1. Backend Service - Application is running
  2. Database Connection - SQLite/database is accessible
  3. LLM Service - OpenAI API is reachable

Status is shown in the UI with real-time updates every 30 seconds.
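
A sketch of how the three checks might be folded into one status payload; the field names are illustrative, and the actual response shape of /api/health/status may differ:

```python
def aggregate_health(backend_ok, database_ok, llm_ok):
    """Combine the three component checks into one status payload."""
    components = {
        "backend": "up" if backend_ok else "down",
        "database": "up" if database_ok else "down",
        "llm": "up" if llm_ok else "down",
    }
    # Overall status degrades if any single component is unreachable
    overall = "healthy" if all([backend_ok, database_ok, llm_ok]) else "degraded"
    return {"status": overall, "components": components}
```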

📊 Data Models

FeaturePlan

  • id: Integer (Primary Key)
  • goal: String (500 chars max)
  • users: JSON array
  • constraints: JSON array
  • user_stories: JSON array with acceptance criteria
  • engineering_tasks: JSON object grouped by category
  • risks: JSON array with mitigations
  • created_at: DateTime
  • updated_at: DateTime
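
The stored model is a SQLAlchemy table, but its shape can be illustrated with a plain dataclass; this is an approximation, not the actual models.py code:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FeaturePlan:
    """Approximate shape of the FeaturePlan record."""
    id: int
    goal: str                                              # capped at 500 characters
    users: list = field(default_factory=list)
    constraints: list = field(default_factory=list)
    user_stories: list = field(default_factory=list)       # each with acceptance criteria
    engineering_tasks: dict = field(default_factory=dict)  # keyed by category
    risks: list = field(default_factory=list)              # each with a mitigation
    created_at: datetime = field(default_factory=datetime.utcnow)
    updated_at: datetime = field(default_factory=datetime.utcnow)
```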

🔄 LLM Integration

The system uses OpenAI's Chat Completion API with:

  • Model: GPT-4 Turbo (configurable)
  • Role: Senior Product Manager
  • Output: Strict JSON format with validation
  • Retry Logic: Up to 3 attempts for JSON parsing failures
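
The retry behaviour can be sketched as follows; call_llm stands in for the OpenAI client call and is an assumption, not the project's actual llm.py interface:

```python
import json

def generate_with_retries(call_llm, prompt, max_attempts=3):
    """Call the model and retry when the reply is not valid JSON."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        reply = call_llm(prompt)
        try:
            return json.loads(reply)
        except json.JSONDecodeError as exc:
            last_error = exc  # keep the last parse failure for the final error
    raise ValueError(f"no valid JSON after {max_attempts} attempts") from last_error

# Stub standing in for the OpenAI call: fails once, then returns JSON.
replies = iter(["not json", '{"user_stories": []}'])
plan = generate_with_retries(lambda prompt: next(replies), "Plan a feature")
```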

πŸ“ Export Format

Generated plans are exportable as markdown with sections:

  • Feature Goal
  • User Stories (with acceptance criteria)
  • Engineering Tasks (grouped by category)
  • Risks (with severity and mitigation)
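
A sketch of the export logic under assumed key names; the real exporter's field names and layout may differ:

```python
def plan_to_markdown(plan):
    """Render a feature-plan dict into the four exported sections."""
    lines = ["# Feature Goal", "", plan["goal"], "", "## User Stories"]
    for story in plan.get("user_stories", []):
        lines.append(f"- {story['title']}")
        lines.extend(f"  - {c}" for c in story.get("acceptance_criteria", []))
    lines.append("")
    lines.append("## Engineering Tasks")
    for category, tasks in plan.get("engineering_tasks", {}).items():
        lines.append(f"### {category}")
        lines.extend(f"- {t}" for t in tasks)
    lines.append("")
    lines.append("## Risks")
    for r in plan.get("risks", []):
        lines.append(f"- {r['risk']} (severity: {r['severity']}; mitigation: {r['mitigation']})")
    return "\n".join(lines)
```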

πŸ› Error Handling

  • Validation errors return 400 with detailed messages
  • Not found errors return 404
  • Server errors return 500 with logging
  • LLM failures gracefully handled with retries
  • Database connection errors are monitored
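
The mapping from failure type to HTTP status can be sketched as a plain function; in the FastAPI app this would live in exception handlers, and the exception choices here are illustrative:

```python
def error_response(exc):
    """Map an exception to an (HTTP status, message) pair."""
    if isinstance(exc, ValueError):      # input validation failure
        return 400, str(exc)
    if isinstance(exc, LookupError):     # e.g. unknown plan id (illustrative)
        return 404, "Feature plan not found"
    return 500, "Internal server error"  # details go to the structured log, not the client

status, message = error_response(ValueError("goal is required"))
```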

📜 License

This project is provided as-is for production use.

📧 Support

For issues or questions, refer to the inline code documentation.
