diff --git a/.dockerignore b/.dockerignore new file mode 100644 index 0000000..9c1186e --- /dev/null +++ b/.dockerignore @@ -0,0 +1,65 @@ +# Git +.git +.gitignore + +# Documentation +documentation/ +README.md +*.md + +# Environment files +.env +.env.local +.env.*.local + +# Node.js (if any frontend dependencies exist) +node_modules/ +npm-debug.log* +yarn-debug.log* +yarn-error.log* + +# Python cache +__pycache__/ +*.py[cod] +*$py.class +*.so +.Python +build/ +develop-eggs/ +dist/ +downloads/ +eggs/ +.eggs/ +lib/ +lib64/ +parts/ +sdist/ +var/ +wheels/ +*.egg-info/ +.installed.cfg +*.egg + +# IDE files +.vscode/ +.idea/ +*.swp +*.swo +*~ + +# OS files +.DS_Store +.DS_Store? +._* +.Spotlight-V100 +.Trashes +ehthumbs.db +Thumbs.db + +# Logs +logs +*.log + +# Docker +docker-compose.override.yml +Dockerfile.dev \ No newline at end of file diff --git a/.env.example b/.env.example index 3026cf1..0f5998f 100644 --- a/.env.example +++ b/.env.example @@ -1,14 +1,36 @@ -# Clerk Authentication -NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_test_xxxxxx -CLERK_SECRET_KEY=sk_test_xxxxxx +# Database Configuration +POSTGRES_DB=dms_db +POSTGRES_USER=dms_user +POSTGRES_PASSWORD=dms_password +DATABASE_URL=postgresql://dms_user:dms_password@database:5432/dms_db -# Supabase -NEXT_PUBLIC_SUPABASE_URL=your_project_url -NEXT_PUBLIC_SUPABASE_ANON_KEY=your_anon_key +# Redis Configuration +REDIS_URL=redis://cache:6379/0 +REDIS_HOST=cache +REDIS_PORT=6379 -# Stripe (Optional) -STRIPE_SECRET_KEY=sk_test_xxxxx -NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_test_xxxxx +# Celery Configuration +CELERY_BROKER_URL=redis://cache:6379/0 +CELERY_RESULT_BACKEND=redis://cache:6379/0 -# OpenAI -OPENAI_API_KEY=sk_test_xxxxx +# Frontend (Laravel) Configuration +APP_ENV=local +APP_DEBUG=true +APP_URL=http://localhost:8000 +DB_HOST=database +DB_PORT=5432 +DB_DATABASE=dms_db +DB_USERNAME=dms_user +DB_PASSWORD=dms_password + +# Backend API Configuration +BACKEND_URL=http://backend:8000 +API_HOST=0.0.0.0 +API_PORT=8000 + +# Shared Storage Paths +INPUT_FOLDER_PATH=/app/shared/input +OUTPUT_FOLDER_PATH=/app/shared/output + +# Docker Compose Project Name +COMPOSE_PROJECT_NAME=dms diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 0000000..641a901 --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,64 @@ +# Codespace Task Management Guide + +## Documentation Available + +📚 **Project Documentation**: Check the documentation files in this directory for project-specific setup instructions and guides. +**Project Tasks**: Check the tasks directory in documentation/tasks for the list of tasks to be completed. Use the CLI commands below to interact with them. 
+
+## MANDATORY Task Management Workflow
+
+🚨 **YOU MUST FOLLOW THIS EXACT WORKFLOW - NO EXCEPTIONS** 🚨
+
+### **STEP 1: DISCOVER TASKS (MANDATORY)**
+You MUST start by running this command to see all available tasks:
+```bash
+task-manager list-tasks
+```
+
+### **STEP 2: START EACH TASK (MANDATORY)**
+Before working on any task, you MUST mark it as started:
+```bash
+task-manager start-task <task-id>
+```
+
+### **STEP 3: COMPLETE OR CANCEL EACH TASK (MANDATORY)**
+After finishing implementation, you MUST mark the task as completed, or cancel it if you cannot complete it:
+```bash
+task-manager complete-task "Brief description of what was implemented"
+# or
+task-manager cancel-task "Reason for cancellation"
+```
+
+## Task Files Location
+
+📁 **Task Data**: Your tasks are organized in the `documentation/tasks/` directory:
+- Task JSON files contain complete task information
+- Use ONLY the `task-manager` commands listed above
+- Follow the mandatory workflow sequence for each task
+
+## MANDATORY Task Workflow Sequence
+
+🔄 **For EACH individual task, you MUST follow this sequence:**
+
+1. 📋 **DISCOVER**: `task-manager list-tasks` (first time only)
+2. 🚀 **START**: `task-manager start-task <task-id>` (mark as in progress)
+3. 💻 **IMPLEMENT**: Do the actual coding/implementation work
+4. ✅ **COMPLETE**: `task-manager complete-task "What was done"` (or cancel with `task-manager cancel-task "Reason"`)
+5. 🔁 **REPEAT**: Go to next task (start from step 2)
+
+## Task Status Options
+
+- `pending` - Ready to work on
+- `in_progress` - Currently being worked on
+- `completed` - Successfully finished
+- `blocked` - Cannot proceed (waiting for dependencies)
+- `cancelled` - No longer needed
+
+## CRITICAL WORKFLOW RULES
+
+❌ **NEVER skip** the `task-manager start-task` command
+❌ **NEVER skip** the `task-manager complete-task` command (use `task-manager cancel-task` if a task is not planned, not required, or you must stop it)
+❌ **NEVER work on multiple tasks simultaneously**
+✅ **ALWAYS complete one task fully before starting the next**
+✅ **ALWAYS provide completion details in the complete command**
+✅ **ALWAYS follow the exact 3-step sequence: list → start → complete (or cancel if not required)**
\ No newline at end of file
diff --git a/README.md b/README.md
index 2d5f6a4..4d331ad 100644
--- a/README.md
+++ b/README.md
@@ -1,145 +1,241 @@
-[![CodeGuide](/codeguide-backdrop.svg)](https://codeguide.dev)
+# Document Management System (DMS)
+A multi-container document management system built with Docker Compose, featuring a Laravel/Filament frontend and a FastAPI backend with Celery task processing.
-# CodeGuide Starter Pro
+## Architecture
-A modern web application starter template built with Next.js 14, featuring authentication, database integration, and payment processing capabilities.
+This system simulates a document processing workflow that could be used with Synology NAS file shares: -## Tech Stack +- **Frontend**: Laravel with Filament v3 admin panel (Port 8000) +- **Backend**: FastAPI application (Port 8001) +- **Worker**: Celery task processor for async operations like OCR +- **Database**: PostgreSQL for data persistence +- **Cache**: Redis for caching and Celery message broker +- **Shared Storage**: Simulated NAS file shares via Docker volumes -- **Framework:** [Next.js 14](https://nextjs.org/) (App Router) -- **Authentication:** [Clerk](https://clerk.com/) -- **Database:** [Supabase](https://supabase.com/) -- **Styling:** [Tailwind CSS](https://tailwindcss.com/) -- **Payments:** [Stripe](https://stripe.com/) -- **UI Components:** [shadcn/ui](https://ui.shadcn.com/) +## Quick Start -## Prerequisites +### Prerequisites +- Docker and Docker Compose installed +- Git -Before you begin, ensure you have the following: -- Node.js 18+ installed -- A [Clerk](https://clerk.com/) account for authentication -- A [Supabase](https://supabase.com/) account for database -- A [Stripe](https://stripe.com/) account for payments (optional) -- Generated project documents from [CodeGuide](https://codeguide.dev/) for best development experience +### Setup -## Getting Started - -1. **Clone the repository** +1. **Clone and setup environment:** ```bash git clone - cd codeguide-starter-pro + cd document-management-system + cp .env.example .env ``` -2. **Install dependencies** +2. **Build and start all services:** ```bash - npm install - # or - yarn install - # or - pnpm install + docker-compose up --build -d ``` -3. **Environment Variables Setup** - - Copy the `.env.example` file to `.env`: - ```bash - cp .env.example .env - ``` - - Fill in the environment variables in `.env` (see Configuration section below) +3. **Check service status:** + ```bash + docker-compose ps + ``` -4. **Start the development server** +4. **View logs:** ```bash - npm run dev - # or - yarn dev - # or - pnpm dev + docker-compose logs -f ``` -5. **Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.** +### Access Points -## Configuration +- **Frontend (Laravel/Filament)**: http://localhost:8000 +- **Backend API**: http://localhost:8001 +- **API Documentation**: http://localhost:8001/docs +- **Database**: localhost:5432 +- **Redis**: localhost:6379 -### Clerk Setup -1. Go to [Clerk Dashboard](https://dashboard.clerk.com/) -2. Create a new application -3. Go to API Keys -4. Copy the `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY` and `CLERK_SECRET_KEY` +## Development Commands -### Supabase Setup -1. Go to [Supabase Dashboard](https://app.supabase.com/) -2. Create a new project -3. Go to Project Settings > API -4. Copy the `Project URL` as `NEXT_PUBLIC_SUPABASE_URL` -5. Copy the `anon` public key as `NEXT_PUBLIC_SUPABASE_ANON_KEY` +### Docker Compose Commands -### Stripe Setup (Optional) -1. Go to [Stripe Dashboard](https://dashboard.stripe.com/) -2. Get your API keys from the Developers section -3. 
Add the required keys to your `.env` file +```bash +# Build all services +docker-compose build -## Environment Variables +# Start all services in background +docker-compose up -d + +# Start with logs +docker-compose up -Create a `.env` file in the root directory with the following variables: +# Stop all services +docker-compose down -```env -# Clerk Authentication -NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=your_publishable_key -CLERK_SECRET_KEY=your_secret_key +# Stop and remove volumes +docker-compose down -v -# Supabase -NEXT_PUBLIC_SUPABASE_URL=your_supabase_url -NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key +# Rebuild specific service +docker-compose build backend -# Stripe (Optional) -STRIPE_SECRET_KEY=your_stripe_secret_key -NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=your_stripe_publishable_key +# View logs for specific service +docker-compose logs -f backend ``` -## Features +### Service Management -- 🔐 Authentication with Clerk -- 📦 Supabase Database -- 💳 Stripe Payments Integration -- 🎨 Modern UI with Tailwind CSS -- 🚀 App Router Ready -- 🔄 Real-time Updates -- 📱 Responsive Design +```bash +# Access backend container +docker-compose exec backend bash -## Project Structure +# Access frontend container +docker-compose exec frontend bash +# Access database +docker-compose exec database psql -U dms_user -d dms_db + +# Access Redis +docker-compose exec cache redis-cli ``` -codeguide-starter/ -├── app/ # Next.js app router pages -├── components/ # React components -├── utils/ # Utility functions -├── public/ # Static assets -├── styles/ # Global styles -├── documentation/ # Generated documentation from CodeGuide -└── supabase/ # Supabase configurations and migrations -``` -## Documentation Setup +## Shared Storage + +The system uses Docker named volumes to simulate NAS file shares: + +- **input_folder**: `/app/shared/input` - Drop scanned PDFs here +- **output_folder**: `/app/shared/output` - Processed files appear here -To implement the generated documentation from CodeGuide: +### Adding Files for Processing + +1. Copy files to the input volume: + ```bash + docker cp document.pdf dms-backend:/app/shared/input/ + ``` -1. Create a `documentation` folder in the root directory: +2. Process via API: ```bash - mkdir documentation + curl -X POST http://localhost:8001/process/document.pdf ``` -2. Place all generated markdown files from CodeGuide in this directory: +3. Check results: ```bash - # Example structure - documentation/ - ├── project_requirements_document.md - ├── app_flow_document.md - ├── frontend_guideline_document.md - └── backend_structure_document.md + curl http://localhost:8001/files/output ``` -3. These documentation files will be automatically tracked by git and can be used as a reference for your project's features and implementation details. 
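+The same flow can be scripted instead of typed as raw `curl` commands. Below is a minimal client sketch in Python using `requests` (already pinned in `dms/backend/requirements.txt`); the filename `document.pdf` and the two-second poll interval are illustrative assumptions:
+
+```python
+import time
+
+import requests
+
+BASE_URL = "http://localhost:8001"  # backend port as mapped in docker-compose.yml
+
+# Queue a file that is already present in the input volume
+response = requests.post(f"{BASE_URL}/process/document.pdf")
+response.raise_for_status()
+task_id = response.json()["task_id"]
+
+# Poll the task endpoint until Celery reports a terminal state
+while True:
+    status = requests.get(f"{BASE_URL}/task/{task_id}").json()
+    if status["status"] in ("SUCCESS", "FAILURE"):
+        print(status["result"])
+        break
+    time.sleep(2)
+```
+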
+## API Endpoints + +### Backend API (FastAPI) + +- `GET /` - Root endpoint +- `GET /health` - Health check +- `GET /files/input` - List input files +- `GET /files/output` - List output files +- `POST /process/{filename}` - Process a file +- `GET /task/{task_id}` - Check task status +- `GET /docs` - Interactive API documentation + +### Celery Tasks + +Available background tasks: +- `process_document_ocr` - OCR processing +- `rename_document_based_on_content` - Smart renaming +- `batch_process_documents` - Process all input files + +## Environment Variables + +Key environment variables in `.env`: + +```bash +# Database +POSTGRES_DB=dms_db +POSTGRES_USER=dms_user +POSTGRES_PASSWORD=dms_password + +# Redis/Celery +REDIS_URL=redis://cache:6379/0 +CELERY_BROKER_URL=redis://cache:6379/0 + +# Storage paths +INPUT_FOLDER_PATH=/app/shared/input +OUTPUT_FOLDER_PATH=/app/shared/output +``` + +## Service Dependencies + +``` +database (PostgreSQL) + ↓ +cache (Redis) + ↓ +backend (FastAPI) → worker (Celery) + ↓ +frontend (Laravel/Filament) +``` + +## Development Workflow + +1. **File Upload**: Place PDFs in input folder +2. **API Request**: Frontend calls backend to process files +3. **Task Queue**: Backend queues OCR task to Celery +4. **Processing**: Worker performs OCR and file operations +5. **Results**: Processed files appear in output folder +6. **Notification**: Frontend can check task status and display results + +## Monitoring + +### Health Checks + +- **Backend**: http://localhost:8001/health +- **Database**: Automatic Docker health check +- **Redis**: Automatic Docker health check + +### Logs + +Monitor different services: +```bash +# Backend API logs +docker-compose logs -f backend + +# Worker processing logs +docker-compose logs -f worker + +# Frontend web server logs +docker-compose logs -f frontend +``` + +## Production Considerations + +For production deployment: + +1. **Security**: Change default passwords and credentials +2. **Persistence**: Ensure proper volume backup strategies +3. **Scaling**: Consider multiple worker instances +4. **Monitoring**: Add proper logging and monitoring +5. **SSL**: Configure HTTPS termination +6. **Network**: Use private networks where appropriate + +## Troubleshooting + +### Common Issues + +1. **Services not starting**: Check `docker-compose logs` for errors +2. **Database connection**: Ensure database is healthy first +3. **File permissions**: Check volume mount permissions +4. **Port conflicts**: Ensure ports 8000, 8001, 5432, 6379 are available + +### Reset System + +To completely reset the system: +```bash +docker-compose down -v +docker system prune -f +docker-compose up --build -d +``` ## Contributing -Contributions are welcome! Please feel free to submit a Pull Request. +1. Fork the repository +2. Create a feature branch +3. Make your changes +4. Test with `docker-compose up --build` +5. Submit a pull request + +## License + +This project is licensed under the MIT License. 
diff --git a/dms/backend/.dockerignore b/dms/backend/.dockerignore new file mode 100644 index 0000000..1da9819 --- /dev/null +++ b/dms/backend/.dockerignore @@ -0,0 +1,23 @@ +__pycache__ +*.pyc +*.pyo +*.pyd +.Python +env +pip-log.txt +pip-delete-this-directory.txt +.tox +.coverage +.coverage.* +.cache +nosetests.xml +coverage.xml +*.cover +*.log +.git +.mypy_cache +.pytest_cache +.hypothesis +.DS_Store +.vscode +.idea \ No newline at end of file diff --git a/dms/backend/Dockerfile b/dms/backend/Dockerfile new file mode 100644 index 0000000..261d7db --- /dev/null +++ b/dms/backend/Dockerfile @@ -0,0 +1,43 @@ +# Python-based Backend Service Dockerfile (Shared by FastAPI and Celery Worker) +FROM python:3.11-slim-bookworm + +# Set environment variables +ENV PYTHONDONTWRITEBYTECODE=1 \ + PYTHONUNBUFFERED=1 \ + TZ=UTC + +# Install system dependencies +RUN apt-get update && apt-get install -y \ + gcc \ + g++ \ + libpq-dev \ + curl \ + postgresql-client \ + && apt-get clean \ + && rm -rf /var/lib/apt/lists/* + +# Set work directory +WORKDIR /app + +# Create non-root user +RUN groupadd -r appuser && useradd -r -g appuser appuser + +# Install Python dependencies +COPY requirements.txt . +RUN pip install --no-cache-dir -r requirements.txt + +# Copy application code +COPY . . + +# Create necessary directories with proper permissions +RUN mkdir -p /app/shared/input /app/shared/output \ + && chown -R appuser:appuser /app /app/shared + +# Switch to non-root user +USER appuser + +# Expose port for FastAPI +EXPOSE 8000 + +# Default command for FastAPI server +CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"] \ No newline at end of file diff --git a/dms/backend/celery_app.py b/dms/backend/celery_app.py new file mode 100644 index 0000000..a811ec9 --- /dev/null +++ b/dms/backend/celery_app.py @@ -0,0 +1,24 @@ +from celery import Celery +import os + +# Create Celery instance +celery_app = Celery( + "dms_worker", + broker=os.getenv("CELERY_BROKER_URL", "redis://cache:6379/0"), + backend=os.getenv("CELERY_RESULT_BACKEND", "redis://cache:6379/0"), + include=["tasks"] +) + +# Configure Celery +celery_app.conf.update( + task_serializer="json", + accept_content=["json"], + result_serializer="json", + timezone="UTC", + enable_utc=True, + task_track_started=True, + task_time_limit=30 * 60, # 30 minutes + task_soft_time_limit=25 * 60, # 25 minutes + worker_prefetch_multiplier=1, + worker_max_tasks_per_child=1000, +) \ No newline at end of file diff --git a/dms/backend/main.py b/dms/backend/main.py new file mode 100644 index 0000000..309f5d4 --- /dev/null +++ b/dms/backend/main.py @@ -0,0 +1,109 @@ +from fastapi import FastAPI, HTTPException +from fastapi.responses import JSONResponse +import os +import redis +from celery import Celery +import uvicorn + +# Initialize FastAPI app +app = FastAPI( + title="Document Management System API", + description="FastAPI backend for document processing and management", + version="1.0.0" +) + +# Celery configuration +celery_app = Celery( + "worker", + broker=os.getenv("CELERY_BROKER_URL", "redis://cache:6379/0"), + backend=os.getenv("CELERY_RESULT_BACKEND", "redis://cache:6379/0") +) + +# Redis client +redis_client = redis.Redis.from_url(os.getenv("REDIS_URL", "redis://cache:6379/0")) + +@app.get("/") +async def root(): + """Root endpoint""" + return {"message": "Document Management System API", "version": "1.0.0"} + +@app.get("/health") +async def health_check(): + """Health check endpoint for Docker health checks""" + try: + # Test Redis connection + 
redis_client.ping()
+
+        # Test Celery connection (inspect().stats() returns None when no workers reply)
+        if not celery_app.control.inspect().stats():
+            raise RuntimeError("No Celery workers responding")
+
+        return {
+            "status": "healthy",
+            "redis": "connected",
+            "celery": "connected"
+        }
+    except Exception as e:
+        raise HTTPException(status_code=503, detail=f"Service unhealthy: {str(e)}")
+
+@app.get("/files/input")
+async def list_input_files():
+    """List files in the input directory"""
+    input_path = os.getenv("INPUT_FOLDER_PATH", "/app/shared/input")
+    try:
+        files = os.listdir(input_path)
+        return {"input_files": files, "count": len(files)}
+    except Exception as e:
+        raise HTTPException(status_code=500, detail=f"Error reading input directory: {str(e)}")
+
+@app.get("/files/output")
+async def list_output_files():
+    """List files in the output directory"""
+    output_path = os.getenv("OUTPUT_FOLDER_PATH", "/app/shared/output")
+    try:
+        files = os.listdir(output_path)
+        return {"output_files": files, "count": len(files)}
+    except Exception as e:
+        raise HTTPException(status_code=500, detail=f"Error reading output directory: {str(e)}")
+
+# Import the shared OCR task from tasks.py so queued jobs carry the task name
+# ("tasks.process_document_ocr") that the Celery worker actually registers
+from tasks import process_document_ocr
+
+@app.post("/process/{filename}")
+async def process_file(filename: str):
+    """Trigger document processing via Celery"""
+    input_path = os.getenv("INPUT_FOLDER_PATH", "/app/shared/input")
+    file_path = os.path.join(input_path, filename)
+
+    if not os.path.exists(file_path):
+        raise HTTPException(status_code=404, detail=f"File {filename} not found in input directory")
+
+    # Queue the OCR processing task
+    task = process_document_ocr.delay(file_path)
+
+    return {
+        "message": f"File {filename} queued for processing",
+        "task_id": task.id,
+        "status": "queued"
+    }
+
+@app.get("/task/{task_id}")
+async def get_task_status(task_id: str):
+    """Check status of a Celery task"""
+    result = celery_app.AsyncResult(task_id)
+
+    return {
+        "task_id": task_id,
+        "status": result.status,
+        "result": result.result if result.ready() else None
+    }
+
+if __name__ == "__main__":
+    uvicorn.run(app, host="0.0.0.0", port=8000)
\ No newline at end of file
diff --git a/dms/backend/requirements.txt b/dms/backend/requirements.txt
new file mode 100644
index 0000000..256cfe4
--- /dev/null
+++ b/dms/backend/requirements.txt
@@ -0,0 +1,30 @@
+# FastAPI and ASGI Server
+fastapi==0.104.1
+uvicorn[standard]==0.24.0
+
+# Celery for Task Processing
+celery==5.3.4
+redis==5.0.1
+
+# Database
+psycopg2-binary==2.9.9
+alembic==1.12.1
+sqlalchemy==2.0.23
+
+# HTTP Client
+httpx==0.25.2
+requests==2.31.0
+
+# File Processing (for OCR and document handling)
+PyPDF2==3.0.1
+Pillow==10.1.0
+python-multipart==0.0.6
+
+# Utilities
+python-dotenv==1.0.0
+pydantic==2.5.0
+pydantic-settings==2.1.0
+
+# Development and Debugging
+pytest==7.4.3
+pytest-asyncio==0.21.1
\ No newline at end of file
diff --git a/dms/backend/tasks.py b/dms/backend/tasks.py
new file mode 100644
index 0000000..d77b522
--- /dev/null
+++ b/dms/backend/tasks.py
@@ -0,0 +1,152 @@
+from celery_app import celery_app
+import os
+import shutil
+from typing import Dict, Any
+
+@celery_app.task
+def process_document_ocr(file_path: str) -> Dict[str, Any]:
+    """
+    Process a document through OCR and save the results.
+ + Args: + file_path: Path to the file to process + + Returns: + Dictionary with processing results + """ + try: + input_path = os.getenv("INPUT_FOLDER_PATH", "/app/shared/input") + output_path = os.getenv("OUTPUT_FOLDER_PATH", "/app/shared/output") + + filename = os.path.basename(file_path) + + # Check if file exists + if not os.path.exists(file_path): + return { + "error": f"File not found: {file_path}", + "status": "failed" + } + + # Simulate OCR processing (in real implementation, use OCR libraries) + # This is where you would integrate with Tesseract, PaddleOCR, etc. + processed_text = f"OCR processed text for {filename}" + + # Create output filename (could be renamed based on content) + output_filename = f"processed_{filename.replace('.pdf', '.txt')}" + output_file_path = os.path.join(output_path, output_filename) + + # Write processed text to output file + with open(output_file_path, 'w', encoding='utf-8') as f: + f.write(processed_text) + + # Optionally move original file to processed folder + # processed_folder = os.path.join(input_path, "processed") + # os.makedirs(processed_folder, exist_ok=True) + # shutil.move(file_path, os.path.join(processed_folder, filename)) + + return { + "file": filename, + "status": "completed", + "output_file": output_filename, + "extracted_text": processed_text[:200] + "..." if len(processed_text) > 200 else processed_text, + "message": "Document processed successfully" + } + + except Exception as e: + return { + "file": os.path.basename(file_path) if file_path else "unknown", + "status": "failed", + "error": str(e), + "message": "Document processing failed" + } + +@celery_app.task +def rename_document_based_on_content(file_path: str, extracted_name: str) -> Dict[str, Any]: + """ + Rename a document based on extracted content. + + Args: + file_path: Original file path + extracted_name: New name based on content analysis + + Returns: + Dictionary with renaming results + """ + try: + input_path = os.getenv("INPUT_FOLDER_PATH", "/app/shared/input") + output_path = os.getenv("OUTPUT_FOLDER_PATH", "/app/shared/output") + + filename = os.path.basename(file_path) + file_extension = os.path.splitext(filename)[1] + + # Create new filename + new_filename = f"{extracted_name}{file_extension}" + new_file_path = os.path.join(output_path, new_filename) + + # Copy file to output with new name + shutil.copy2(file_path, new_file_path) + + return { + "original_file": filename, + "new_file": new_filename, + "status": "completed", + "message": f"Document renamed from {filename} to {new_filename}" + } + + except Exception as e: + return { + "original_file": os.path.basename(file_path) if file_path else "unknown", + "status": "failed", + "error": str(e), + "message": "Document renaming failed" + } + +@celery_app.task +def batch_process_documents() -> Dict[str, Any]: + """ + Process all documents in the input folder. 
+
+    Returns:
+        Dictionary with batch processing results
+    """
+    try:
+        input_path = os.getenv("INPUT_FOLDER_PATH", "/app/shared/input")
+
+        if not os.path.exists(input_path):
+            return {
+                "status": "failed",
+                "error": f"Input directory not found: {input_path}"
+            }
+
+        # Get list of files to process
+        files = [f for f in os.listdir(input_path)
+                if os.path.isfile(os.path.join(input_path, f)) and not f.startswith('.')]
+
+        if not files:
+            return {
+                "status": "completed",
+                "files_found": 0,
+                "message": "No files to process"
+            }
+
+        # Queue individual processing tasks
+        task_ids = []
+        for filename in files:
+            file_path = os.path.join(input_path, filename)
+            task = process_document_ocr.delay(file_path)
+            task_ids.append(task.id)
+
+        return {
+            "status": "queued",
+            "files_found": len(files),
+            "task_ids": task_ids,
+            "message": f"Queued {len(files)} files for processing"
+        }
+
+    except Exception as e:
+        return {
+            "status": "failed",
+            "error": str(e),
+            "message": "Batch processing failed"
+        }
\ No newline at end of file
diff --git a/dms/frontend/.dockerignore b/dms/frontend/.dockerignore
new file mode 100644
index 0000000..152c543
--- /dev/null
+++ b/dms/frontend/.dockerignore
@@ -0,0 +1,16 @@
+node_modules
+npm-debug.log
+yarn-error.log
+.DS_Store
+.env.local
+.env.development.local
+.env.test.local
+.env.production.local
+.phpunit.result.cache
+vendor
+storage/logs
+storage/framework/cache
+storage/framework/sessions
+storage/framework/views
+bootstrap/cache
+*.log
\ No newline at end of file
diff --git a/dms/frontend/.env.example b/dms/frontend/.env.example
new file mode 100644
index 0000000..e69de29
diff --git a/dms/frontend/Dockerfile b/dms/frontend/Dockerfile
new file mode 100644
index 0000000..5658136
--- /dev/null
+++ b/dms/frontend/Dockerfile
@@ -0,0 +1,111 @@
+# Multi-stage Dockerfile for Laravel Frontend with Filament v3
+FROM debian:bookworm-slim as base
+
+# Set environment variables
+ENV DEBIAN_FRONTEND=noninteractive
+ENV TZ=UTC
+ENV APACHE_RUN_USER=www-data
+ENV APACHE_RUN_GROUP=www-data
+ENV APACHE_LOG_DIR=/var/log/apache2
+
+# Install system dependencies
+RUN apt-get update && apt-get install -y \
+    apache2 \
+    libapache2-mod-php8.2 \
+    php8.2 \
+    php8.2-cli \
+    php8.2-fpm \
+    php8.2-pdo \
+    php8.2-pgsql \
+    php8.2-redis \
+    php8.2-xml \
+    php8.2-dom \
+    php8.2-mbstring \
+    php8.2-tokenizer \
+    php8.2-bcmath \
+    php8.2-curl \
+    php8.2-zip \
+    php8.2-gd \
+    php8.2-intl \
+    curl \
+    wget \
+    gnupg \
+    ca-certificates \
+    lsb-release \
+    unzip \
+    git \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/*
+
+# Install Node.js 18.x
+RUN curl -fsSL https://deb.nodesource.com/setup_18.x | bash - \
+    && apt-get install -y nodejs \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/*
+
+# Install Composer
+RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
+
+# Configure Apache
+RUN a2enmod rewrite \
+    && a2enmod headers \
+    && a2enmod proxy \
+    && a2enmod proxy_http \
+    && a2enmod proxy_fcgi \
+    && a2dissite 000-default.conf
+
+# Create Apache configuration for Laravel
+COPY <<EOF /etc/apache2/sites-available/laravel.conf
+<VirtualHost *:8000>
+    ServerName localhost
+    DocumentRoot /var/www/html/public
+
+    <Directory /var/www/html/public>
+        Options -Indexes +FollowSymLinks
+        AllowOverride All
+        Require all granted
+    </Directory>
+
+    ErrorLog \${APACHE_LOG_DIR}/error.log
+    CustomLog \${APACHE_LOG_DIR}/access.log combined
+
+    # Proxy API requests to backend service
+    ProxyPreserveHost On
+    ProxyPass /api/ http://backend:8000/
+    ProxyPassReverse /api/ http://backend:8000/
+</VirtualHost>
+EOF
+
+# Listen on port 8000 to match the VirtualHost (Debian's default is 80)
+RUN sed -i 's/^Listen 80$/Listen 8000/' /etc/apache2/ports.conf
+
+# Enable Laravel site
+RUN a2ensite laravel.conf
+
+# Set working directory
+WORKDIR /var/www/html
+
+# Create www-data user home directory for Composer cache
+RUN mkdir -p /var/www/.composer && chown -R www-data:www-data /var/www/.composer
+
+# Set proper permissions
+RUN chown -R www-data:www-data /var/www/html
+
+# Switch to non-root user
+USER www-data
+
+# Copy Laravel application files (placeholder structure will be created later)
+COPY --chown=www-data:www-data . /var/www/html
+
+# Install Composer dependencies (when composer.json exists)
+# RUN composer install --no-interaction --prefer-dist --optimize-autoloader
+
+# Install Node.js dependencies (when package.json exists)
+# RUN npm install
+# RUN npm run build
+
+# Set permissions for Laravel storage and cache
+# RUN chmod -R 775 storage bootstrap/cache
+
+# Expose port
+EXPOSE 8000
+
+# Start Apache in foreground (apachectl ships with Debian's apache2 package)
+CMD ["apachectl", "-D", "FOREGROUND"]
\ No newline at end of file
diff --git a/dms/frontend/README.md b/dms/frontend/README.md
new file mode 100644
index 0000000..e69de29
diff --git a/dms/frontend/public/index.php b/dms/frontend/public/index.php
new file mode 100644
index 0000000..ccd817b
--- /dev/null
+++ b/dms/frontend/public/index.php
@@ -0,0 +1,62 @@
+<?php
+
+/**
+ * Laravel - A PHP Framework For Web Artisans
+ *
+ * @package  Laravel
+ * @author   Taylor Otwell <taylor@laravel.com>
+ */
+
+define('LARAVEL_START', microtime(true));
+
+/*
+|--------------------------------------------------------------------------
+| Check If The Application Is Under Maintenance
+|--------------------------------------------------------------------------
+|
+| If the application is in maintenance / demo mode via the "down" command we
+| will load this view so that the end user knows the app is down for
+| maintenance. This file provides a convenient way to customize the
+| maintenance mode template for your application.
+|
+*/
+
+if (file_exists(__DIR__.'/../storage/framework/maintenance.php')) {
+    require __DIR__.'/../storage/framework/maintenance.php';
+}
+
+/*
+|--------------------------------------------------------------------------
+| Register The Auto Loader
+|--------------------------------------------------------------------------
+|
+| Composer provides a convenient, automatically generated class loader
+| for our application. We just need to utilize it! We'll require it
+| into the script here so that we do not have to worry about the
+| loading of any of our classes "manually". Feels great to relax.
+|
+*/
+
+require __DIR__.'/../vendor/autoload.php';
+
+/*
+|--------------------------------------------------------------------------
+| Run The Application
+|--------------------------------------------------------------------------
+|
+| Once we have the application, we can handle the incoming request
+| through the kernel, and send the associated response back to
+| the client's browser allowing them to enjoy the creative
+| and wonderful application we have prepared for them.
+| +*/ + +$app = require_once __DIR__.'/../bootstrap/app.php'; + +$kernel = $app->make(Illuminate\Contracts\Http\Kernel::class); + +$response = $kernel->handle( + $request = Illuminate\Http\Request::capture() +)->send(); + +$kernel->terminate($request, $response); \ No newline at end of file diff --git a/docker-compose.yml b/docker-compose.yml new file mode 100644 index 0000000..5315a24 --- /dev/null +++ b/docker-compose.yml @@ -0,0 +1,153 @@ +version: '3.8' + +services: + # Frontend - Laravel Filament Admin Panel + frontend: + build: + context: ./dms/frontend + dockerfile: Dockerfile + container_name: dms-frontend + ports: + - "8000:8000" + environment: + - APP_ENV=local + - APP_DEBUG=true + - DB_HOST=database + - DB_PORT=5432 + - DB_DATABASE=dms_db + - DB_USERNAME=dms_user + - DB_PASSWORD=dms_password + - REDIS_HOST=cache + - REDIS_PORT=6379 + - BACKEND_URL=http://backend:8000 + volumes: + - ./dms/frontend:/var/www/html + depends_on: + - database + - cache + - backend + networks: + - dms-network + restart: unless-stopped + + # Backend - FastAPI Application + backend: + build: + context: ./dms/backend + dockerfile: Dockerfile + container_name: dms-backend + ports: + - "8001:8000" + environment: + - DATABASE_URL=postgresql://dms_user:dms_password@database:5432/dms_db + - REDIS_URL=redis://cache:6379/0 + - CELERY_BROKER_URL=redis://cache:6379/0 + - CELERY_RESULT_BACKEND=redis://cache:6379/0 + - INPUT_FOLDER_PATH=/app/shared/input + - OUTPUT_FOLDER_PATH=/app/shared/output + volumes: + - input_folder:/app/shared/input + - output_folder:/app/shared/output + - ./dms/backend:/app + depends_on: + - database + - cache + networks: + - dms-network + restart: unless-stopped + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8000/health"] + interval: 30s + timeout: 10s + retries: 3 + + # Worker - Celery Task Processor + worker: + build: + context: ./dms/backend + dockerfile: Dockerfile + container_name: dms-worker + command: celery -A app.celery worker --loglevel=info + environment: + - DATABASE_URL=postgresql://dms_user:dms_password@database:5432/dms_db + - REDIS_URL=redis://cache:6379/0 + - CELERY_BROKER_URL=redis://cache:6379/0 + - CELERY_RESULT_BACKEND=redis://cache:6379/0 + - INPUT_FOLDER_PATH=/app/shared/input + - OUTPUT_FOLDER_PATH=/app/shared/output + volumes: + - input_folder:/app/shared/input + - output_folder:/app/shared/output + - ./dms/backend:/app + depends_on: + - database + - cache + - backend + networks: + - dms-network + restart: unless-stopped + + # Database - PostgreSQL + database: + image: postgres:latest + container_name: dms-database + environment: + - POSTGRES_DB=dms_db + - POSTGRES_USER=dms_user + - POSTGRES_PASSWORD=dms_password + volumes: + - postgres-data:/var/lib/postgresql/data + ports: + - "5432:5432" + networks: + - dms-network + restart: unless-stopped + healthcheck: + test: ["CMD-SHELL", "pg_isready -U dms_user -d dms_db"] + interval: 10s + timeout: 5s + retries: 5 + + # Cache - Redis (Celery Broker) + cache: + image: redis:latest + container_name: dms-cache + ports: + - "6379:6379" + volumes: + - redis-data:/data + networks: + - dms-network + restart: unless-stopped + healthcheck: + test: ["CMD", "redis-cli", "ping"] + interval: 10s + timeout: 3s + retries: 3 + +# Docker Network for Inter-Service Communication +networks: + dms-network: + driver: bridge + name: dms-network + +# Named Volumes +volumes: + # PostgreSQL Data Persistence + postgres-data: + name: dms-postgres-data + driver: local + + # Redis Data Persistence + redis-data: + name: dms-redis-data 
+ driver: local + + # Shared Storage Volumes (Simulating NAS/NFS) + input_folder: + name: dms-input-folder + driver: local + + output_folder: + name: dms-output-folder + driver: local \ No newline at end of file diff --git a/documentation/app_flow_document.md b/documentation/app_flow_document.md new file mode 100644 index 0000000..c789e6c --- /dev/null +++ b/documentation/app_flow_document.md @@ -0,0 +1,31 @@ +# Application Flow Document + +## Onboarding and Sign-In/Sign-Up +When a new user first arrives at the Document Management System, they land on a simple welcome page that offers a choice to log in or create a new account. To sign up, the user clicks on the "Register" link and is taken to a registration form that asks for their name, email address, and a password. After submitting this form, an account is created in the system and a verification email is sent. The user confirms their email by clicking a link in that message, and then they are prompted to log in. + +For returning users, the “Log In” page asks for an email address and password. Upon entering valid credentials, the user is redirected to their main dashboard. If a user forgets their password, they click the “Forgot Password” link, enter the email they used to register, and receive a reset link by email. That link brings them to a secure page where they choose a new password. Once the password is reset successfully, they can log in normally. A “Log Out” button is available in the header of every page, which clears the session and returns the user to the welcome screen. + +## Main Dashboard or Home Page +After signing in, the user lands on the Dashboard, which provides an overview of system activity. The top of the page features a header containing the application logo, the user’s name, and a dropdown menu for quick access to profile settings and log out. A vertical navigation sidebar on the left lists all major sections: Dashboard, Documents, User Management, and Settings. The central area of the dashboard displays summary widgets such as total documents processed this week, number of pending tasks, and recent activity logs. Each widget is clickable to drill down into the corresponding section of the application. This layout ensures the user can navigate seamlessly to any part of the app at any time. + +## Detailed Feature Flows and Page Transitions +### Document Listing and Search +When the user selects “Documents” in the sidebar, they arrive at a paginated list of all uploaded files. The list view shows columns for document name, upload date, processing status, and owner. A search bar at the top allows the user to filter by name or date range, and status filters refine the view to show only pending, in-progress, or completed items. Clicking any document row brings the user to the Document Detail page. + +### Document Upload Workflow +From the Document list, the user clicks an “Upload Document” button, which opens the Upload page. This page presents a drag-and-drop area or a file picker. Once the user selects one or more files, they click “Start Upload.” The system immediately returns the user to the Document list, where the new items appear with a “Pending” status. Behind the scenes, a background task is queued for each file. As soon as the Celery worker picks up a task, the status changes to “Processing,” and when work finishes, it updates to “Completed.” The user may continue working in other sections while processing runs. 
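+The "Pending" → "Processing" → "Completed" labels above map directly onto Celery's task states (`task_track_started=True` in `dms/backend/celery_app.py` is what makes the intermediate STARTED state visible). A minimal sketch of that mapping follows; the `map_status` helper and the "Failed" label are illustrative assumptions, not existing code:
+
+```python
+# Hypothetical translation from raw Celery states to the statuses shown in the UI
+CELERY_TO_UI = {
+    "PENDING": "Pending",
+    "STARTED": "Processing",
+    "SUCCESS": "Completed",
+    "FAILURE": "Failed",
+}
+
+def map_status(celery_state: str) -> str:
+    """Return the document-list label for a Celery task state."""
+    return CELERY_TO_UI.get(celery_state, "Pending")
+```
+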
+ +### Document Detail View and Actions +On the Document Detail page, details such as file metadata, size, upload timestamp, and a log of processing steps appear. If processing is complete, download links for the original and the processed version are visible. If a document is still in progress, the page displays a real-time status indicator and an estimate of remaining time. The user can choose to cancel an in-progress task or re-run processing on a completed document if adjustments are needed. A “Delete Document” button allows removal of both metadata and stored files after a confirmation prompt. + +### User Management for Administrators +When a user with administrator privileges clicks “User Management,” they see a list of all registered users. This page mirrors the document list design, showing user name, email, role, and status. Administrators can invite a new user by entering an email and selecting a role in a modal form. Existing accounts can be edited to change roles or deactivate access. Editing a user record brings up a form for name, email, role, and account status, and changes take effect immediately upon saving. Attempting to access this section without the proper role displays an “Access Denied” message. + +## Settings and Account Management +Selecting “Settings” in the sidebar leads to a page with tabs for Personal Profile, Notification Preferences, and Integrations. In the Personal Profile tab, the user updates their display name, email, or password. Changing the password requires entering the current password and then a new one twice for confirmation. The Notification Preferences tab contains toggles for email alerts on upload completion, processing errors, or new user registrations. The Integrations tab shows configuration options for external storage or processing services, where API keys and endpoint URLs can be entered and tested. After making changes in any tab, the user clicks “Save” to apply settings, upon which a confirmation banner appears briefly. + +## Error States and Alternate Paths +If a user attempts to log in with incorrect credentials, the login page reloads with a clear error message above the form. During file upload, if connectivity is lost or the file type is unsupported, the upload form shows an inline error and the user can retry. On any page, if the backend service becomes unreachable, a full-page notification indicates there is a network issue and suggests checking the connection or contacting support. When a user navigates to a restricted area without enough privileges, an “Access Denied” screen replaces the content area and offers a link back to the Dashboard. In cases where an unexpected error occurs, a friendly generic message appears with an invitation to refresh the page or submit a support ticket. + +## Conclusion and Overall App Journey +From the moment a user visits the system and creates a new account, they can quickly verify their email and log in to the Dashboard. Navigating through a clear sidebar, they manage documents by uploading files, tracking their processing status in real time, and interacting with detailed views to download or delete results. Administrators extend control through user management screens, while all users customize their profile and notification settings in a dedicated settings area. Error pages and messages guide users back to the normal flow when things go wrong. 
This journey, from sign-up through daily operations, ensures every action is connected, intuitive, and leads the user toward their end goal of efficient document management and retrieval. \ No newline at end of file diff --git a/documentation/app_flowchart.md b/documentation/app_flowchart.md new file mode 100644 index 0000000..9a63320 --- /dev/null +++ b/documentation/app_flowchart.md @@ -0,0 +1,9 @@ +flowchart TD + Frontend[Laravel Filament Admin Panel] -->|API request| Backend[FastAPI Backend API] + Backend -->|Query metadata| Database[PostgreSQL Database] + Backend -->|Enqueue processing task| Cache[Redis Message Broker] + Cache -->|Deliver task| Worker[Celery Worker] + Worker -->|Read file from input\nshared volume| InputFolder[Input Folder Volume] + Worker -->|Write result to output\nshared volume| OutputFolder[Output Folder Volume] + Worker -->|Update processing status| Database + Backend -->|Return response| Frontend \ No newline at end of file diff --git a/documentation/backend_structure_document.md b/documentation/backend_structure_document.md new file mode 100644 index 0000000..e57e059 --- /dev/null +++ b/documentation/backend_structure_document.md @@ -0,0 +1,188 @@ +# Backend Structure Document + +This document outlines the backend setup for our Document Management System (DMS). It covers the architecture, database, APIs, hosting, infrastructure, security, and maintenance in clear, everyday language. + +## 1. Backend Architecture + +We use a decoupled, container-based design where each major piece runs in its own Docker container. This approach: + +- Separates concerns: frontend, API, background tasks, and database each do one job. +- Makes scaling easier: you can add more API or worker containers when load increases. +- Helps keep the code clean and maintainable. + +Key frameworks and patterns: + +- FastAPI for the backend service, following an API-first design with OpenAPI documentation. +- Pydantic models for data validation and clear request/response contracts. +- Celery for asynchronous, long-running tasks (e.g., OCR or thumbnail generation). +- Docker Compose to orchestrate all services with a single configuration file. + +How this supports our goals: + +- Scalability: Add more API or worker containers as demand grows. +- Maintainability: Clear separation means developers can work independently on each service. +- Performance: Lightweight FastAPI and asynchronous tasks keep responses fast. + +## 2. Database Management + +We store structured data in PostgreSQL and use Redis as a message broker and cache. + +Database technology: + +- PostgreSQL (relational SQL database) for users, documents, and task metadata. +- Alembic for version-controlled database migrations. + +Data storage and access: + +- SQLAlchemy or the built-in FastAPI async driver to talk to PostgreSQL. +- Redis for queuing Celery tasks and optionally caching frequent queries. +- Shared Docker volumes for raw file storage (input) and processed output files. + +Best practices: + +- Keep database credentials in environment variables, not in code. +- Version‐control all schema changes with Alembic migration scripts. +- Use indexes on fields that are commonly searched (e.g., document status, creation date). + +## 3. Database Schema + +### Human-Readable Overview + +We have three main tables: + +1. **users**: Stores user credentials and profiles. +2. **documents**: Tracks each file’s metadata, file paths, and processing status. +3. **processing_tasks**: Logs each asynchronous job related to a document. 
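+Because Section 2 names SQLAlchemy as the access layer, here is an illustrative declarative model for the `documents` table. It is a sketch only, not part of the codebase; the column names mirror the SQL below, which remains the source of truth:
+
+```python
+from datetime import datetime
+from typing import Optional
+
+from sqlalchemy import ForeignKey, Text, func
+from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
+
+class Base(DeclarativeBase):
+    pass
+
+class Document(Base):
+    __tablename__ = "documents"
+
+    id: Mapped[int] = mapped_column(primary_key=True)
+    user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))
+    file_name: Mapped[str] = mapped_column(Text)
+    input_path: Mapped[str] = mapped_column(Text)
+    output_path: Mapped[Optional[str]] = mapped_column(Text)
+    status: Mapped[str] = mapped_column(Text, default="pending")
+    created_at: Mapped[datetime] = mapped_column(server_default=func.now())
+    updated_at: Mapped[datetime] = mapped_column(server_default=func.now(), onupdate=func.now())
+```
+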
+ +### SQL Schema (PostgreSQL) + +```sql +-- 1. Users +CREATE TABLE users ( + id SERIAL PRIMARY KEY, + email TEXT NOT NULL UNIQUE, + password_hash TEXT NOT NULL, + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW() +); + +-- 2. Documents +CREATE TABLE documents ( + id SERIAL PRIMARY KEY, + user_id INTEGER NOT NULL REFERENCES users(id), + file_name TEXT NOT NULL, + input_path TEXT NOT NULL, + output_path TEXT, + status TEXT NOT NULL DEFAULT 'pending', -- e.g., pending, processing, completed, failed + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW() +); + +-- 3. Processing Tasks +CREATE TABLE processing_tasks ( + id SERIAL PRIMARY KEY, + document_id INTEGER NOT NULL REFERENCES documents(id), + celery_task_id TEXT NOT NULL, + task_type TEXT NOT NULL, -- e.g., 'ocr', 'thumbnail' + status TEXT NOT NULL, -- e.g., 'queued', 'started', 'success', 'failure' + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + completed_at TIMESTAMPTZ, + error_message TEXT +); +``` + +## 4. API Design and Endpoints + +We use a RESTful style under `/api/v1/`. FastAPI automatically generates documentation and lets us split routes into modules. + +Key endpoints: + +- **Auth** + - POST `/api/v1/auth/register` : Create a new user account. + - POST `/api/v1/auth/login` : Authenticate and get a JWT token. + +- **Documents** + - GET `/api/v1/documents` : List all documents for the authenticated user. + - POST `/api/v1/documents` : Upload a new document file and create a record. + - GET `/api/v1/documents/{id}` : Get metadata for a single document. + - GET `/api/v1/documents/{id}/download` : Download the processed file. + - DELETE `/api/v1/documents/{id}` : Remove a document and its records. + +- **Task Status** + - GET `/api/v1/documents/{id}/status` : Check processing status (pending, processing, done). + +How it fits together: + +- The Laravel/Filament frontend calls these endpoints using JWT tokens. +- FastAPI validates input via Pydantic and returns clear JSON responses. +- When a document is uploaded, the API enqueues a Celery task and immediately returns a “pending” status. + +## 5. Hosting Solutions + +We host all containers in a cloud environment (AWS, DigitalOcean, etc.) or on-premises servers. Options include: + +- **Docker Compose on a VM**: Simple to set up, good for small to medium loads. +- **Container Service (e.g., AWS ECS/Fargate)**: Managed service, auto-scaling, and integration with RDS and ElastiCache. + +Benefits of our choice: + +- Reliability: Using managed database (RDS) and cache (ElastiCache) reduces operational overhead. +- Scalability: Easily add more API or worker tasks in ECS or on multiple VMs. +- Cost-effectiveness: Pay only for what you use, with predictable monthly bills. + +## 6. Infrastructure Components + +- **Load Balancer**: Distributes API requests across multiple FastAPI containers. +- **Redis**: Acts as both a message broker for Celery and an optional caching layer. +- **CDN (e.g., Cloudflare / AWS CloudFront)**: Serves static assets (CSS, JS) and large files quickly worldwide. +- **Reverse Proxy (Nginx)**: Routes `/api/` traffic to FastAPI, handles TLS termination. +- **Shared Volumes**: Docker volumes or cloud‐backed file shares (e.g., AWS EFS) for input/output folders. + +These components work together to: + +- Balance traffic and prevent any single container from being overloaded. +- Speed up content delivery to end users. +- Provide a unified file storage location for all services. + +## 7. 
Security Measures + +We protect user data and comply with regulations through: + +- **Authentication & Authorization** + - JWT tokens for stateless, secure API access. + - Role‐based checks in FastAPI endpoints. +- **Encryption** + - TLS/SSL for all HTTP traffic. + - AES-256 encryption at rest for database and file storage (managed by the cloud provider). +- **Input Validation** + - Pydantic schemas prevent invalid data from entering the system. +- **Secret Management** + - Environment variables stored securely (e.g., AWS Secrets Manager). +- **Rate Limiting & Logging** + - Simple rate limiting on sensitive endpoints to prevent abuse. + - Structured logs for monitoring and forensic analysis. + +## 8. Monitoring and Maintenance + +To keep the backend healthy: + +- **Monitoring Tools** + - Prometheus & Grafana for metrics (CPU, memory, request latency). + - Sentry or Datadog for error tracking in FastAPI and Celery. +- **Logging** + - Centralized log aggregation (e.g., ELK stack or CloudWatch) with JSON formatting. +- **Maintenance Practices** + - Scheduled database backups and automated restore tests. + - Regular dependency updates and security patching via CI/CD. + - Health check endpoints (`/healthz`) for container restarts. + +## 9. Conclusion and Overall Backend Summary + +Our backend is a modern, API-first system built with FastAPI, PostgreSQL, Celery, and Docker. It’s designed to be: + +- **Scalable**: Add more containers as needed without downtime. +- **Maintainable**: Clear service boundaries make updates and bug fixes easier. +- **Performant**: Asynchronous task processing and caching keep the UI responsive. +- **Secure**: Industry-standard encryption, token-based auth, and input validation. + +This setup aligns perfectly with our goal of a robust, user-friendly Document Management System. By leveraging containerization and cloud infrastructure, we ensure reliability, cost control, and rapid feature delivery. \ No newline at end of file diff --git a/documentation/frontend_guidelines_document.md b/documentation/frontend_guidelines_document.md new file mode 100644 index 0000000..42b6d18 --- /dev/null +++ b/documentation/frontend_guidelines_document.md @@ -0,0 +1,157 @@ +# Frontend Guideline Document + +This document outlines the frontend setup for the Document Management System (DMS) admin panel. It covers architecture, design principles, styling, components, state management, routing, performance, and testing. By following these guidelines, anyone—technical or non-technical—can understand how the frontend is built and maintained. + +## 1. Frontend Architecture + +**Overview:** +- We use a Laravel application as our frontend, enhanced with Filament v3 to build a rich admin interface. +- Filament provides ready-to-use UI components and page builders, so we can focus on domain logic rather than low-level HTML/CSS. +- The frontend runs in its own Docker container, separated from the backend. It communicates purely over RESTful API calls to the FastAPI service. + +**Scalability and Maintainability:** +- **Separation of concerns:** All UI code lives in the Laravel project. Business logic and data access happen in the backend service. We avoid mixing responsibilities. +- **Containerization:** Each service (frontend, backend, worker) is isolated. We can scale the frontend by running multiple Laravel containers behind a load balancer. +- **Clear folder structure:** We keep Filament resources (pages, widgets, forms) in dedicated directories. This makes it easy to find and update features. 
+ +**Performance:** +- **Server-side rendering:** Laravel renders pages on the server, delivering HTML ready to display. +- **Asset bundling:** We compile CSS and JavaScript with Laravel Mix (Webpack) and use PurgeCSS to strip unused styles. +- **API proxying:** Internal API calls are proxied in Docker, reducing cross-origin overhead in production. + +## 2. Design Principles + +1. **Usability:** + - We follow a consistent layout with clear labels, icons, and actions. + - Filament’s built-in navigation and form components provide predictable behavior. + +2. **Accessibility:** + - Use semantic HTML elements (e.g., `