Task Management API

A functional, well-structured backend API for a Task Management application built with FastAPI and SQLite.

Features

Base App

  • CRUD Operations: Create, Read, Update, and Delete tasks
  • Search: Query parameter to search tasks by title (case-insensitive)
  • Filter: Query parameter to filter tasks by status
  • Task Status Tracking: Tasks can be "To-Do", "In Progress", "Done", or "Overdue"
  • Auto-generated API Documentation: Swagger UI and ReDoc

AI Streaming

  • LLM Integration: Stream AI-generated responses token-by-token via Server-Sent Events (SSE)
  • Groq LLM: Free API for complex task planning and analysis
  • Database Persistence: Complete AI responses saved to database after streaming

Background Tasks

  • Automatic Overdue Detection: Background worker checks and updates overdue tasks every minute
  • Mock Email Notifications: Console logging of simulated email notifications
  • APScheduler Integration: Reliable task scheduling without external workers

Tech Stack

  • Backend: FastAPI (Python web framework)
  • Database: SQLite (no external setup needed)
  • ORM: SQLAlchemy
  • Server: Uvicorn
  • LLM: Groq (free tier with generous limits)
  • Task Scheduling: APScheduler

Prerequisites

  • Python 3.8 or newer
  • pip

Setup Instructions

1. Clone/Set Up the Project

Navigate to the project directory:

cd e:\Flodo.AI\app

2. Create a Virtual Environment (Recommended)

# On Windows
python -m venv venv
venv\Scripts\activate

# On macOS/Linux
python3 -m venv venv
source venv/bin/activate

3. Install Dependencies

pip install -r requirements.txt

4. Configure Environment Variables

Create a .env file in the project root (copy from .env.example):

# Copy the example file
cp .env.example .env

# Edit .env and add your Groq API key

Get a free Groq API key from: https://console.groq.com/

.env file:

GROQ_API_KEY=your_groq_api_key_here

5. Run the Server

uvicorn main:app --reload --host 0.0.0.0 --port 8000

The API will be available at: http://localhost:8000

On startup, the server will:

  • ✅ Create tasks.db automatically
  • ✅ Initialize all database tables
  • ✅ Start the background scheduler (checks overdue tasks every minute)

API Documentation

Once the server is running, access the interactive API documentation:

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc

API Endpoints

🟢 Base Task Endpoints

Create Task

POST /tasks/
Content-Type: application/json

{
  "title": "Buy groceries",
  "description": "Milk, eggs, bread",
  "due_date": "2026-04-25",
  "status": "To-Do"
}

Get All Tasks

GET /tasks/

Query Parameters:

  • title (optional): Search tasks by title (case-insensitive)
  • status (optional): Filter by status ("To-Do", "In Progress", "Done", "Overdue")

Examples:

GET /tasks/?title=groceries
GET /tasks/?status=To-Do
GET /tasks/?title=buy&status=In Progress
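
The two query parameters combine as a case-insensitive substring match on title plus an exact match on status. A minimal stdlib sketch of that filtering logic (`filter_tasks` is a hypothetical helper; the actual route applies the same conditions as SQLAlchemy filters):

```python
def filter_tasks(tasks, title=None, status=None):
    """Case-insensitive title substring search plus exact status match."""
    results = tasks
    if title is not None:
        needle = title.lower()
        results = [t for t in results if needle in t["title"].lower()]
    if status is not None:
        results = [t for t in results if t["status"] == status]
    return results

tasks = [
    {"title": "Buy groceries", "status": "To-Do"},
    {"title": "Write report", "status": "In Progress"},
]
print(filter_tasks(tasks, title="GROCERIES"))  # case-insensitive match
print(filter_tasks(tasks, status="In Progress"))
```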

Get Single Task

GET /tasks/{task_id}

Update Task

PUT /tasks/{task_id}
Content-Type: application/json

{
  "title": "Buy groceries (updated)",
  "description": "Milk, eggs, bread, cheese",
  "due_date": "2026-04-26",
  "status": "In Progress"
}

Delete Task

DELETE /tasks/{task_id}

🤖 AI Streaming Endpoints

Stream AI Response for New Task

POST /ai/stream
Content-Type: application/json

{
  "title": "Plan Q3 Marketing Budget",
  "description": "Create a detailed marketing budget for Q3 including digital ads, content creation, and events"
}

Response: a Server-Sent Events (SSE) stream of tokens; the complete response is saved to the database when the stream finishes.

Example Response:

data: {"token": "To"}
data: {"token": " plan"}
data: {"token": " a Q3"}
...
data: {"status": "complete", "task_id": 1}
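
A client reassembles the answer by reading each `data:` line of the stream and concatenating the token payloads. A minimal stdlib sketch of that parsing, applied to a captured stream (the real events arrive incrementally over HTTP; `parse_sse` is a hypothetical helper):

```python
import json

def parse_sse(raw):
    """Extract the JSON payload from each 'data:' line of an SSE stream."""
    events = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

stream = 'data: {"token": "To"}\ndata: {"token": " plan"}\ndata: {"status": "complete", "task_id": 1}'
events = parse_sse(stream)
tokens = "".join(e["token"] for e in events if "token" in e)
print(tokens)       # "To plan"
print(events[-1])   # final status event with the saved task_id
```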

Generate AI Response for Existing Task

POST /ai/generate-for-task/{task_id}

Response: Streams AI-generated response and saves to the task's ai_response field.

⏰ Background Tasks

The background scheduler automatically:

  • Checks for overdue tasks every minute (interval configurable in background_tasks.py)
  • Updates tasks with due_date in the past and status "To-Do" to "Overdue"
  • Logs mock email notifications to console

Example Console Output:

✓ Updated 1 task(s) to Overdue status.

╔════════════════════════════════════════════════════════════════╗
║                     📧 EMAIL NOTIFICATION                      ║
╠════════════════════════════════════════════════════════════════╣
║ To: user@example.com                                           ║
║ Subject: Task Overdue - Buy groceries                          ║
║ Task ID: 1                                                     ║
║ Due Date: 2026-04-25                                           ║
║ Days Overdue: 5                                                ║
╚════════════════════════════════════════════════════════════════╝
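
The overdue check itself reduces to a simple date comparison. A stdlib sketch of the condition the worker applies (`find_newly_overdue` is a hypothetical helper; the real implementation queries the database through SQLAlchemy):

```python
from datetime import date

def find_newly_overdue(tasks, today):
    """Return tasks whose due_date has passed and whose status is still 'To-Do'."""
    overdue = []
    for task in tasks:
        due = date.fromisoformat(task["due_date"])
        if due < today and task["status"] == "To-Do":
            overdue.append(task)
    return overdue

tasks = [
    {"id": 1, "title": "Buy groceries", "due_date": "2026-04-25", "status": "To-Do"},
    {"id": 2, "title": "Write report", "due_date": "2026-05-10", "status": "Done"},
]
print(find_newly_overdue(tasks, date(2026, 4, 30)))  # only task 1 qualifies
```

Each task returned would then be flipped to "Overdue" and logged as a mock email notification.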

Testing the API

Option 1: Using Swagger UI (Built-in)

  1. Navigate to http://localhost:8000/docs
  2. Try out each endpoint directly in the browser

Option 2: Using cURL

# Create a task
curl -X POST "http://localhost:8000/tasks/" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Buy groceries",
    "description": "Milk, eggs, bread",
    "due_date": "2026-04-25",
    "status": "To-Do"
  }'

# Get all tasks
curl "http://localhost:8000/tasks/"

# Get all tasks with title search
curl "http://localhost:8000/tasks/?title=groceries"

# Get all tasks filtered by status
curl "http://localhost:8000/tasks/?status=To-Do"

# Get single task (replace 1 with actual task ID)
curl "http://localhost:8000/tasks/1"

# Update a task
curl -X PUT "http://localhost:8000/tasks/1" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Buy groceries - Updated",
    "status": "In Progress"
  }'

# Delete a task
curl -X DELETE "http://localhost:8000/tasks/1"

Option 3: Using Python Script

Create test_api.py:

import requests
import json

BASE_URL = "http://localhost:8000/tasks"

# Create task
response = requests.post(BASE_URL, json={
    "title": "Buy groceries",
    "description": "Milk, eggs, bread",
    "due_date": "2026-04-25",
    "status": "To-Do"
})
print(f"Create: {response.status_code}")
task_id = response.json()["id"]

# Get all tasks
response = requests.get(BASE_URL)
print(f"Get All: {response.status_code}, Count: {len(response.json())}")

# Search by title
response = requests.get(f"{BASE_URL}/?title=groceries")
print(f"Search: {response.status_code}, Results: {len(response.json())}")

# Filter by status
response = requests.get(f"{BASE_URL}/?status=To-Do")
print(f"Filter: {response.status_code}, Results: {len(response.json())}")

# Get single task
response = requests.get(f"{BASE_URL}/{task_id}")
print(f"Get Single: {response.status_code}")

# Update task
response = requests.put(f"{BASE_URL}/{task_id}", json={
    "status": "In Progress"
})
print(f"Update: {response.status_code}")

# Delete task
response = requests.delete(f"{BASE_URL}/{task_id}")
print(f"Delete: {response.status_code}")

Run the test script:

python test_api.py

Project Structure

e:\Flodo.AI\app\
├── main.py                    # FastAPI app initialization & scheduler startup
├── database.py                # SQLite configuration
├── background_tasks.py        # APScheduler background worker
├── requirements.txt           # Python dependencies
├── .env.example               # Environment variables template
├── tasks.db                   # SQLite database (auto-created)
├── models/
│   ├── __init__.py
│   └── task.py               # SQLAlchemy Task model
├── routes/
│   ├── __init__.py
│   ├── task.py               # Task CRUD endpoints
│   └── ai.py                 # AI streaming endpoints
├── schemas/
│   ├── __init__.py
│   └── task.py               # Pydantic schemas
└── README.md                 # This file

Task Data Model

Each task includes:

  • id: Unique identifier (auto-generated)
  • title: Task title (1-100 characters)
  • description: Task description (1-500 characters)
  • due_date: Due date (ISO 8601 format: YYYY-MM-DD)
  • status: Task status - "To-Do", "In Progress", "Done", or "Overdue"
  • ai_response: AI-generated content (stored after streaming completes, optional)
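
The field constraints above can be mirrored in plain Python. A sketch using a dataclass plus a validation helper (the project itself uses SQLAlchemy models and Pydantic schemas for this shape; `validate` is a hypothetical stand-in for the checks Pydantic performs):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    id: int
    title: str          # 1-100 characters
    description: str    # 1-500 characters
    due_date: str       # ISO 8601, e.g. "2026-04-25"
    status: str         # "To-Do" | "In Progress" | "Done" | "Overdue"
    ai_response: Optional[str] = None

VALID_STATUSES = {"To-Do", "In Progress", "Done", "Overdue"}

def validate(task):
    """Return the validation errors the API would report as a 422."""
    errors = []
    if not 1 <= len(task.title) <= 100:
        errors.append("title must be 1-100 characters")
    if not 1 <= len(task.description) <= 500:
        errors.append("description must be 1-500 characters")
    if task.status not in VALID_STATUSES:
        errors.append("invalid status")
    return errors

t = Task(1, "Buy groceries", "Milk, eggs, bread", "2026-04-25", "To-Do")
print(validate(t))  # [] -> valid
```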

Error Handling

The API returns appropriate HTTP status codes:

  • 200 OK: Successful GET/PUT request
  • 201 Created: Successful POST request
  • 204 No Content: Successful DELETE request
  • 404 Not Found: Task not found
  • 422 Unprocessable Entity: Invalid request data

📋 Step-by-Step Setup Instructions

Quick Start (5 minutes)

  1. Clone the repository

    git clone https://github.com/ary778/taskmanagementapi.git
    cd taskmanagementapi
  2. Create virtual environment

    python -m venv venv
    venv\Scripts\activate  # Windows
    source venv/bin/activate  # macOS/Linux
  3. Install dependencies

    pip install -r requirements.txt
  4. Configure Groq API Key

    cp .env.example .env
    # Edit .env and add your free Groq API key from https://console.groq.com/
  5. Run the server

    uvicorn main:app --reload --host 0.0.0.0 --port 8000
  6. Test the API

    Open http://localhost:8000/docs and try the endpoints in Swagger UI.

🎯 Stretch Goal: Real-time Task Collaboration

Identified Bottlenecks

  • Single-threaded database: SQLite doesn't handle concurrent writes well for multi-user scenarios
  • No user authentication: All tasks are global; no user isolation or permission levels
  • Streaming overhead: SSE connections don't persist; reconnection required after stream ends
  • Task dependency tracking: No way to link subtasks or create task hierarchies

Future Improvements

  • Migrate to PostgreSQL for concurrent multi-user support with connection pooling
  • Add JWT authentication and user-scoped task queries
  • Implement WebSocket instead of SSE for persistent real-time collaboration
  • Add task dependencies and subtask hierarchy support
  • Use Redis for caching frequently accessed tasks and session management

🤖 AI Integration Report

Groq LLM Integration (ai.py)

The AI module provides intelligent task breakdown and planning capabilities:

Architecture:

  • Uses Groq API (free tier) via OpenAI SDK compatibility layer
  • Streams responses token-by-token using Server-Sent Events (SSE)
  • Automatically parses AI responses into numbered steps
  • Creates subtasks from each step and persists to database

Key Features:

  1. Stream Endpoint (POST /ai/stream): Generate AI solution for new complex tasks
  2. Generate for Existing Task (POST /ai/generate-for-task/{task_id}): Enhance existing tasks with AI planning
  3. Automatic Subtask Creation: Parses numbered steps and creates individual tasks
  4. Full Response Persistence: Saves complete AI responses in ai_response field for reference
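
Extracting numbered steps from free-form LLM output is typically done with a line-anchored regex. A stdlib sketch of that parsing (`parse_numbered_steps` is a hypothetical helper; the actual parsing lives in ai.py and may differ in detail):

```python
import re

def parse_numbered_steps(text):
    """Split an AI response into its numbered steps, one subtask per step."""
    steps = []
    for match in re.finditer(r"^\s*\d+[.)]\s+(.*)", text, re.MULTILINE):
        steps.append(match.group(1).strip())
    return steps

response = """Here is a plan:
1. Audit last quarter's spend
2. Allocate budget across digital ads, content, and events
3. Review with stakeholders"""
print(parse_numbered_steps(response))  # three steps, numbering stripped
```

Each returned step would then become its own task row in the database.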

Performance Metrics:

  • Streaming latency: ~500ms per token (Groq API performance)
  • Model: llama-3.3-70b-versatile (free tier)
  • Max tokens: 2048 per request
  • API rate limits: Generous free tier suitable for development

Example Usage:

curl -X POST "http://localhost:8000/ai/stream" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Plan Q3 Marketing Budget",
    "description": "Create budget breakdown for digital ads, content creation, and events"
  }'

Environment Variables

The application uses a single environment variable:

GROQ_API_KEY=your_groq_api_key_here

  1. Create a .env file in the project root (copy from .env.example) and set GROQ_API_KEY.
  2. GROQ_API_KEY is required for the AI streaming endpoints.
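
Failing fast with a clear message when the key is missing avoids confusing errors deep inside the Groq client. A stdlib sketch of such a check (`require_groq_key` is a hypothetical helper name, not necessarily what ai.py uses):

```python
import os

def require_groq_key():
    """Return GROQ_API_KEY, or raise a clear error when it is not configured."""
    key = os.environ.get("GROQ_API_KEY")
    if not key:
        raise RuntimeError(
            "GROQ_API_KEY not set - add it to your .env file "
            "(see .env.example) before using the AI endpoints."
        )
    return key

os.environ["GROQ_API_KEY"] = "demo-key"  # simulate a configured environment
print(require_groq_key())
```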

Troubleshooting

GROQ_API_KEY not set

  • Ensure .env file exists with your Groq API key
  • AI streaming endpoints will fail without this
  • Get a free key from: https://console.groq.com/

Module not found errors

  • Ensure virtual environment is activated
  • Run pip install -r requirements.txt

Port 8000 already in use

  • Use a different port: uvicorn main:app --port 8001

Background scheduler not running

  • Check console for startup message: "Background scheduler started..."
  • Verify tasks.db exists (should be created automatically)
  • Check logs for errors in background_tasks.py

tasks.db file not created

  • Run the server at least once: uvicorn main:app --reload
  • The database is created automatically on first run

Future Enhancements

  • User authentication and authorization
  • Task categories/projects
  • Task priorities and subtasks
  • Advanced notifications (Slack, email integration)
  • Task analytics and reporting
  • Database migrations with Alembic
  • Unit and integration tests
  • API rate limiting
  • Multiple LLM provider support (OpenAI, Gemini)
  • Task collaboration and sharing

Requirements Checklist

  • Base App: CRUD operations, Search, Filter, Documentation
  • AI Streaming: Token-by-token streaming via SSE, Groq LLM, Database persistence
  • Background Tasks: Automatic overdue detection, Email notifications, APScheduler

License

This project is provided as-is for educational purposes.
