A functional, well-structured backend API for a Task Management application built with FastAPI and SQLite.
- ✅ CRUD Operations: Create, Read, Update, and Delete tasks
- ✅ Search: Query parameter to search tasks by title (case-insensitive)
- ✅ Filter: Query parameter to filter tasks by status
- ✅ Task Status Tracking: Tasks can have status: "To-Do", "In Progress", "Done", "Overdue"
- ✅ Auto-generated API Documentation: Swagger UI and ReDoc
- ✅ LLM Integration: Stream AI-generated responses token-by-token via Server-Sent Events (SSE)
- ✅ Groq LLM: Free API for complex task planning and analysis
- ✅ Database Persistence: Complete AI responses saved to database after streaming
- ✅ Automatic Overdue Detection: Background worker checks and updates overdue tasks every minute
- ✅ Mock Email Notifications: Console logging of simulated email notifications
- ✅ APScheduler Integration: Reliable task scheduling without external workers
- Backend: FastAPI (Python web framework)
- Database: SQLite (no external setup needed)
- ORM: SQLAlchemy
- Server: Uvicorn
- LLM: Groq (free tier with generous limits)
- Task Scheduling: APScheduler
- Python 3.8+
- pip (Python package manager)
- Groq API Key (free from https://console.groq.com/)
Navigate to the project directory and create a virtual environment:

```bash
cd e:\Flodo.AI\app

# On Windows
python -m venv venv
venv\Scripts\activate

# On macOS/Linux
python3 -m venv venv
source venv/bin/activate
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Create a `.env` file in the project root (copy from `.env.example`):

```bash
# Copy the example file
cp .env.example .env
# Edit .env and add your Groq API key
```

Get a free Groq API key from: https://console.groq.com/

`.env` file:

```
GROQ_API_KEY=your_groq_api_key_here
```

Start the server:

```bash
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

The API will be available at: http://localhost:8000
On startup, the server will:

- ✅ Create `tasks.db` automatically
- ✅ Initialize all database tables
- ✅ Start the background scheduler (checks overdue tasks every minute)
Once the server is running, access interactive API documentation:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
Create a task:

```
POST /tasks/
Content-Type: application/json

{
  "title": "Buy groceries",
  "description": "Milk, eggs, bread",
  "due_date": "2026-04-25",
  "status": "To-Do"
}
```

List tasks (`GET /tasks/`). Query parameters:

- `title` (optional): Search tasks by title (case-insensitive)
- `status` (optional): Filter by status ("To-Do", "In Progress", "Done", "Overdue")
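Server-side, this filtering logic amounts to the following stdlib-only sketch (illustrative; the actual app applies equivalent filters through SQLAlchemy queries, and `filter_tasks` is not a real function in the codebase):

```python
# Sketch of the /tasks/ query filtering: case-insensitive substring
# match on title, exact match on status. Both parameters are optional.
def filter_tasks(tasks, title=None, status=None):
    """Return tasks matching an optional title substring and/or status."""
    results = tasks
    if title is not None:
        needle = title.lower()
        results = [t for t in results if needle in t["title"].lower()]
    if status is not None:
        results = [t for t in results if t["status"] == status]
    return results

tasks = [
    {"title": "Buy groceries", "status": "To-Do"},
    {"title": "Buy stamps", "status": "In Progress"},
    {"title": "Write report", "status": "To-Do"},
]

print(filter_tasks(tasks, title="buy", status="In Progress"))
# → [{'title': 'Buy stamps', 'status': 'In Progress'}]
```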
Examples:

```
GET /tasks/?title=groceries
GET /tasks/?status=To-Do
GET /tasks/?title=buy&status=In Progress
```

Get a single task: `GET /tasks/{task_id}`

Update a task (`PUT /tasks/{task_id}`):
```
Content-Type: application/json

{
  "title": "Buy groceries (updated)",
  "description": "Milk, eggs, bread, cheese",
  "due_date": "2026-04-26",
  "status": "In Progress"
}
```

Delete a task:

```
DELETE /tasks/{task_id}
```

Stream an AI-generated plan for a new task:

```
POST /ai/stream
Content-Type: application/json

{
  "title": "Plan Q3 Marketing Budget",
  "description": "Create a detailed marketing budget for Q3 including digital ads, content creation, and events"
}
```

Response: a Server-Sent Events (SSE) stream of tokens; once streaming completes, the full response is saved to the database.
Example response:

```
data: {"token": "To"}
data: {"token": " plan"}
data: {"token": " a Q3"}
...
data: {"status": "complete", "task_id": 1}
```
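A client can reassemble the full text from these `data:` lines by parsing each event's JSON payload. A minimal sketch (the `assemble_sse` helper is illustrative, not part of the app):

```python
import json

# Rebuild the streamed text from SSE event lines: "token" events carry
# one fragment each; the final event carries status "complete" plus the
# task_id the response was saved under.
def assemble_sse(lines):
    text, task_id = [], None
    for line in lines:
        if not line.startswith("data: "):
            continue
        event = json.loads(line[len("data: "):])
        if "token" in event:
            text.append(event["token"])
        elif event.get("status") == "complete":
            task_id = event.get("task_id")
    return "".join(text), task_id

stream = [
    'data: {"token": "To"}',
    'data: {"token": " plan"}',
    'data: {"token": " a Q3"}',
    'data: {"status": "complete", "task_id": 1}',
]
print(assemble_sse(stream))  # → ('To plan a Q3', 1)
```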
`POST /ai/generate-for-task/{task_id}` — streams an AI-generated response and saves it to the task's `ai_response` field.
The background scheduler automatically:

- Checks for overdue tasks every minute (configurable in `background_tasks.py`)
- Updates tasks with a `due_date` in the past and status `"To-Do"` to `"Overdue"`
- Logs mock email notifications to the console
Example console output:

```
✓ Updated 1 task(s) to Overdue status.
╔════════════════════════════════════════════════════════════════╗
║ 📧 EMAIL NOTIFICATION                                          ║
╠════════════════════════════════════════════════════════════════╣
║ To: user@example.com                                           ║
║ Subject: Task Overdue - Buy groceries                          ║
║ Task ID: 1                                                     ║
║ Due Date: 2026-04-25                                           ║
║ Days Overdue: 5                                                ║
╚════════════════════════════════════════════════════════════════╝
```
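The overdue check itself reduces to a date comparison. A sketch of the logic (field names follow the task model; `mark_overdue` is illustrative, while the real job runs under APScheduler and updates rows via SQLAlchemy):

```python
from datetime import date

# Flip any "To-Do" task whose due_date is in the past to "Overdue"
# and return how many tasks were updated.
def mark_overdue(tasks, today=None):
    today = today or date.today()
    updated = 0
    for task in tasks:
        due = date.fromisoformat(task["due_date"])
        if task["status"] == "To-Do" and due < today:
            task["status"] = "Overdue"
            updated += 1
    return updated

tasks = [
    {"id": 1, "due_date": "2026-04-25", "status": "To-Do"},
    {"id": 2, "due_date": "2026-05-10", "status": "To-Do"},
]
n = mark_overdue(tasks, today=date(2026, 4, 30))
print(f"✓ Updated {n} task(s) to Overdue status.")
# prints "✓ Updated 1 task(s) to Overdue status."
```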
- Navigate to http://localhost:8000/docs
- Try out each endpoint directly in the browser
```bash
# Create a task
curl -X POST "http://localhost:8000/tasks/" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Buy groceries",
    "description": "Milk, eggs, bread",
    "due_date": "2026-04-25",
    "status": "To-Do"
  }'

# Get all tasks
curl "http://localhost:8000/tasks/"

# Get all tasks with title search
curl "http://localhost:8000/tasks/?title=groceries"

# Get all tasks filtered by status
curl "http://localhost:8000/tasks/?status=To-Do"

# Get single task (replace 1 with actual task ID)
curl "http://localhost:8000/tasks/1"

# Update a task
curl -X PUT "http://localhost:8000/tasks/1" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Buy groceries - Updated",
    "status": "In Progress"
  }'

# Delete a task
curl -X DELETE "http://localhost:8000/tasks/1"
```

Create `test_api.py`:
```python
import requests

BASE_URL = "http://localhost:8000/tasks"

# Create task
response = requests.post(BASE_URL, json={
    "title": "Buy groceries",
    "description": "Milk, eggs, bread",
    "due_date": "2026-04-25",
    "status": "To-Do"
})
print(f"Create: {response.status_code}")
task_id = response.json()["id"]

# Get all tasks
response = requests.get(BASE_URL)
print(f"Get All: {response.status_code}, Count: {len(response.json())}")

# Search by title
response = requests.get(f"{BASE_URL}/?title=groceries")
print(f"Search: {response.status_code}, Results: {len(response.json())}")

# Filter by status
response = requests.get(f"{BASE_URL}/?status=To-Do")
print(f"Filter: {response.status_code}, Results: {len(response.json())}")

# Get single task
response = requests.get(f"{BASE_URL}/{task_id}")
print(f"Get Single: {response.status_code}")

# Update task
response = requests.put(f"{BASE_URL}/{task_id}", json={"status": "In Progress"})
print(f"Update: {response.status_code}")

# Delete task
response = requests.delete(f"{BASE_URL}/{task_id}")
print(f"Delete: {response.status_code}")
```

Run the test script:
```bash
python test_api.py
```

Project structure:

```
e:\Flodo.AI\app\
├── main.py                 # FastAPI app initialization & scheduler startup
├── database.py             # SQLite configuration
├── background_tasks.py     # APScheduler background worker
├── requirements.txt        # Python dependencies
├── .env.example            # Environment variables template
├── tasks.db                # SQLite database (auto-created)
├── models/
│   ├── __init__.py
│   └── task.py             # SQLAlchemy Task model
├── routes/
│   ├── __init__.py
│   ├── task.py             # Task CRUD endpoints
│   └── ai.py               # AI streaming endpoints
├── schemas/
│   ├── __init__.py
│   └── task.py             # Pydantic schemas
└── README.md               # This file
```
Each task includes:

- `id`: Unique identifier (auto-generated)
- `title`: Task title (1-100 characters)
- `description`: Task description (1-500 characters)
- `due_date`: Due date (ISO 8601 format: YYYY-MM-DD)
- `status`: Task status: "To-Do", "In Progress", "Done", or "Overdue"
- `ai_response`: AI-generated content (optional; stored after streaming completes)
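These constraints are enforced by the Pydantic schema in `schemas/task.py`; a plain-Python sketch of the same rules (illustrative only, and `validate_task` is not a real function in the app):

```python
from datetime import date

VALID_STATUSES = {"To-Do", "In Progress", "Done", "Overdue"}

# Check a task payload against the field constraints listed above and
# return a list of human-readable validation errors (empty if valid).
def validate_task(data):
    errors = []
    if not 1 <= len(data.get("title", "")) <= 100:
        errors.append("title must be 1-100 characters")
    if not 1 <= len(data.get("description", "")) <= 500:
        errors.append("description must be 1-500 characters")
    try:
        date.fromisoformat(data.get("due_date", ""))
    except ValueError:
        errors.append("due_date must be YYYY-MM-DD")
    if data.get("status") not in VALID_STATUSES:
        errors.append("invalid status")
    return errors

ok = {"title": "Buy groceries", "description": "Milk", "due_date": "2026-04-25", "status": "To-Do"}
bad = {"title": "", "description": "x", "due_date": "soon", "status": "Later"}
print(validate_task(ok))        # → []
print(len(validate_task(bad)))  # → 3
```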
The API returns appropriate HTTP status codes:

- `200 OK`: Successful GET/PUT request
- `201 Created`: Successful POST request
- `204 No Content`: Successful DELETE request
- `404 Not Found`: Task not found
- `422 Unprocessable Entity`: Invalid request data
1. Clone the repository

   ```bash
   git clone https://github.com/ary778/taskmanagementapi.git
   cd taskmanagementapi
   ```

2. Create a virtual environment

   ```bash
   python -m venv venv
   venv\Scripts\activate        # Windows
   source venv/bin/activate     # macOS/Linux
   ```

3. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

4. Configure the Groq API key

   ```bash
   cp .env.example .env
   # Edit .env and add your free Groq API key from https://console.groq.com/
   ```

5. Run the server

   ```bash
   uvicorn main:app --reload --host 0.0.0.0 --port 8000
   ```

6. Test the API

   - Open http://localhost:8000/docs (Swagger UI)
   - Create, read, update, and delete tasks
   - Test the AI streaming endpoint
- Single-threaded database: SQLite doesn't handle concurrent writes well for multi-user scenarios
- No user authentication: All tasks are global; no user isolation or permission levels
- Streaming overhead: SSE connections don't persist; reconnection required after stream ends
- Task dependency tracking: No way to link subtasks or create task hierarchies
- Migrate to PostgreSQL for concurrent multi-user support with connection pooling
- Add JWT authentication and user-scoped task queries
- Implement WebSocket instead of SSE for persistent real-time collaboration
- Add task dependencies and subtask hierarchy support
- Use Redis for caching frequently accessed tasks and session management
The AI module provides intelligent task breakdown and planning capabilities:
Architecture:
- Uses Groq API (free tier) via OpenAI SDK compatibility layer
- Streams responses token-by-token using Server-Sent Events (SSE)
- Automatically parses AI responses into numbered steps
- Creates subtasks from each step and persists to database
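Parsing numbered steps out of a response can be sketched with a small regex (illustrative; the actual parser lives in `routes/ai.py` and may differ, and `parse_steps` is a hypothetical name):

```python
import re

# Extract "1. ..." / "2) ..." style lines from an AI response as
# individual step strings, one per future subtask.
def parse_steps(text):
    steps = []
    for line in text.splitlines():
        match = re.match(r"\s*\d+[.)]\s+(.*\S)", line)
        if match:
            steps.append(match.group(1))
    return steps

response = """Here is a plan:
1. Review last quarter's spend
2. Allocate digital ads budget
3. Schedule content and events
"""
print(parse_steps(response))
```

Each returned step would then be persisted as its own task via the normal create path.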
Key Features:

- Stream endpoint (`POST /ai/stream`): Generate an AI solution for new complex tasks
- Generate for existing task (`POST /ai/generate-for-task/{task_id}`): Enhance existing tasks with AI planning
- Automatic subtask creation: Parses numbered steps and creates individual tasks
- Full response persistence: Saves complete AI responses in the `ai_response` field for reference
Performance Metrics:

- Streaming latency: ~500ms per token (Groq API performance)
- Model: `llama-3.3-70b-versatile` (free tier)
- Max tokens: 2048 per request
- API rate limits: generous free tier suitable for development
Example Usage:

```bash
curl -X POST "http://localhost:8000/ai/stream" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Plan Q3 Marketing Budget",
    "description": "Create budget breakdown for digital ads, content creation, and events"
  }'
```

The application uses the following environment variable:

- Create a `.env` file in the project root (copy from `.env.example`):

  ```
  GROQ_API_KEY=your_groq_api_key_here
  ```

- `GROQ_API_KEY` is required for the AI streaming endpoints
- Get it free from: https://console.groq.com/ (no credit card required)
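For reference, loading a `.env` file boils down to parsing `KEY=value` lines into the environment. A stdlib-only sketch (the app may rely on a library such as `python-dotenv` for this; `load_env` here is hypothetical):

```python
import os

# Parse KEY=value lines from a .env file, skipping blanks and comments,
# and export each pair into os.environ without overwriting existing values.
def load_env(path=".env"):
    loaded = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                loaded[key.strip()] = value.strip()
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # missing .env is tolerated; the key check happens later
    return loaded

# Usage: load_env(); api_key = os.environ.get("GROQ_API_KEY")
```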
**AI endpoints failing (missing API key):**

- Ensure the `.env` file exists with your Groq API key; the AI streaming endpoints will fail without it
- Get a free key from: https://console.groq.com/

**Import errors:**

- Ensure the virtual environment is activated
- Run `pip install -r requirements.txt`

**Port already in use:**

- Use a different port: `uvicorn main:app --port 8001`

**Scheduler not running:**

- Check the console for the startup message: "Background scheduler started..."
- Verify `tasks.db` exists (it should be created automatically)
- Check the logs for errors in `background_tasks.py`

**`tasks.db` missing:**

- Run the server at least once: `uvicorn main:app --reload`
- The database is created automatically on first run
- User authentication and authorization
- Task categories/projects
- Task priorities and subtasks
- Advanced notifications (Slack, email integration)
- Task analytics and reporting
- Database migrations with Alembic
- Unit and integration tests
- API rate limiting
- Multiple LLM provider support (OpenAI, Gemini)
- Task collaboration and sharing
- ✅ Base App: CRUD operations, Search, Filter, Documentation
- ✅ AI Streaming: Token-by-token streaming via SSE, Groq LLM, Database persistence
- ✅ Background Tasks: Automatic overdue detection, Email notifications, APScheduler
This project is provided as-is for educational purposes.