22 changes: 14 additions & 8 deletions README.md
@@ -1,6 +1,10 @@
# Quart + Vite + React Demo Application

**Task:** Visualize the CSV tickets with richer panels (status, priority, timeline, geography, SLA). **How would you like to view the tickets?**
**Current Task:** Document usecase demo ideas from the CSV-backed ticket dataset and build/iterate pages like `/usecase_demo_1` where each demo page has:
- a short summary
- editable agent prompt(s)
- a button that launches the agent run in background
- visible results (table/visualization)

> A teaching-oriented full-stack sample that pairs a Python Quart backend with a React + FluentUI frontend, real-time Server-Sent Events (SSE), and Playwright tests.

@@ -27,6 +31,7 @@ All deep-dive guides now live under `docs/` for easier discovery:
- [Pydantic Architecture](docs/PYDANTIC_ARCHITECTURE.md) – how models, validation, and operations fit together
- [Unified Architecture](docs/UNIFIED_ARCHITECTURE.md) – REST + MCP integration details and extension ideas
- [Troubleshooting](docs/TROUBLESHOOTING.md) – common issues and fixes for setup, dev, and tests
- [CSV AI Guidance](docs/CSV_AI_GUIDANCE.md) – how AI agents should query and reason over CSV ticket data



@@ -38,7 +43,7 @@ All deep-dive guides now live under `docs/` for easier discovery:
2. Run the automated bootstrap: `./setup.sh` (creates the repo-level `.venv`, installs frontend deps, installs Playwright, checks for Ollama)
3. (Optional) Install Ollama for LLM features: `curl -fsSL https://ollama.com/install.sh | sh && ollama pull llama3.2:1b`
4. Start all servers: `./start-dev.sh` *(or)* use the VS Code "Full Stack: Backend + Frontend" launch config
5. Open `http://localhost:3001`, switch to the **Tasks** tab, and create a task—the backend and frontend are now synced
5. Open `http://localhost:3001/usecase_demo_1` and start documenting your usecase demo idea on that page
6. (Optional) Test Ollama integration: `curl -X POST http://localhost:5001/api/ollama/chat -H "Content-Type: application/json" -d '{"messages":[{"role":"user","content":"Say hello"}]}'`
7. (Optional) Run the Playwright suite from the repo root: `npm run test:e2e`

@@ -95,9 +100,9 @@ Use the “Full Stack: Backend + Frontend” launch config to start backend + frontend

### Smoke test checklist
- Visit `http://localhost:3001`
- Dashboard tab should show a ticking clock (SSE via `/api/time-stream`)
- Tasks tab should show three sample tasks (seeded by `TaskService.initialize_sample_data()`)
- Create a task, mark it complete, delete it—confirm state updates instantly
- Tickets tab should render CSV ticket table + stats from `/api/csv-tickets*`
- Usecase Demo tab (`/usecase_demo_1`) should show editable prompt + background run controls
- Fields tab should list mapped CSV fields from `/api/csv-tickets/fields`

## Docker (one command delivery)

@@ -113,9 +118,10 @@ docker run --rm -p 5001:5001 quart-react-demo
- Hot reloading is not part of the container flow—use the regular dev servers for iterative work and Docker for demos or deployment.

## Using the app
- **Dashboard tab:** Streams `{"time","date","timestamp"}` via EventSource; connection errors show inline.
- **Tasks tab:** Uses FluentUI `DataGrid` + dialogs; `frontend/src/features/tasks/TaskList.jsx` keeps calculations (`getTaskStats`) separate from actions (API calls).
- **About tab:** Summarizes tech choices and linkable resources.
- **Tickets tab (`/csvtickets`):** Shows CSV-backed ticket table, filtering, sorting, and pagination.
- **Usecase Demo tab (`/usecase_demo_1`):** Main demo page for documenting usecase demo ideas with editable prompts and background agent runs.
- **Fields tab (`/fields`):** Lists mapped CSV schema fields available to UI/MCP/agent flows.
- **Agent tab (`/agent`):** Chat-style agent interface for CSV ticket analysis.
- **Ollama API (backend only):**
- `POST /api/ollama/chat` — Chat with local LLM (supports conversation history)
- `GET /api/ollama/models` — List available models
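The chat endpoint can also be exercised from Python instead of curl. A minimal stdlib-only sketch (it assumes the backend from this repo is running on `localhost:5001`; the `build_chat_request` helper is illustrative, not part of the codebase):

```python
# Build a POST request for the local /api/ollama/chat proxy endpoint.
import json
import urllib.request

def build_chat_request(messages: list[dict], base_url: str = "http://localhost:5001") -> urllib.request.Request:
    """Return a ready-to-send Request carrying a JSON {"messages": [...]} body."""
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/ollama/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request([{"role": "user", "content": "Say hello"}])
# To actually send it (backend and Ollama must be running):
#     with urllib.request.urlopen(req) as resp:
#         print(json.loads(resp.read()))
```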
19 changes: 15 additions & 4 deletions backend/agents.py
@@ -309,13 +309,19 @@ def _build_csv_tools(self) -> list[StructuredTool]:
import json
service = get_csv_ticket_service()

def _csv_list_tickets(status: str | None = None, assigned_group: str | None = None, has_assignee: bool | None = None) -> str:
def _csv_list_tickets(
    status: str | None = None,
    assigned_group: str | None = None,
    has_assignee: bool | None = None,
    limit: int = 50,
) -> str:
    try:
        status_enum = TicketStatus(status.lower()) if status else None
    except Exception:
        status_enum = None
    tickets = service.list_tickets(status=status_enum, assigned_group=assigned_group, has_assignee=has_assignee)
    return json.dumps([t.model_dump() for t in tickets[:200]], default=str)
    bounded_limit = max(1, min(limit, 100))
    return json.dumps([t.model_dump() for t in tickets[:bounded_limit]], default=str)

def _csv_get_ticket(ticket_id: str) -> str:
    try:
@@ -327,7 +333,7 @@ def _csv_get_ticket(ticket_id: str) -> str:
        return json.dumps({"error": "not found"})
    return json.dumps(ticket.model_dump(), default=str)

def _csv_search_tickets(query: str, limit: int = 50) -> str:
def _csv_search_tickets(query: str, limit: int = 25) -> str:
    q = query.lower()
    tickets = service.list_tickets()
    matched = []
@@ -356,7 +362,12 @@ def _csv_ticket_fields() -> str:
StructuredTool.from_function(
    func=_csv_list_tickets,
    name="csv_list_tickets",
    description="List tickets from CSV with optional filters: status (new, assigned, in_progress, pending, resolved, closed, cancelled), assigned_group, has_assignee (true/false). Returns JSON array.",
    description=(
        "List tickets from CSV with optional filters: status "
        "(new, assigned, in_progress, pending, resolved, closed, cancelled), "
        "assigned_group, has_assignee (true/false), and limit (default 50, max 100). "
        "Returns JSON array."
    ),
),
StructuredTool.from_function(
    func=_csv_get_ticket,
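The new bounded-limit behavior of `csv_list_tickets` can be sketched as a standalone function. This is an illustrative reconstruction of the tool's semantics — plain dicts stand in for the service's Pydantic tickets, and only the filter names from the diff are assumed:

```python
# Sketch of csv_list_tickets: optional filters plus a clamped result limit.
def list_tickets(tickets, status=None, assigned_group=None, has_assignee=None, limit=50):
    bounded_limit = max(1, min(limit, 100))  # same clamp as the tool above
    matched = []
    for t in tickets:
        if status is not None and t.get("status") != status:
            continue
        if assigned_group is not None and t.get("assigned_group") != assigned_group:
            continue
        if has_assignee is not None and (t.get("assignee") is not None) != has_assignee:
            continue
        matched.append(t)
    return matched[:bounded_limit]

sample = [
    {"id": 1, "status": "new", "assigned_group": "net", "assignee": None},
    {"id": 2, "status": "new", "assigned_group": "net", "assignee": "alice"},
    {"id": 3, "status": "closed", "assigned_group": "app", "assignee": "bob"},
]
```

Clamping at both ends means a caller passing `limit=0` still gets one result and `limit=500` is capped at 100, so the tool never returns an unbounded payload to the agent.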
109 changes: 84 additions & 25 deletions backend/app.py
@@ -20,6 +20,7 @@
import os
from datetime import datetime
from pathlib import Path
from uuid import UUID

# Load environment variables from .env file
from dotenv import load_dotenv
@@ -35,11 +36,13 @@

# CSV ticket service
from csv_data import Ticket, get_csv_ticket_service
from usecase_demo import UsecaseDemoRunCreate, usecase_demo_run_service

# FastMCP client for direct ticket MCP calls (no AI)
from fastmcp import Client as MCPClient
from mcp_handler import handle_mcp_request
from operations import (
CSV_TICKET_FIELDS,
op_create_task,
op_delete_task,
op_get_task,
@@ -177,6 +180,47 @@ async def rest_run_agent():
return jsonify({"error": str(e)}), 500


# ============================================================================
# USECASE DEMO AGENT RUN ENDPOINTS
# ============================================================================

@app.route("/api/usecase-demo/agent-runs", methods=["POST"])
async def create_usecase_demo_agent_run():
    """Queue a background agent run using the provided prompt."""
    try:
        data = await request.get_json() or {}
        payload = UsecaseDemoRunCreate(**data)
        run = await usecase_demo_run_service.create_run(payload)
        return jsonify(run.model_dump(mode="json")), 202
    except ValidationError as e:
        return jsonify({"error": str(e)}), 400
    except Exception as e:
        return jsonify({"error": str(e)}), 500


@app.route("/api/usecase-demo/agent-runs", methods=["GET"])
async def list_usecase_demo_agent_runs():
    """List recent background agent runs."""
    try:
        limit = request.args.get("limit", default=20, type=int)
        runs = await usecase_demo_run_service.list_runs(limit=limit or 20)
        return jsonify({"runs": [run.model_dump(mode="json") for run in runs]}), 200
    except Exception as e:
        return jsonify({"error": str(e)}), 500


@app.route("/api/usecase-demo/agent-runs/<run_id>", methods=["GET"])
async def get_usecase_demo_agent_run(run_id: str):
    """Fetch one background run by ID."""
    try:
        run = await usecase_demo_run_service.get_run(run_id)
        if run is None:
            return jsonify({"error": "Run not found"}), 404
        return jsonify(run.model_dump(mode="json")), 200
    except Exception as e:
        return jsonify({"error": str(e)}), 500


# ============================================================================
# TICKET MCP EXAMPLE - Direct FastMCP client usage (no AI)
# ============================================================================
@@ -403,31 +447,6 @@ async def get_qa_tickets():
print(f"📊 Loaded {_csv_loaded} tickets from CSV")


# Define which fields are available for display
CSV_TICKET_FIELDS = [
{"name": "id", "label": "ID", "type": "uuid"},
{"name": "summary", "label": "Summary", "type": "string"},
{"name": "status", "label": "Status", "type": "enum"},
{"name": "priority", "label": "Priority", "type": "enum"},
{"name": "assignee", "label": "Assignee", "type": "string"},
{"name": "assigned_group", "label": "Assigned Group", "type": "string"},
{"name": "requester_name", "label": "Requester", "type": "string"},
{"name": "requester_email", "label": "Email", "type": "string"},
{"name": "city", "label": "City", "type": "string"},
{"name": "country", "label": "Country", "type": "string"},
{"name": "service", "label": "Service", "type": "string"},
{"name": "incident_type", "label": "Incident Type", "type": "string"},
{"name": "product_name", "label": "Product", "type": "string"},
{"name": "manufacturer", "label": "Manufacturer", "type": "string"},
{"name": "created_at", "label": "Created", "type": "datetime"},
{"name": "updated_at", "label": "Updated", "type": "datetime"},
{"name": "urgency", "label": "Urgency", "type": "string"},
{"name": "impact", "label": "Impact", "type": "string"},
{"name": "resolution", "label": "Resolution", "type": "string"},
{"name": "notes", "label": "Notes", "type": "string"},
]


@app.route("/api/csv-tickets/fields", methods=["GET"])
async def get_csv_ticket_fields():
    """Get metadata about available CSV ticket fields."""
@@ -539,6 +558,46 @@ def get_sort_key(ticket: Ticket):
})


@app.route("/api/csv-tickets/<ticket_id>", methods=["GET"])
async def get_csv_ticket(ticket_id: str):
    """
    Get one CSV ticket by ID.

    Query params:
    - fields: optional comma-separated list of fields to include
    """
    try:
        parsed_id = UUID(ticket_id)
    except ValueError:
        return jsonify({"error": "Invalid ticket ID"}), 400

    ticket = _csv_ticket_service.get_ticket(parsed_id)
    if ticket is None:
        return jsonify({"error": "Ticket not found"}), 404

    fields_param = request.args.get("fields", "")
    if fields_param:
        selected_fields = [f.strip() for f in fields_param.split(",") if f.strip()]
    else:
        selected_fields = list(ticket.model_fields.keys())

> **Copilot AI** (Feb 11, 2026), on lines +578 to +583: When no `fields` query param is provided, this endpoint returns every Pydantic field on the `Ticket` model (including `requester_email`/phone/etc.). That can unintentionally expose PII and also produce large payloads. Consider defaulting to the same curated field set used by the list endpoint (or the `CSV_TICKET_FIELDS` allow-list), and rejecting unknown field names with a 400 to catch typos.
    result = {}
    for field in selected_fields:
        val = getattr(ticket, field, None)
        if val is None:
            result[field] = None
        elif hasattr(val, "value"):
            result[field] = val.value
        elif hasattr(val, "isoformat"):
            result[field] = val.isoformat()
        elif isinstance(val, UUID):  # hasattr(val, "hex") would also match floats and bytes
            result[field] = str(val)
        else:
            result[field] = val

    return jsonify(result), 200
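The per-field conversion in this endpoint (enums → `.value`, datetimes → ISO 8601, UUIDs → strings, everything else passed through) can be factored into a small helper. A sketch using explicit `isinstance` checks; `serialize_field` is illustrative, not a name from the codebase:

```python
# JSON-friendly scalar conversion matching the endpoint's field handling.
from datetime import datetime
from enum import Enum
from uuid import UUID

def serialize_field(val):
    """Convert one ticket field value to a JSON-serializable form."""
    if val is None:
        return None
    if isinstance(val, Enum):
        return val.value          # e.g. TicketStatus.NEW -> "new"
    if isinstance(val, datetime):
        return val.isoformat()    # "2026-02-11T00:00:00"
    if isinstance(val, UUID):
        return str(val)
    return val
```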


@app.route("/api/csv-tickets/stats", methods=["GET"])
async def get_csv_ticket_stats():
"""Get statistics about CSV tickets."""