Conversation
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
…out Assignee" Signed-off-by: Andre Bossard <anbossar@microsoft.com>
…and MCP JSON-RPC Signed-off-by: Andre Bossard <anbossar@microsoft.com>
Pull request overview
This PR adds a new QA Tickets feature to support incident management workflows, along with significant backend refactoring that extracts operation definitions into a dedicated module. The feature lets users view, filter, and make escalation decisions on QA tickets through a master-detail UI.
Key Changes
- New QA Tickets feature: Complete frontend component with filtering, master-detail view, and escalation decision actions (GOOD/ESCALATE)
- Backend refactoring: Operations moved from `app.py` to `operations.py` for better separation and sharing across REST, MCP, and agent interfaces
- Comprehensive ticket models: Pydantic models for the support ticket domain, including SLA calculations and reminder logic (currently unused)
Reviewed changes
Copilot reviewed 11 out of 12 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| frontend/src/services/api.js | Added getQATickets() API function following existing fetchJSON pattern |
| frontend/src/features/tickets/TicketList.jsx | New 500-line component with master-detail layout, filtering, and ticket decision actions |
| frontend/src/App.jsx | Added tickets tab to navigation with AlertUrgent icon and routing |
| backend/app.py | Extracted operations to operations.py module; added /api/qa-tickets mock endpoint |
| backend/tickets.py | Comprehensive Pydantic models for support tickets with SLA logic and reminder calculations |
| backend/test_tickets.py | Validation script demonstrating ticket models with sample data |
| backend/operations.py | Centralized operation definitions using @operation decorator for unified REST/MCP/agent access |
| backend/agents.py | Import fix to ensure operations register before LangChain tool discovery |
| docs/ticker_reminder/RULES_EN.md | English specification for ticket reminder feature with field references |
| docs/ticker_reminder/RULES.md | German specification for ticket reminder workflow |
| explain.drawio | Diagram illustrating LLM planning concepts |
Comments suppressed due to low confidence (5)
backend/app.py:245
- Missing input validation on the QA tickets endpoint. Unlike other REST endpoints in the file that wrap operations with Pydantic validation (see `rest_create_task`, `rest_update_task`), the `get_qa_tickets` endpoint doesn't validate query parameters or implement any error handling. While it currently returns mock data, once this becomes a real implementation it should follow the same validation and error-handling pattern as the other endpoints in the file.
    except Exception as e:
        return jsonify({"error": str(e)}), 500

# ============================================================================
# TICKET MCP EXAMPLE - Direct FastMCP client usage (no AI)
# ============================================================================

async def _call_ticket_mcp_tool(tool_name: str, args: dict | None = None) -> list[dict]:
    """
    Helper: Call a tool on the Ticket MCP server and extract results.

    This demonstrates using FastMCP client programmatically without any AI.
    The connection is opened, tool is called, and connection is closed.

    Args:
        tool_name: Name of the MCP tool to call (e.g., "list_tickets")
        args: Optional dict of arguments for the tool

    Returns:
        List of parsed JSON results from the tool response
    """
    args = args or {}
    results = []
    async with MCPClient(TICKET_MCP_SERVER_URL) as client:
        response = await client.call_tool(tool_name, args)
        # Extract text content from MCP response
        if hasattr(response, 'content') and response.content:
            for content_item in response.content:
                # Only process TextContent items (use getattr for type safety)
                text = getattr(content_item, 'text', None)
                if text is not None and isinstance(text, str):
                    try:
                        # Parse JSON if possible
                        results.append(json.loads(text))
                    except json.JSONDecodeError:
                        results.append({"text": text})
        return results
    return results


@app.route("/api/tickets", methods=["GET"])
async def rest_list_tickets():
    """
    List tickets from external Ticket MCP server.

    Example of calling MCP tools directly via FastMCP client.
    No AI involved - just pure MCP protocol.

    Query params:
        - status: Filter by status (new, assigned, in_progress, etc.)
        - priority: Filter by priority (critical, high, medium, low)
        - search: Full-text search in summary/description
        - page: Page number (default: 1)
        - page_size: Results per page (default: 20)
    """
    try:
        # Build args from query params
        args = {}
        for param in ["status", "priority", "city", "service", "search"]:
            if val := request.args.get(param):
                args[param] = val
        for param in ["page", "page_size"]:
            if val := request.args.get(param):
                args[param] = int(val)
        results = await _call_ticket_mcp_tool("list_tickets", args)
        return jsonify(results[0] if len(results) == 1 else results), 200
backend/app.py:245
- The QA tickets endpoint returns mock data with hardcoded dates ("2025-12-09T08:30:00Z", "2025-12-10T09:00:00Z"). These dates are in the past relative to the current date (December 17, 2025), which is correct for testing. However, the dates should either use relative time calculations or clearly document that this is temporary mock data that will be replaced.
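If the mock data stays for now, the timestamps could be derived relative to the current time so they never silently go stale. A minimal sketch (the field names mirror the mock payload; the helper and offset are illustrative, not part of the PR):

```python
from datetime import datetime, timedelta, timezone

def mock_qa_ticket(hours_ago: float) -> dict:
    """Build a mock ticket whose createdAt is always hours_ago in the past.

    TEMPORARY mock data helper - to be replaced by the real implementation.
    """
    now = datetime.now(timezone.utc)
    created = now - timedelta(hours=hours_ago)
    return {
        "createdAt": created.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "updatedAt": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
    }
```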
backend/app.py:245
- The QA tickets endpoint is not exposed through the unified operations system. According to the architecture guidelines, every new capability should be exposed via the shared `@operation` decorator so both REST and MCP interfaces stay in sync. The endpoint is defined directly in app.py as a REST-only route, bypassing the `operations.py` module. Consider creating an operation like `op_get_qa_tickets` in `operations.py`, following the pattern of the existing operations.
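The rough shape of such an operation, as a hypothetical sketch only: the real `@operation` decorator lives in `operations.py` and its signature may differ, so a minimal stand-in registry is included here to make the snippet self-contained.

```python
# Stand-in for the real @operation decorator from operations.py.
# Assumption: the real decorator registers a callable under a name so
# REST, MCP, and agent layers can all discover it. Signature may differ.
OPERATIONS = {}

def operation(name: str):
    def register(fn):
        OPERATIONS[name] = fn
        return fn
    return register

@operation("get_qa_tickets")
def op_get_qa_tickets(status: str = None) -> list:
    """Hypothetical operation wrapping the QA-tickets lookup (mock data)."""
    tickets = [
        {"id": "QA-1", "status": "new"},
        {"id": "QA-2", "status": "assigned"},
    ]
    return [t for t in tickets if status is None or t["status"] == status]
```

The REST route in app.py would then delegate to `OPERATIONS["get_qa_tickets"]`, and the MCP layer would pick it up from the same registry.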
backend/app.py:245
- Field naming inconsistency between backend and Pydantic models. The QA tickets endpoint uses camelCase (createdAt, updatedAt, escalationNeeded) while the Ticket model in tickets.py uses snake_case (created_at, updated_at). The frontend component expects camelCase, based on usage like `selectedTicket.createdAt` at line 425. This violates the architecture guideline that Pydantic models should define the exact field names expected by both REST serializers and MCP tooling. Choose one convention and apply it consistently across all layers.
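If snake_case stays canonical in the Pydantic models, the REST layer can still emit camelCase with a small pure conversion at the serialization boundary. (Pydantic v2 can also do this via an alias generator; the stdlib sketch below, with hypothetical helper names, just illustrates the mapping.)

```python
def to_camel(name: str) -> str:
    """snake_case -> camelCase, e.g. created_at -> createdAt."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

def camelize_keys(payload: dict) -> dict:
    """Convert top-level snake_case keys to the camelCase the frontend expects."""
    return {to_camel(k): v for k, v in payload.items()}
```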
backend/agents.py:38
- Import of 'operations' is not used.
from dotenv import load_dotenv
  Search20Regular,
  Warning24Regular,
} from '@fluentui/react-icons'
import { useState } from 'react'
The component neither imports nor uses useEffect: the TicketList component fetches data manually via the handleStartAnalysis button instead of the typical useEffect-on-mount pattern seen in other features like TaskList. This is inconsistent with the architecture, where features typically load data automatically on mount. Consider adding a useEffect hook to load initial data, following the TaskList component pattern.
const [loading, setLoading] = useState(false)
const [error, setError] = useState(null)
const [hasStarted, setHasStarted] = useState(false)
const [ticketDecisions, setTicketDecisions] = useState({})
The ticketDecisions state is used to track user decisions but is never persisted or sent to the backend. The TODO comments at lines 272 and 280 indicate planned backend integration, but the state will be lost on page refresh. Consider either implementing the backend integration now or adding a warning in the UI that decisions are not yet persisted.
/**
 * TicketList Component
 *
 * Master-detail view for QA incident tickets needing escalation
 * Displays unassigned tickets in a list with detail panel
 *
 * Following principles:
 * - Pure functions for data transformations (calculations)
 * - Side effects isolated in event handlers (actions)
 * - Master-detail layout with split view
 */

import {
  Badge,
  Button,
  DataGrid,
  DataGridBody,
  DataGridCell,
  DataGridHeader,
  DataGridHeaderCell,
  DataGridRow,
  Field,
  Input,
  MessageBar,
  MessageBarBody,
  Select,
  Spinner,
  TableCellLayout,
  Text,
  createTableColumn,
The new QA Tickets feature lacks E2E test coverage. The existing test suite in tests/e2e/app.spec.js has comprehensive coverage for other tabs (Dashboard, Tasks, About) but the new tickets tab with data-testid="tab-tickets" is not tested. According to the coding guidelines, the E2E suite assumes sample data exists and tabs are labeled via data-testid. Add test coverage for the tickets tab, including navigation, data loading via the "Start QA Analyse" button, filtering, selection, and the GOOD/ESCALATE action buttons.
| """ | ||
| Support Ticket Models | ||
|
|
||
| Strictly typed Pydantic models for the support-tickets MCP service. | ||
| These models match the schema from the external MCP tools and provide | ||
| type safety, validation, and documentation. | ||
|
|
||
| Following "Grokking Simplicity" and "A Philosophy of Software Design": | ||
| - Deep module: Clear types hide complexity | ||
| - Separation: Data models (pure), Calculations (pure), Actions (I/O) | ||
| """ | ||
|
|
||
| from datetime import datetime | ||
| from enum import Enum | ||
| from typing import Optional | ||
| from uuid import UUID | ||
|
|
||
| from pydantic import BaseModel, Field | ||
|
|
||
|
|
||
| # ============================================================================ | ||
| # ENUMS - Status and Priority types | ||
| # ============================================================================ | ||
|
|
||
| class TicketStatus(str, Enum): | ||
| """Ticket lifecycle status.""" | ||
| NEW = "new" | ||
| ASSIGNED = "assigned" | ||
| IN_PROGRESS = "in_progress" | ||
| PENDING = "pending" | ||
| RESOLVED = "resolved" | ||
| CLOSED = "closed" | ||
| CANCELLED = "cancelled" | ||
|
|
||
|
|
||
| class TicketPriority(str, Enum): | ||
| """Ticket priority with associated SLA deadlines.""" | ||
| CRITICAL = "critical" # 30 minutes | ||
| HIGH = "high" # 2 hours | ||
| MEDIUM = "medium" # 4 hours | ||
| LOW = "low" # 8 hours | ||
|
|
||
|
|
||
| class ModificationStatus(str, Enum): | ||
| """Status of a modification request.""" | ||
| PENDING = "pending" | ||
| APPROVED = "approved" | ||
| REJECTED = "rejected" | ||
|
|
||
|
|
||
| class WorkLogType(str, Enum): | ||
| """Type of worklog entry.""" | ||
| CREATION = "creation" | ||
| UPDATE = "update" | ||
| REMINDER = "reminder" | ||
| NOTE = "note" | ||
| RESOLUTION = "resolution" | ||
|
|
||
|
|
||
| # ============================================================================ | ||
| # PRIORITY SLA DEADLINES (in minutes) | ||
| # ============================================================================ | ||
|
|
||
| PRIORITY_SLA_MINUTES: dict[TicketPriority, int] = { | ||
| TicketPriority.CRITICAL: 30, | ||
| TicketPriority.HIGH: 120, | ||
| TicketPriority.MEDIUM: 240, | ||
| TicketPriority.LOW: 480, | ||
| } | ||
|
|
||
|
|
||
# ============================================================================
# WORKLOG MODEL
# ============================================================================

class WorkLog(BaseModel):
    """Worklog entry for a ticket."""
    id: UUID = Field(..., description="Unique worklog identifier")
    ticket_id: UUID = Field(..., description="Parent ticket ID")
    created_at: datetime = Field(..., description="When the log was created")
    log_type: str = Field(..., description="Type of log entry")
    summary: str = Field(..., description="Log summary")
    details: Optional[str] = Field(None, description="Detailed log content")
    author: str = Field(..., description="Who created the log")
    time_spent_minutes: int = Field(default=0, ge=0, description="Time spent in minutes")

    class Config:
        from_attributes = True


class WorkLogCreate(BaseModel):
    """Data to create a new worklog entry."""
    log_type: str = Field(..., description="Type of log entry")
    summary: str = Field(..., max_length=500, description="Log summary")
    details: Optional[str] = Field(None, max_length=5000, description="Detailed content")
    author: str = Field(..., max_length=200, description="Author name")
    time_spent_minutes: int = Field(default=0, ge=0, description="Time spent")


# ============================================================================
# MODIFICATION REQUEST MODEL
# ============================================================================

class Modification(BaseModel):
    """Modification request for a ticket field."""
    id: UUID = Field(..., description="Unique modification identifier")
    ticket_id: UUID = Field(..., description="Target ticket ID")
    created_at: datetime = Field(..., description="Request timestamp")
    requested_by: str = Field(..., description="Who requested the change")
    field_name: str = Field(..., description="Field to modify")
    proposed_value: str = Field(..., description="New value proposed")
    reason: Optional[str] = Field(None, description="Justification for change")
    status: ModificationStatus = Field(default=ModificationStatus.PENDING)
    reviewed_by: Optional[str] = Field(None, description="Reviewer name")
    reviewed_at: Optional[datetime] = Field(None, description="Review timestamp")
    review_notes: Optional[str] = Field(None, description="Reviewer notes")

    class Config:
        from_attributes = True


class ModificationCreate(BaseModel):
    """Data to request a modification."""
    field_name: str = Field(..., description="Field to modify")
    proposed_value: str = Field(..., description="New value")
    requested_by: str = Field(..., description="Requester name")
    reason: Optional[str] = Field(None, max_length=1000, description="Reason for change")


class ModificationReview(BaseModel):
    """Data to review a modification request."""
    status: ModificationStatus = Field(..., description="Approval decision")
    reviewed_by: str = Field(..., description="Reviewer name")
    review_notes: Optional[str] = Field(None, max_length=1000, description="Review notes")


# ============================================================================
# OVERLAY METADATA - Tracks pending/approved modifications
# ============================================================================

class OverlayMetadata(BaseModel):
    """Metadata about ticket modifications."""
    has_pending: bool = Field(default=False, description="Has pending modifications")
    has_overlays: bool = Field(default=False, description="Has approved overlays")
    pending_count: int = Field(default=0, ge=0, description="Number of pending mods")
    approved_count: int = Field(default=0, ge=0, description="Number of approved mods")
    overlaid_fields: dict[str, str] = Field(default_factory=dict, description="Field -> approved value")


# ============================================================================
# TICKET MODELS
# ============================================================================

class Ticket(BaseModel):
    """
    Complete support ticket representation.

    Matches the schema from the support-tickets MCP service.
    """
    # Core identifiers
    id: UUID = Field(..., description="Unique ticket identifier")

    # Summary and description
    summary: str = Field(..., max_length=500, description="Short issue summary")
    description: str = Field(..., description="Detailed issue description")

    # Status and priority
    status: TicketStatus = Field(..., description="Current ticket status")
    priority: TicketPriority = Field(..., description="Ticket priority")
    impact: Optional[str] = Field(None, description="Business impact level")
    urgency: Optional[str] = Field(None, description="Urgency level")

    # Assignment
    assignee: Optional[str] = Field(None, description="Assigned agent name")
    assigned_group: Optional[str] = Field(None, description="Support group name")
    support_organization: Optional[str] = Field(None, description="Support org")

    # Requester info
    requester_name: str = Field(..., description="Name of person reporting")
    requester_email: str = Field(..., description="Email of person reporting")
    requester_phone: Optional[str] = Field(None, description="Phone number")
    requester_company: Optional[str] = Field(None, description="Company name")
    requester_department: Optional[str] = Field(None, description="Department")

    # Location
    city: Optional[str] = Field(None, description="City/location")
    country: Optional[str] = Field(None, description="Country")
    site: Optional[str] = Field(None, description="Site name")
    desk_location: Optional[str] = Field(None, description="Desk location")

    # Service and product info
    service: Optional[str] = Field(None, description="Affected service")
    incident_type: Optional[str] = Field(None, description="Type of incident")
    reported_source: Optional[str] = Field(None, description="How ticket was reported")

    # Product details
    product_name: Optional[str] = Field(None, description="Product name")
    manufacturer: Optional[str] = Field(None, description="Manufacturer")
    model_version: Optional[str] = Field(None, description="Model/version")
    ci_name: Optional[str] = Field(None, description="Configuration item name")

    # Categories (tiered)
    operational_category_tier1: Optional[str] = Field(None, description="Op category tier 1")
    operational_category_tier2: Optional[str] = Field(None, description="Op category tier 2")
    operational_category_tier3: Optional[str] = Field(None, description="Op category tier 3")
    product_category_tier1: Optional[str] = Field(None, description="Product category tier 1")
    product_category_tier2: Optional[str] = Field(None, description="Product category tier 2")
    product_category_tier3: Optional[str] = Field(None, description="Product category tier 3")

    # Resolution
    resolution: Optional[str] = Field(None, description="Resolution details")
    notes: Optional[str] = Field(None, description="Additional notes")

    # Correlation
    event_id: Optional[str] = Field(None, description="Related event ID")
    correlation_key: Optional[str] = Field(None, description="Correlation key")

    # Timestamps
    created_at: datetime = Field(..., description="Creation timestamp")
    updated_at: datetime = Field(..., description="Last update timestamp")

    class Config:
        from_attributes = True


class TicketWithDetails(Ticket):
    """Ticket with work logs and modifications included."""
    work_logs: list[WorkLog] = Field(default_factory=list, description="Worklog entries")
    modifications: list[Modification] = Field(default_factory=list, description="Modification requests")
    overlay_metadata: Optional[OverlayMetadata] = Field(None, description="Modification metadata")


class TicketCreate(BaseModel):
    """Data required to create a new ticket."""
    summary: str = Field(..., min_length=1, max_length=500, description="Short issue summary")
    description: str = Field(..., min_length=1, description="Detailed description")
    requester_name: str = Field(..., min_length=1, max_length=200, description="Reporter name")
    requester_email: str = Field(..., description="Reporter email")

    # Optional fields
    priority: TicketPriority = Field(default=TicketPriority.MEDIUM, description="Priority level")
    status: TicketStatus = Field(default=TicketStatus.NEW, description="Initial status")
    service: Optional[str] = Field(None, max_length=200, description="Affected service")
    city: Optional[str] = Field(None, max_length=100, description="Location")
    requester_department: Optional[str] = Field(None, max_length=200, description="Department")


class TicketUpdate(BaseModel):
    """Data for updating a ticket. All fields optional."""
    summary: Optional[str] = Field(None, min_length=1, max_length=500)
    description: Optional[str] = Field(None, min_length=1)
    status: Optional[TicketStatus] = Field(None)
    priority: Optional[TicketPriority] = Field(None)
    assignee: Optional[str] = Field(None, max_length=200)
    assigned_group: Optional[str] = Field(None, max_length=200)
    service: Optional[str] = Field(None, max_length=200)
    resolution: Optional[str] = Field(None)
    notes: Optional[str] = Field(None)

class TicketFilter(str, Enum):
    """Filter options for listing tickets."""
    ALL = "all"
    COMPLETED = "completed"
    PENDING = "pending"


# ============================================================================
# STATISTICS MODELS
# ============================================================================

class TicketStats(BaseModel):
    """Aggregated ticket statistics."""
    total_tickets: int = Field(..., ge=0, description="Total ticket count")
    tickets_today: int = Field(..., ge=0, description="Tickets created today")
    tickets_this_week: int = Field(..., ge=0, description="Tickets this week")
    by_status: dict[str, int] = Field(default_factory=dict, description="Count by status")
    by_priority: dict[str, int] = Field(default_factory=dict, description="Count by priority")
    by_city: dict[str, int] = Field(default_factory=dict, description="Count by city")
    by_service: dict[str, int] = Field(default_factory=dict, description="Count by service")
    active_events: int = Field(default=0, ge=0, description="Active mass events")


# ============================================================================
# REMINDER MODELS - For the "Assigned without Assignee" feature
# ============================================================================

class ReminderCandidate(BaseModel):
    """Ticket that may need a reminder."""
    ticket: Ticket = Field(..., description="The ticket")
    minutes_since_creation: int = Field(..., description="Age in minutes")
    sla_deadline_minutes: int = Field(..., description="SLA deadline")
    is_overdue: bool = Field(..., description="Past SLA deadline")
    was_reminded_before: bool = Field(..., description="Already reminded once")
    reminder_count: int = Field(default=0, ge=0, description="Times reminded")


class ReminderRequest(BaseModel):
    """Request to send reminders for selected tickets."""
    ticket_ids: list[UUID] = Field(..., min_length=1, description="Tickets to remind")
    reminded_by: str = Field(..., description="Who is sending reminders")
    message: Optional[str] = Field(None, max_length=2000, description="Custom message")


class ReminderResult(BaseModel):
    """Result of sending reminders."""
    successful: list[UUID] = Field(default_factory=list, description="Successfully reminded")
    failed: list[UUID] = Field(default_factory=list, description="Failed to remind")
    errors: dict[str, str] = Field(default_factory=dict, description="Error messages by ticket ID")


# ============================================================================
# ERROR MODEL
# ============================================================================

class TicketError(BaseModel):
    """Error response."""
    error: str = Field(..., description="Error message")
    detail: Optional[str] = Field(None, description="Additional details")


# ============================================================================
# CALCULATIONS - Pure functions for ticket logic
# ============================================================================

def get_sla_deadline_minutes(priority: TicketPriority) -> int:
    """Get SLA deadline in minutes for a priority level."""
    return PRIORITY_SLA_MINUTES.get(priority, 480)


def calculate_minutes_elapsed(created_at: datetime, now: Optional[datetime] = None) -> int:
    """Calculate minutes elapsed since ticket creation."""
    if now is None:
        now = datetime.now(created_at.tzinfo)
    delta = now - created_at
    return int(delta.total_seconds() / 60)


def is_ticket_overdue(ticket: Ticket, now: Optional[datetime] = None) -> bool:
    """Check if ticket is past its SLA deadline."""
    elapsed = calculate_minutes_elapsed(ticket.created_at, now)
    deadline = get_sla_deadline_minutes(ticket.priority)
    return elapsed > deadline


def is_assigned_without_assignee(ticket: Ticket) -> bool:
    """Check if ticket is assigned to group but has no individual assignee."""
    return (
        ticket.assigned_group is not None
        and ticket.assignee is None
        and ticket.status in (TicketStatus.NEW, TicketStatus.ASSIGNED)
    )


def count_reminders_in_worklogs(work_logs: list[WorkLog]) -> int:
    """Count how many reminder entries exist in worklogs."""
    return sum(1 for log in work_logs if log.log_type == WorkLogType.REMINDER.value)


def build_reminder_candidate(
    ticket: Ticket,
    work_logs: list[WorkLog],
    now: Optional[datetime] = None
) -> ReminderCandidate:
    """Build a ReminderCandidate from ticket and its worklogs."""
    elapsed = calculate_minutes_elapsed(ticket.created_at, now)
    deadline = get_sla_deadline_minutes(ticket.priority)
    reminder_count = count_reminders_in_worklogs(work_logs)

    return ReminderCandidate(
        ticket=ticket,
        minutes_since_creation=elapsed,
        sla_deadline_minutes=deadline,
        is_overdue=elapsed > deadline,
        was_reminded_before=reminder_count > 0,
        reminder_count=reminder_count,
    )
|
|
||
|
|
||
| # ============================================================================ | ||
| # EXPORTS | ||
| # ============================================================================ | ||
|
|
||
| __all__ = [ | ||
| # Enums | ||
| "TicketStatus", | ||
| "TicketPriority", | ||
| "ModificationStatus", | ||
| "WorkLogType", | ||
| # Constants | ||
| "PRIORITY_SLA_MINUTES", | ||
| # Models | ||
| "Ticket", | ||
| "TicketWithDetails", | ||
| "TicketCreate", | ||
| "TicketUpdate", | ||
| "TicketFilter", | ||
| "TicketStats", | ||
| "TicketError", | ||
| "WorkLog", | ||
| "WorkLogCreate", | ||
| "Modification", | ||
| "ModificationCreate", | ||
| "ModificationReview", | ||
| "OverlayMetadata", | ||
| # Reminder models | ||
| "ReminderCandidate", | ||
| "ReminderRequest", | ||
| "ReminderResult", | ||
| # Calculations | ||
| "get_sla_deadline_minutes", | ||
| "calculate_minutes_elapsed", | ||
| "is_ticket_overdue", | ||
| "is_assigned_without_assignee", | ||
| "count_reminders_in_worklogs", | ||
| "build_reminder_candidate", | ||
| ] |
The comprehensive ticket models in this file are well-structured but completely unused in the current PR. The QA tickets endpoint in app.py returns simple dictionaries with different field names (e.g., createdAt vs created_at, title vs summary) that don't match the Pydantic models defined here. This creates a disconnect between the backend models and the actual API contract. Either use these models in the QA tickets endpoint or remove them if they're for future work.
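The `created_at`/`createdAt` mismatch noted above could be bridged without hand-built dicts by giving the Pydantic models camelCase serialization aliases. A minimal sketch, assuming pydantic v2 — the `TicketOut` model here is illustrative, not the PR's actual `Ticket` model:

```python
# Sketch: serialize snake_case Pydantic models to the camelCase JSON the
# frontend expects, instead of hand-building dicts in app.py.
# Assumes pydantic v2; field names are illustrative.
from datetime import datetime, timezone

from pydantic import BaseModel, ConfigDict
from pydantic.alias_generators import to_camel


class TicketOut(BaseModel):
    # populate_by_name lets us construct with snake_case names, while
    # model_dump(by_alias=True) emits camelCase keys for the frontend.
    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)

    id: str
    title: str
    created_at: datetime
    escalation_needed: bool


ticket = TicketOut(
    id="QA-1",
    title="Sample ticket",
    created_at=datetime(2024, 1, 1, tzinfo=timezone.utc),
    escalation_needed=True,
)
payload = ticket.model_dump(by_alias=True, mode="json")
# payload keys: id, title, createdAt, escalationNeeded
```

This keeps a single source of truth for the API contract in `tickets.py` while leaving the frontend field names unchanged.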
|
|
||
| function formatDate(isoString) { | ||
| const date = new Date(isoString) | ||
| return date.toLocaleDateString('de-DE') + ' ' + date.toLocaleTimeString('de-DE', { hour: '2-digit', minute: '2-digit' }) |
The date formatting hardcodes the 'de-DE' locale inside this pure function. The locale should be configurable or consistent with the rest of the application: the TaskList component calls the same APIs with no locale parameters, so it uses the browser default. Consider extracting the locale into a configuration constant or following the same pattern as the other features.
| return date.toLocaleDateString('de-DE') + ' ' + date.toLocaleTimeString('de-DE', { hour: '2-digit', minute: '2-digit' }) | |
| return date.toLocaleDateString() + ' ' + date.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' }) |
| <div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}> | ||
| <Text className={styles.title}>Unassigned QA Starten</Text> | ||
| <Button | ||
| appearance="primary" | ||
| icon={<PlayCircle24Regular />} | ||
| onClick={handleStartAnalysis} | ||
| disabled={loading} | ||
| data-testid="start-analysis-button" | ||
| > | ||
| {loading ? 'Lädt...' : 'Start QA Analyse'} | ||
| </Button> | ||
| </div> | ||
| {error && ( | ||
| <MessageBar intent="error" style={{ marginTop: tokens.spacingVerticalM }}> | ||
| <MessageBarBody>{error}</MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
| </div> | ||
|
|
||
| {loading ? ( | ||
| <div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', height: '400px' }}> | ||
| <Spinner size="large" label="Lade QA Tickets..." /> | ||
| </div> | ||
| ) : !hasStarted ? ( | ||
| <div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', height: '400px', flexDirection: 'column', gap: tokens.spacingVerticalM }}> | ||
| <Text size={500} weight="semibold">Bereit für QA Analyse</Text> | ||
| <Text size={400}>Klicken Sie auf "Start QA Analyse", um Tickets zu laden</Text> | ||
| </div> | ||
| ) : ( | ||
| <div className={styles.layout}> | ||
| {/* LEFT PANEL - List */} | ||
| <div className={styles.listPanel}> | ||
| <div className={styles.filterBar}> | ||
| <Field style={{ flexGrow: 1 }}> | ||
| <Input | ||
| placeholder="Suche nach ID, Titel oder Beschreibung..." | ||
| value={searchTerm} | ||
| onChange={(e, data) => setSearchTerm(data.value)} | ||
| contentBefore={<Search20Regular />} | ||
| data-testid="ticket-search" | ||
| /> | ||
| </Field> | ||
| <Field style={{ minWidth: '150px' }}> | ||
| <Select | ||
| value={priorityFilter} | ||
| onChange={(e, data) => setPriorityFilter(data.value)} | ||
| data-testid="filter-priority" | ||
| > | ||
| <option value="all">Alle Prioritäten</option> | ||
| <option value="Critical">Critical</option> | ||
| <option value="High">High</option> | ||
| <option value="Medium">Medium</option> | ||
| <option value="Low">Low</option> | ||
| </Select> | ||
| </Field> | ||
| </div> | ||
|
|
||
| <div className={styles.gridContainer}> | ||
| <DataGrid | ||
| items={filteredTickets} | ||
| columns={columns} | ||
| sortable | ||
| getRowId={(item) => item.id} | ||
| > | ||
| <DataGridHeader> | ||
| <DataGridRow> | ||
| {({ renderHeaderCell }) => ( | ||
| <DataGridHeaderCell>{renderHeaderCell()}</DataGridHeaderCell> | ||
| )} | ||
| </DataGridRow> | ||
| </DataGridHeader> | ||
| <DataGridBody> | ||
| {({ item, rowId }) => ( | ||
| <DataGridRow | ||
| key={rowId} | ||
| onClick={() => handleRowClick(item)} | ||
| className={selectedTicket?.id === item.id ? styles.selectedRow : ''} | ||
| style={{ cursor: 'pointer' }} | ||
| data-testid={`ticket-row-${item.id}`} | ||
| > | ||
| {({ renderCell }) => <DataGridCell>{renderCell(item)}</DataGridCell>} | ||
| </DataGridRow> | ||
| )} | ||
| </DataGridBody> | ||
| </DataGrid> | ||
| </div> | ||
| </div> | ||
|
|
||
| {/* RIGHT PANEL - Detail */} | ||
| <div className={styles.detailPanel}> | ||
| {selectedTicket ? ( | ||
| <div className={styles.detailContent}> | ||
| <div> | ||
| <Text size={600} weight="semibold"> | ||
| {selectedTicket.title} | ||
| </Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Ticket ID</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.id}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Status</Text> | ||
| <Badge appearance="outline">{selectedTicket.status}</Badge> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Priorität</Text> | ||
| <Badge appearance={getPriorityBadge(selectedTicket.priority)}> | ||
| {selectedTicket.priority} | ||
| </Badge> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Kategorie</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.category}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Reporter</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.reporter}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Zugewiesen an</Text> | ||
| <Text className={styles.detailValue}> | ||
| {selectedTicket.assignee || 'Nicht zugewiesen'} | ||
| </Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Erstellt am</Text> | ||
| <Text className={styles.detailValue}>{formatDate(selectedTicket.createdAt)}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Aktualisiert am</Text> | ||
| <Text className={styles.detailValue}>{formatDate(selectedTicket.updatedAt)}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Beschreibung</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.description}</Text> | ||
| </div> | ||
|
|
||
| {selectedTicket.escalationNeeded && ( | ||
| <MessageBar intent="warning"> | ||
| <MessageBarBody> | ||
| <AlertUrgent20Regular /> Eskalation erforderlich | ||
| </MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
|
|
||
| {ticketDecisions[selectedTicket.id] && ( | ||
| <MessageBar intent={ticketDecisions[selectedTicket.id] === 'GOOD' ? 'success' : 'error'}> | ||
| <MessageBarBody> | ||
| {ticketDecisions[selectedTicket.id] === 'GOOD' ? ( | ||
| <><Checkmark24Regular /> Als GOOD markiert</> | ||
| ) : ( | ||
| <><Warning24Regular /> Zur Eskalation markiert</> | ||
| )} | ||
| </MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
| </div> | ||
| ) : ( | ||
| <div className={styles.emptyDetail}> | ||
| <Text size={400}>Wählen Sie ein Ticket aus der Liste aus</Text> | ||
| </div> | ||
| )} | ||
| </div> | ||
| </div> | ||
| )} | ||
|
|
||
| {/* FOOTER */} | ||
| <div className={styles.footer}> | ||
| <div style={{ gridColumn: '1 / 3' }}> | ||
| {reminderMessage && ( | ||
| <MessageBar intent="success"> | ||
| <MessageBarBody>{reminderMessage}</MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
| </div> | ||
| <Button | ||
| appearance="primary" | ||
| icon={<Checkmark24Regular />} | ||
| onClick={handleMarkAsGood} | ||
| disabled={!selectedTicket} | ||
| style={{ backgroundColor: tokens.colorPaletteGreenBackground3, color: 'white' }} | ||
| data-testid="mark-good-button" | ||
| > | ||
| GOOD | ||
| </Button> | ||
| <Button | ||
| appearance="primary" | ||
| icon={<Warning24Regular />} | ||
| onClick={handleMarkAsEscalate} | ||
| disabled={!selectedTicket} | ||
| style={{ backgroundColor: tokens.colorPaletteRedBackground3, color: 'white' }} |
Inline style usage breaks the established pattern of using FluentUI's makeStyles. The component defines useStyles but then uses inline styles in several places (lines 291, 304, 311, 315, 324, 333, 368, 469, 481, 491). According to the coding guidelines, prefer adjusting the useStyles definitions over inline styles to maintain theming consistency.
| @@ -0,0 +1,67 @@ | |||
| # Feature Canvas C5: "Assigned without Assignee" Ticket Reminder | |||
Typo in filename: "ticker_reminder" should likely be "ticket_reminder". The word "ticker" typically refers to a stock ticker or news ticker, while "ticket" refers to support tickets. The content of the documentation files clearly refers to support tickets, not tickers. This naming inconsistency could cause confusion.
| @@ -0,0 +1,20 @@ | |||
| Features Canvas C5: Tickets “Assigned without Assignee” reminden | |||
Typo in filename: "ticker_reminder" should likely be "ticket_reminder". The word "ticker" typically refers to a stock ticker or news ticker, while "ticket" refers to support tickets. The content of the documentation files clearly refers to support tickets, not tickers. This naming inconsistency could cause confusion.
| Features Canvas C5: Tickets “Assigned without Assignee” reminden | |
| Features Canvas C5: Tickets “Assigned without Assignee” Reminder |
| const handleReminder = () => { | ||
| if (selectedTicket) { | ||
| setReminderMessage(`Erinnerung für Ticket ${selectedTicket.id} wurde gesendet.`) | ||
| // TODO: Backend integration - send reminder API call | ||
| } | ||
| } | ||
|
|
Unused variable handleReminder.
| const handleReminder = () => { | |
| if (selectedTicket) { | |
| setReminderMessage(`Erinnerung für Ticket ${selectedTicket.id} wurde gesendet.`) | |
| // TODO: Backend integration - send reminder API call | |
| } | |
| } |
backend/app.py
Outdated
| from operations import (op_create_task, op_delete_task, op_get_task, | ||
| op_get_task_stats, op_list_ollama_models, | ||
| op_list_tasks, op_ollama_chat, op_update_task, | ||
| task_service) |
Import of 'Task' is not used.
Import of 'TaskStats' is not used.
…g QA tickets endpoint Signed-off-by: Andre Bossard <anbossar@microsoft.com>
…ckets Signed-off-by: Andre Bossard <anbossar@microsoft.com>
Pull request overview
Copilot reviewed 12 out of 13 changed files in this pull request and generated 19 comments.
Comments suppressed due to low confidence (4)
frontend/src/features/tickets/TicketsWithoutAnAssignee.jsx:247
- Unused variable handleReminder.
const handleReminder = () => {
frontend/src/features/tickets/TicketList.jsx:247
- Unused variable handleReminder.
const handleReminder = () => {
backend/app.py:37
- Import of 'ChatResponse' is not used.
Import of 'ModelListResponse' is not used.
Import of 'OllamaService' is not used.
from ollama_service import ChatRequest, ChatResponse, ModelListResponse, OllamaService
backend/app.py:58
- Import of 'Task' is not used.
Import of 'TaskService' is not used.
Import of 'TaskStats' is not used.
from tasks import Task, TaskCreate, TaskFilter, TaskService, TaskStats, TaskUpdate
| // TODO: Backend integration - send reminder API call | ||
| } | ||
| } | ||
|
|
||
| const handleStartAnalysis = async () => { | ||
| setLoading(true) | ||
| setError(null) | ||
| try { | ||
| const response = await getQATickets() | ||
| setTickets(response.tickets) | ||
| setHasStarted(true) | ||
| } catch (err) { | ||
| setError(err.message || 'Fehler beim Laden der Tickets') | ||
| } finally { | ||
| setLoading(false) | ||
| } | ||
| } | ||
|
|
||
| const handleMarkAsGood = () => { | ||
| if (selectedTicket) { | ||
| setTicketDecisions(prev => ({ ...prev, [selectedTicket.id]: 'GOOD' })) | ||
| setReminderMessage(`Ticket ${selectedTicket.id} als GOOD markiert.`) | ||
| // TODO: Backend integration - update ticket status | ||
| } | ||
| } | ||
|
|
||
| const handleMarkAsEscalate = () => { | ||
| if (selectedTicket) { | ||
| setTicketDecisions(prev => ({ ...prev, [selectedTicket.id]: 'ESCALATE' })) | ||
| setReminderMessage(`Ticket ${selectedTicket.id} zur Eskalation markiert.`) | ||
| // TODO: Backend integration - escalate ticket | ||
| } |
The TODO comments indicate missing backend integration for critical functionality (sending reminders, updating ticket status, escalating tickets). While TODOs are acceptable during development, these represent core features of the ticket reminder system described in the requirements documents. The handlers currently only update local state, meaning user actions don't persist. Consider either implementing the backend operations following the @operation pattern, or if this is intentional for the PR scope, document this limitation clearly in the PR description.
| // TODO: Backend integration - send reminder API call | ||
| } | ||
| } | ||
|
|
||
| const handleStartAnalysis = async () => { | ||
| setLoading(true) | ||
| setError(null) | ||
| try { | ||
| const response = await getQATickets() | ||
| setTickets(response.tickets) | ||
| setHasStarted(true) | ||
| } catch (err) { | ||
| setError(err.message || 'Fehler beim Laden der Tickets') | ||
| } finally { | ||
| setLoading(false) | ||
| } | ||
| } | ||
|
|
||
| const handleMarkAsGood = () => { | ||
| if (selectedTicket) { | ||
| setTicketDecisions(prev => ({ ...prev, [selectedTicket.id]: 'GOOD' })) | ||
| setReminderMessage(`Ticket ${selectedTicket.id} als GOOD markiert.`) | ||
| // TODO: Backend integration - update ticket status | ||
| } | ||
| } | ||
|
|
||
| const handleMarkAsEscalate = () => { | ||
| if (selectedTicket) { | ||
| setTicketDecisions(prev => ({ ...prev, [selectedTicket.id]: 'ESCALATE' })) | ||
| setReminderMessage(`Ticket ${selectedTicket.id} zur Eskalation markiert.`) | ||
| // TODO: Backend integration - escalate ticket | ||
| } |
The TODO comments indicate missing backend integration for critical functionality (sending reminders, updating ticket status, escalating tickets). While TODOs are acceptable during development, these represent core features of the ticket reminder system described in the requirements documents. The handlers currently only update local state, meaning user actions don't persist. Consider either implementing the backend operations following the @operation pattern, or if this is intentional for the PR scope, document this limitation clearly in the PR description.
| print(" REST API: http://localhost:5001/api/*") | ||
| print(" MCP JSON-RPC: http://localhost:5001/mcp") | ||
| print() | ||
| print("💡 Port 5001 (macOS AirPlay uses 5000)") | ||
| print("=" * 70) | ||
|
|
||
| app.run(debug=True, host="0.0.0.0", port=5001) |
Duplicate code at the end of the file: the startup print statements and the app.run() call appear twice (lines 500-509). Since app.run() blocks until shutdown, the second copy is dead code rather than a second server start, but it is clearly a merge or copy-paste error and the duplicate block should be removed.
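A minimal sketch of the deduplicated entry point — the banner helper and guard are illustrative, assuming standard Flask startup:

```python
# Sketch: keep exactly one startup banner and one app.run() call, behind
# the __main__ guard. Flask's app.run() blocks, so any duplicated lines
# after it are dead code and should be deleted rather than kept twice.
def startup_banner(port: int = 5001) -> str:
    return "\n".join([
        "=" * 70,
        f" REST API: http://localhost:{port}/api/*",
        f" MCP JSON-RPC: http://localhost:{port}/mcp",
        f"💡 Port {port} (macOS AirPlay uses 5000)",
        "=" * 70,
    ])


if __name__ == "__main__":
    print(startup_banner())
    # app.run(debug=True, host="0.0.0.0", port=5001)  # single, final call
```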
| <div className={styles.container}> | ||
| <div className={styles.header}> | ||
| <div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}> | ||
| <Text className={styles.title}>Unassigned QA Starten</Text> |
The header comment describes this component as "Master-detail view for QA incident tickets needing escalation" and says it "Displays unassigned tickets", but the title actually rendered is "Unassigned QA Starten", which mixes English and German and is unclear. The component should have a consistent, clear title that matches its purpose, e.g. "QA Ticket Analysis" or "QA-Tickets zur Eskalation" to match the German UI strings used elsewhere in the component.
| <div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}> | ||
| <Text className={styles.title}>Unassigned QA Starten</Text> | ||
| <Button | ||
| appearance="primary" | ||
| icon={<PlayCircle24Regular />} | ||
| onClick={handleStartAnalysis} | ||
| disabled={loading} | ||
| data-testid="start-analysis-button" | ||
| > | ||
| {loading ? 'Lädt...' : 'Start QA Analyse'} | ||
| </Button> | ||
| </div> | ||
| {error && ( | ||
| <MessageBar intent="error" style={{ marginTop: tokens.spacingVerticalM }}> | ||
| <MessageBarBody>{error}</MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
| </div> | ||
|
|
||
| {loading ? ( | ||
| <div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', height: '400px' }}> | ||
| <Spinner size="large" label="Lade QA Tickets..." /> | ||
| </div> | ||
| ) : !hasStarted ? ( | ||
| <div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', height: '400px', flexDirection: 'column', gap: tokens.spacingVerticalM }}> | ||
| <Text size={500} weight="semibold">Bereit für QA Analyse</Text> | ||
| <Text size={400}>Klicken Sie auf "Start QA Analyse", um Tickets zu laden</Text> | ||
| </div> | ||
| ) : ( | ||
| <div className={styles.layout}> | ||
| {/* LEFT PANEL - List */} | ||
| <div className={styles.listPanel}> | ||
| <div className={styles.filterBar}> | ||
| <Field style={{ flexGrow: 1 }}> | ||
| <Input | ||
| placeholder="Suche nach ID, Titel oder Beschreibung..." | ||
| value={searchTerm} | ||
| onChange={(e, data) => setSearchTerm(data.value)} | ||
| contentBefore={<Search20Regular />} | ||
| data-testid="ticket-search" | ||
| /> | ||
| </Field> | ||
| <Field style={{ minWidth: '150px' }}> | ||
| <Select | ||
| value={priorityFilter} | ||
| onChange={(e, data) => setPriorityFilter(data.value)} | ||
| data-testid="filter-priority" | ||
| > | ||
| <option value="all">Alle Prioritäten</option> | ||
| <option value="Critical">Critical</option> | ||
| <option value="High">High</option> | ||
| <option value="Medium">Medium</option> | ||
| <option value="Low">Low</option> | ||
| </Select> | ||
| </Field> | ||
| </div> | ||
|
|
||
| <div className={styles.gridContainer}> | ||
| <DataGrid | ||
| items={filteredTickets} | ||
| columns={columns} | ||
| sortable | ||
| getRowId={(item) => item.id} | ||
| > | ||
| <DataGridHeader> | ||
| <DataGridRow> | ||
| {({ renderHeaderCell }) => ( | ||
| <DataGridHeaderCell>{renderHeaderCell()}</DataGridHeaderCell> | ||
| )} | ||
| </DataGridRow> | ||
| </DataGridHeader> | ||
| <DataGridBody> | ||
| {({ item, rowId }) => ( | ||
| <DataGridRow | ||
| key={rowId} | ||
| onClick={() => handleRowClick(item)} | ||
| className={selectedTicket?.id === item.id ? styles.selectedRow : ''} | ||
| style={{ cursor: 'pointer' }} | ||
| data-testid={`ticket-row-${item.id}`} | ||
| > | ||
| {({ renderCell }) => <DataGridCell>{renderCell(item)}</DataGridCell>} | ||
| </DataGridRow> | ||
| )} | ||
| </DataGridBody> | ||
| </DataGrid> | ||
| </div> | ||
| </div> | ||
|
|
||
| {/* RIGHT PANEL - Detail */} | ||
| <div className={styles.detailPanel}> | ||
| {selectedTicket ? ( | ||
| <div className={styles.detailContent}> | ||
| <div> | ||
| <Text size={600} weight="semibold"> | ||
| {selectedTicket.title} | ||
| </Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Ticket ID</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.id}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Status</Text> | ||
| <Badge appearance="outline">{selectedTicket.status}</Badge> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Priorität</Text> | ||
| <Badge appearance={getPriorityBadge(selectedTicket.priority)}> | ||
| {selectedTicket.priority} | ||
| </Badge> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Reporter</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.reporter}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Zugewiesen an</Text> | ||
| <Text className={styles.detailValue}> | ||
| {selectedTicket.assignee || 'Nicht zugewiesen'} | ||
| </Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Erstellt am</Text> | ||
| <Text className={styles.detailValue}>{formatDate(selectedTicket.createdAt)}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Aktualisiert am</Text> | ||
| <Text className={styles.detailValue}>{formatDate(selectedTicket.updatedAt)}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Beschreibung</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.description}</Text> | ||
| </div> | ||
|
|
||
| {selectedTicket.escalationNeeded && ( | ||
| <MessageBar intent="warning"> | ||
| <MessageBarBody> | ||
| <AlertUrgent20Regular /> Eskalation erforderlich | ||
| </MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
|
|
||
| {ticketDecisions[selectedTicket.id] && ( | ||
| <MessageBar intent={ticketDecisions[selectedTicket.id] === 'GOOD' ? 'success' : 'error'}> | ||
| <MessageBarBody> | ||
| {ticketDecisions[selectedTicket.id] === 'GOOD' ? ( | ||
| <><Checkmark24Regular /> Als GOOD markiert</> | ||
| ) : ( | ||
| <><Warning24Regular /> Zur Eskalation markiert</> | ||
| )} | ||
| </MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
| </div> | ||
| ) : ( | ||
| <div className={styles.emptyDetail}> | ||
| <Text size={400}>Wählen Sie ein Ticket aus der Liste aus</Text> | ||
| </div> | ||
| )} | ||
| </div> | ||
| </div> | ||
| )} | ||
|
|
||
| {/* FOOTER */} | ||
| <div className={styles.footer}> | ||
| <div style={{ gridColumn: '1 / 3' }}> |
The inline styles used for layout (lines 291, 311, 315, 464) break the established pattern of using makeStyles with FluentUI tokens. The codebase convention (seen in TaskList.jsx, Dashboard.jsx, and other components) is to define all styles via useStyles and reference them via className. Inline styles make theming inconsistent and harder to maintain. Move these to the useStyles definition block as proper style classes.
| /** | ||
| * TicketsWithoutAnAssignee Component | ||
| * | ||
| * Master-detail view for tickets without an assignee | ||
| * Displays unassigned tickets in a list with detail panel | ||
| * | ||
| * Following principles: | ||
| * - Pure functions for data transformations (calculations) | ||
| * - Side effects isolated in event handlers (actions) | ||
| * - Master-detail layout with split view | ||
| */ | ||
|
|
||
| import { | ||
| Badge, | ||
| Button, | ||
| DataGrid, | ||
| DataGridBody, | ||
| DataGridCell, | ||
| DataGridHeader, | ||
| DataGridHeaderCell, | ||
| DataGridRow, | ||
| Field, | ||
| Input, | ||
| MessageBar, | ||
| MessageBarBody, | ||
| Select, | ||
| Spinner, | ||
| TableCellLayout, | ||
| Text, | ||
| createTableColumn, | ||
| makeStyles, | ||
| tokens | ||
| } from '@fluentui/react-components' | ||
| import { | ||
| AlertUrgent20Regular, | ||
| Checkmark24Regular, | ||
| PlayCircle24Regular, | ||
| Search20Regular, | ||
| Warning24Regular, | ||
| } from '@fluentui/react-icons' | ||
| import { useState } from 'react' | ||
| import { getQATickets } from '../../services/api' | ||
|
|
||
| const useStyles = makeStyles({ | ||
| container: { | ||
| padding: tokens.spacingVerticalL, | ||
| }, | ||
| header: { | ||
| backgroundColor: tokens.colorNeutralBackground1, | ||
| padding: tokens.spacingVerticalL, | ||
| borderBottom: `1px solid ${tokens.colorNeutralStroke1}`, | ||
| }, | ||
| title: { | ||
| fontSize: tokens.fontSizeHero700, | ||
| fontWeight: tokens.fontWeightSemibold, | ||
| }, | ||
| layout: { | ||
| display: 'grid', | ||
| gridTemplateColumns: '1fr 1.2fr', | ||
| gap: tokens.spacingHorizontalL, | ||
| height: 'calc(100vh - 280px)', | ||
| padding: tokens.spacingVerticalL, | ||
| }, | ||
| listPanel: { | ||
| display: 'flex', | ||
| flexDirection: 'column', | ||
| gap: tokens.spacingVerticalM, | ||
| }, | ||
| filterBar: { | ||
| display: 'flex', | ||
| gap: tokens.spacingHorizontalS, | ||
| padding: tokens.spacingVerticalM, | ||
| backgroundColor: tokens.colorNeutralBackground1, | ||
| borderRadius: tokens.borderRadiusMedium, | ||
| }, | ||
| gridContainer: { | ||
| border: `1px solid ${tokens.colorNeutralStroke1}`, | ||
| borderRadius: tokens.borderRadiusMedium, | ||
| backgroundColor: tokens.colorNeutralBackground1, | ||
| overflow: 'auto', | ||
| flexGrow: 1, | ||
| }, | ||
| detailPanel: { | ||
| border: `1px solid ${tokens.colorNeutralStroke1}`, | ||
| borderRadius: tokens.borderRadiusMedium, | ||
| backgroundColor: tokens.colorNeutralBackground1, | ||
| padding: tokens.spacingVerticalL, | ||
| display: 'flex', | ||
| flexDirection: 'column', | ||
| overflow: 'auto', | ||
| }, | ||
| detailContent: { | ||
| flexGrow: 1, | ||
| display: 'flex', | ||
| flexDirection: 'column', | ||
| gap: tokens.spacingVerticalL, | ||
| }, | ||
| detailField: { | ||
| display: 'flex', | ||
| flexDirection: 'column', | ||
| gap: tokens.spacingVerticalXS, | ||
| }, | ||
| detailLabel: { | ||
| fontWeight: tokens.fontWeightSemibold, | ||
| color: tokens.colorNeutralForeground3, | ||
| fontSize: tokens.fontSizeBase200, | ||
| }, | ||
| detailValue: { | ||
| fontSize: tokens.fontSizeBase300, | ||
| }, | ||
| footer: { | ||
| display: 'grid', | ||
| gridTemplateColumns: '1fr auto auto', | ||
| gap: tokens.spacingHorizontalM, | ||
| padding: tokens.spacingVerticalL, | ||
| borderTop: `1px solid ${tokens.colorNeutralStroke1}`, | ||
| backgroundColor: tokens.colorNeutralBackground1, | ||
| }, | ||
| footerAction: { | ||
| gridColumn: '3', | ||
| }, | ||
| emptyDetail: { | ||
| display: 'flex', | ||
| alignItems: 'center', | ||
| justifyContent: 'center', | ||
| height: '100%', | ||
| color: tokens.colorNeutralForeground3, | ||
| }, | ||
| selectedRow: { | ||
| backgroundColor: tokens.colorNeutralBackground1Selected, | ||
| }, | ||
| }) | ||
|
|
||
| // ============================================================================ | ||
| // CALCULATIONS - Pure functions | ||
| // ============================================================================ | ||
|
|
||
| function formatDate(isoString) { | ||
| const date = new Date(isoString) | ||
| return date.toLocaleDateString('de-DE') + ' ' + date.toLocaleTimeString('de-DE', { hour: '2-digit', minute: '2-digit' }) | ||
| } | ||
|
|
||
| function getPriorityBadge(priority) { | ||
| const appearances = { | ||
| Critical: 'important', | ||
| High: 'important', | ||
| Medium: 'informative', | ||
| Low: 'subtle', | ||
| } | ||
| return appearances[priority] || 'subtle' | ||
| } | ||
|
|
||
| function filterTickets(tickets, searchTerm, priorityFilter) { | ||
| let filtered = tickets | ||
|
|
||
| if (searchTerm) { | ||
| const term = searchTerm.toLowerCase() | ||
| filtered = filtered.filter( | ||
| (ticket) => | ||
| ticket.id.toLowerCase().includes(term) || | ||
| ticket.title.toLowerCase().includes(term) || | ||
| ticket.description.toLowerCase().includes(term) | ||
| ) | ||
| } | ||
|
|
||
| if (priorityFilter && priorityFilter !== 'all') { | ||
| filtered = filtered.filter((ticket) => ticket.priority === priorityFilter) | ||
| } | ||
|
|
||
| return filtered | ||
| } | ||
|
|
||
| // ============================================================================ | ||
| // COMPONENT | ||
| // ============================================================================ | ||
|
|
||
| export default function TicketsWithoutAnAssignee() { | ||
| const styles = useStyles() | ||
|
|
||
| // State | ||
| const [tickets, setTickets] = useState([]) | ||
| const [selectedTicket, setSelectedTicket] = useState(null) | ||
| const [searchTerm, setSearchTerm] = useState('') | ||
| const [priorityFilter, setPriorityFilter] = useState('all') | ||
| const [reminderMessage, setReminderMessage] = useState(null) | ||
| const [loading, setLoading] = useState(false) | ||
| const [error, setError] = useState(null) | ||
| const [hasStarted, setHasStarted] = useState(false) | ||
| const [ticketDecisions, setTicketDecisions] = useState({}) | ||
|
|
||
| // Calculations | ||
| const filteredTickets = filterTickets(tickets, searchTerm, priorityFilter) | ||
|
|
||
| // Columns for DataGrid | ||
| const columns = [ | ||
| createTableColumn({ | ||
| columnId: 'id', | ||
| compare: (a, b) => a.id.localeCompare(b.id), | ||
| renderHeaderCell: () => 'ID', | ||
| renderCell: (item) => ( | ||
| <TableCellLayout> | ||
| <Text weight="semibold">{item.id}</Text> | ||
| </TableCellLayout> | ||
| ), | ||
| }), | ||
| createTableColumn({ | ||
| columnId: 'title', | ||
| compare: (a, b) => a.title.localeCompare(b.title), | ||
| renderHeaderCell: () => 'Titel', | ||
| renderCell: (item) => ( | ||
| <TableCellLayout truncate title={item.title}> | ||
| {item.title} | ||
| </TableCellLayout> | ||
| ), | ||
| }), | ||
| createTableColumn({ | ||
| columnId: 'priority', | ||
| compare: (a, b) => a.priority.localeCompare(b.priority), | ||
| renderHeaderCell: () => 'Priorität', | ||
| renderCell: (item) => ( | ||
| <TableCellLayout> | ||
| <Badge appearance={getPriorityBadge(item.priority)}>{item.priority}</Badge> | ||
| </TableCellLayout> | ||
| ), | ||
| }), | ||
| createTableColumn({ | ||
| columnId: 'status', | ||
| compare: (a, b) => a.status.localeCompare(b.status), | ||
| renderHeaderCell: () => 'Status', | ||
| renderCell: (item) => ( | ||
| <TableCellLayout> | ||
| <Badge appearance="outline">{item.status}</Badge> | ||
| </TableCellLayout> | ||
| ), | ||
| }), | ||
| ] | ||
|
|
||
| // ============================================================================ | ||
| // ACTIONS - Event handlers | ||
| // ============================================================================ | ||
|
|
||
| const handleRowClick = (ticket) => { | ||
| setSelectedTicket(ticket) | ||
| setReminderMessage(null) | ||
| } | ||
|
|
||
| const handleReminder = () => { | ||
| if (selectedTicket) { | ||
| setReminderMessage(`Erinnerung für Ticket ${selectedTicket.id} wurde gesendet.`) | ||
| // TODO: Backend integration - send reminder API call | ||
| } | ||
| } | ||
|
|
||
| const handleStartAnalysis = async () => { | ||
| setLoading(true) | ||
| setError(null) | ||
| try { | ||
| const response = await getQATickets() | ||
| setTickets(response.tickets) | ||
| setHasStarted(true) | ||
| } catch (err) { | ||
| setError(err.message || 'Fehler beim Laden der Tickets') | ||
| } finally { | ||
| setLoading(false) | ||
| } | ||
| } | ||
|
|
||
| const handleMarkAsGood = () => { | ||
| if (selectedTicket) { | ||
| setTicketDecisions(prev => ({ ...prev, [selectedTicket.id]: 'GOOD' })) | ||
| setReminderMessage(`Ticket ${selectedTicket.id} als GOOD markiert.`) | ||
| // TODO: Backend integration - update ticket status | ||
| } | ||
| } | ||
|
|
||
| const handleMarkAsEscalate = () => { | ||
| if (selectedTicket) { | ||
| setTicketDecisions(prev => ({ ...prev, [selectedTicket.id]: 'ESCALATE' })) | ||
| setReminderMessage(`Ticket ${selectedTicket.id} zur Eskalation markiert.`) | ||
| // TODO: Backend integration - escalate ticket | ||
| } | ||
| } | ||
|
|
||
| // ============================================================================ | ||
| // RENDER | ||
| // ============================================================================ | ||
|
|
||
| return ( | ||
| <div className={styles.container}> | ||
| <div className={styles.header}> | ||
| <div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}> | ||
| <Text className={styles.title}>Tickets Without Assignee</Text> | ||
| <Button | ||
| appearance="primary" | ||
| icon={<PlayCircle24Regular />} | ||
| onClick={handleStartAnalysis} | ||
| disabled={loading} | ||
| data-testid="start-unassigned-button" | ||
| > | ||
| {loading ? 'Lädt...' : 'Load Unassigned Tickets'} | ||
| </Button> | ||
| </div> | ||
| {error && ( | ||
| <MessageBar intent="error" style={{ marginTop: tokens.spacingVerticalM }}> | ||
| <MessageBarBody>{error}</MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
| </div> | ||
|
|
||
| {loading ? ( | ||
| <div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', height: '400px' }}> | ||
| <Spinner size="large" label="Loading unassigned tickets..." /> | ||
| </div> | ||
| ) : !hasStarted ? ( | ||
| <div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', height: '400px', flexDirection: 'column', gap: tokens.spacingVerticalM }}> | ||
| <Text size={500} weight="semibold">Ready to Load Unassigned Tickets</Text> | ||
| <Text size={400}>Click "Load Unassigned Tickets" to fetch tickets without an assignee</Text> | ||
| </div> | ||
| ) : ( | ||
| <div className={styles.layout}> | ||
| {/* LEFT PANEL - List */} | ||
| <div className={styles.listPanel}> | ||
| <div className={styles.filterBar}> | ||
| <Field style={{ flexGrow: 1 }}> | ||
| <Input | ||
| placeholder="Suche nach ID, Titel oder Beschreibung..." | ||
| value={searchTerm} | ||
| onChange={(e, data) => setSearchTerm(data.value)} | ||
| contentBefore={<Search20Regular />} | ||
| data-testid="unassigned-ticket-search" | ||
| /> | ||
| </Field> | ||
| <Field style={{ minWidth: '150px' }}> | ||
| <Select | ||
| value={priorityFilter} | ||
| onChange={(e, data) => setPriorityFilter(data.value)} | ||
| data-testid="unassigned-filter-priority" | ||
| > | ||
| <option value="all">Alle Prioritäten</option> | ||
| <option value="Critical">Critical</option> | ||
| <option value="High">High</option> | ||
| <option value="Medium">Medium</option> | ||
| <option value="Low">Low</option> | ||
| </Select> | ||
| </Field> | ||
| </div> | ||
|
|
||
| <div className={styles.gridContainer}> | ||
| <DataGrid | ||
| items={filteredTickets} | ||
| columns={columns} | ||
| sortable | ||
| getRowId={(item) => item.id} | ||
| > | ||
| <DataGridHeader> | ||
| <DataGridRow> | ||
| {({ renderHeaderCell }) => ( | ||
| <DataGridHeaderCell>{renderHeaderCell()}</DataGridHeaderCell> | ||
| )} | ||
| </DataGridRow> | ||
| </DataGridHeader> | ||
| <DataGridBody> | ||
| {({ item, rowId }) => ( | ||
| <DataGridRow | ||
| key={rowId} | ||
| onClick={() => handleRowClick(item)} | ||
| className={selectedTicket?.id === item.id ? styles.selectedRow : ''} | ||
| style={{ cursor: 'pointer' }} | ||
| data-testid={`unassigned-ticket-row-${item.id}`} | ||
| > | ||
| {({ renderCell }) => <DataGridCell>{renderCell(item)}</DataGridCell>} | ||
| </DataGridRow> | ||
| )} | ||
| </DataGridBody> | ||
| </DataGrid> | ||
| </div> | ||
| </div> | ||
|
|
||
| {/* RIGHT PANEL - Detail */} | ||
| <div className={styles.detailPanel}> | ||
| {selectedTicket ? ( | ||
| <div className={styles.detailContent}> | ||
| <div> | ||
| <Text size={600} weight="semibold"> | ||
| {selectedTicket.title} | ||
| </Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Ticket ID</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.id}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Status</Text> | ||
| <Badge appearance="outline">{selectedTicket.status}</Badge> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Priorität</Text> | ||
| <Badge appearance={getPriorityBadge(selectedTicket.priority)}> | ||
| {selectedTicket.priority} | ||
| </Badge> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Reporter</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.reporter}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Zugewiesen an</Text> | ||
| <Text className={styles.detailValue}> | ||
| {selectedTicket.assignee || 'Nicht zugewiesen'} | ||
| </Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Erstellt am</Text> | ||
| <Text className={styles.detailValue}>{formatDate(selectedTicket.createdAt)}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Aktualisiert am</Text> | ||
| <Text className={styles.detailValue}>{formatDate(selectedTicket.updatedAt)}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Beschreibung</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.description}</Text> | ||
| </div> | ||
|
|
||
| {selectedTicket.escalationNeeded && ( | ||
| <MessageBar intent="warning"> | ||
| <MessageBarBody> | ||
| <AlertUrgent20Regular /> Eskalation erforderlich | ||
| </MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
|
|
||
| {ticketDecisions[selectedTicket.id] && ( | ||
| <MessageBar intent={ticketDecisions[selectedTicket.id] === 'GOOD' ? 'success' : 'error'}> | ||
| <MessageBarBody> | ||
| {ticketDecisions[selectedTicket.id] === 'GOOD' ? ( | ||
| <><Checkmark24Regular /> Als GOOD markiert</> | ||
| ) : ( | ||
| <><Warning24Regular /> Zur Eskalation markiert</> | ||
| )} | ||
| </MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
| </div> | ||
| ) : ( | ||
| <div className={styles.emptyDetail}> | ||
| <Text size={400}>Wählen Sie ein Ticket aus der Liste aus</Text> | ||
| </div> | ||
| )} | ||
| </div> | ||
| </div> | ||
| )} | ||
|
|
||
| {/* FOOTER */} | ||
| <div className={styles.footer}> | ||
| <div style={{ gridColumn: '1 / 3' }}> | ||
| {reminderMessage && ( | ||
| <MessageBar intent="success"> | ||
| <MessageBarBody>{reminderMessage}</MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
| </div> | ||
| <Button | ||
| appearance="primary" | ||
| icon={<Checkmark24Regular />} | ||
| onClick={handleMarkAsGood} | ||
| disabled={!selectedTicket} | ||
| style={{ backgroundColor: tokens.colorPaletteGreenBackground3, color: 'white' }} | ||
| data-testid="unassigned-mark-good-button" | ||
| > | ||
| GOOD | ||
| </Button> | ||
| <Button | ||
| appearance="primary" | ||
| icon={<Warning24Regular />} | ||
| onClick={handleMarkAsEscalate} | ||
| disabled={!selectedTicket} | ||
| style={{ backgroundColor: tokens.colorPaletteRedBackground3, color: 'white' }} | ||
| className={styles.footerAction} | ||
| data-testid="unassigned-mark-escalate-button" | ||
| > | ||
| ESCALATE | ||
| </Button> | ||
| </div> | ||
| </div> | ||
| ) | ||
| } |
The two ticket components (TicketList and TicketsWithoutAnAssignee) are almost identical, containing extensive code duplication. Both share the same styles, helper functions (formatDate, getPriorityBadge, filterTickets), state management, and UI structure. This violates the DRY principle and creates a maintenance burden: any bug fix or enhancement must be applied twice. Consider extracting the shared functionality into a reusable component, or creating a base component that both variants extend with their specific differences (title text, API endpoint, data-testid values).
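As a starting point for the extraction, the GOOD/ESCALATE decision handlers that both components duplicate could be hoisted into one shared factory. This is a minimal sketch only; `makeDecisionHandlers` and its file location are illustrative names, not part of the PR:

```javascript
// Hypothetical shared module (e.g. ticketDecisions.js) used by both ticket
// components. It takes the two React state setters each component already has
// and returns the duplicated decision handlers.
function makeDecisionHandlers(setTicketDecisions, setReminderMessage) {
  // Shared logic: record the decision for the ticket, then surface a
  // status message for the footer MessageBar.
  const decide = (ticket, decision, message) => {
    if (!ticket) return
    setTicketDecisions((prev) => ({ ...prev, [ticket.id]: decision }))
    setReminderMessage(message(ticket))
  }
  return {
    markAsGood: (ticket) =>
      decide(ticket, 'GOOD', (t) => `Ticket ${t.id} als GOOD markiert.`),
    markAsEscalate: (ticket) =>
      decide(ticket, 'ESCALATE', (t) => `Ticket ${t.id} zur Eskalation markiert.`),
  }
}
```

Each component would then call `makeDecisionHandlers(setTicketDecisions, setReminderMessage)` once and wire the returned handlers to its buttons, so a future change to the decision flow lands in one place.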
| const handleStartAnalysis = async () => { | ||
| setLoading(true) | ||
| setError(null) | ||
| try { | ||
| const response = await getQATickets() | ||
| setTickets(response.tickets) | ||
| setHasStarted(true) | ||
| } catch (err) { | ||
| setError(err.message || 'Fehler beim Laden der Tickets') | ||
| } finally { | ||
| setLoading(false) | ||
| } | ||
| } | ||
|
|
||
| const handleMarkAsGood = () => { | ||
| if (selectedTicket) { | ||
| setTicketDecisions(prev => ({ ...prev, [selectedTicket.id]: 'GOOD' })) | ||
| setReminderMessage(`Ticket ${selectedTicket.id} als GOOD markiert.`) | ||
| // TODO: Backend integration - update ticket status | ||
| } | ||
| } | ||
|
|
||
| const handleMarkAsEscalate = () => { | ||
| if (selectedTicket) { | ||
| setTicketDecisions(prev => ({ ...prev, [selectedTicket.id]: 'ESCALATE' })) | ||
| setReminderMessage(`Ticket ${selectedTicket.id} zur Eskalation markiert.`) | ||
| // TODO: Backend integration - escalate ticket | ||
| } | ||
| } | ||
|
|
||
| // ============================================================================ | ||
| // RENDER | ||
| // ============================================================================ | ||
|
|
||
| return ( | ||
| <div className={styles.container}> | ||
| <div className={styles.header}> | ||
| <div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}> | ||
| <Text className={styles.title}>Unassigned QA Starten</Text> | ||
| <Button | ||
| appearance="primary" | ||
| icon={<PlayCircle24Regular />} | ||
| onClick={handleStartAnalysis} | ||
| disabled={loading} | ||
| data-testid="start-analysis-button" | ||
| > | ||
| {loading ? 'Lädt...' : 'Start QA Analyse'} | ||
| </Button> | ||
| </div> | ||
| {error && ( | ||
| <MessageBar intent="error" style={{ marginTop: tokens.spacingVerticalM }}> | ||
| <MessageBarBody>{error}</MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
| </div> | ||
|
|
||
| {loading ? ( | ||
| <div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', height: '400px' }}> | ||
| <Spinner size="large" label="Lade QA Tickets..." /> | ||
| </div> | ||
| ) : !hasStarted ? ( | ||
| <div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', height: '400px', flexDirection: 'column', gap: tokens.spacingVerticalM }}> | ||
| <Text size={500} weight="semibold">Bereit für QA Analyse</Text> | ||
| <Text size={400}>Klicken Sie auf "Start QA Analyse", um Tickets zu laden</Text> | ||
| </div> | ||
| ) : ( | ||
| <div className={styles.layout}> | ||
| {/* LEFT PANEL - List */} | ||
| <div className={styles.listPanel}> | ||
| <div className={styles.filterBar}> | ||
| <Field style={{ flexGrow: 1 }}> | ||
| <Input | ||
| placeholder="Suche nach ID, Titel oder Beschreibung..." | ||
| value={searchTerm} | ||
| onChange={(e, data) => setSearchTerm(data.value)} | ||
| contentBefore={<Search20Regular />} | ||
| data-testid="ticket-search" | ||
| /> | ||
| </Field> | ||
| <Field style={{ minWidth: '150px' }}> | ||
| <Select | ||
| value={priorityFilter} | ||
| onChange={(e, data) => setPriorityFilter(data.value)} | ||
| data-testid="filter-priority" | ||
| > | ||
| <option value="all">Alle Prioritäten</option> | ||
| <option value="Critical">Critical</option> | ||
| <option value="High">High</option> | ||
| <option value="Medium">Medium</option> | ||
| <option value="Low">Low</option> | ||
| </Select> | ||
| </Field> | ||
| </div> | ||
|
|
||
| <div className={styles.gridContainer}> | ||
| <DataGrid | ||
| items={filteredTickets} | ||
| columns={columns} | ||
| sortable | ||
| getRowId={(item) => item.id} | ||
| > | ||
| <DataGridHeader> | ||
| <DataGridRow> | ||
| {({ renderHeaderCell }) => ( | ||
| <DataGridHeaderCell>{renderHeaderCell()}</DataGridHeaderCell> | ||
| )} | ||
| </DataGridRow> | ||
| </DataGridHeader> | ||
| <DataGridBody> | ||
| {({ item, rowId }) => ( | ||
| <DataGridRow | ||
| key={rowId} | ||
| onClick={() => handleRowClick(item)} | ||
| className={selectedTicket?.id === item.id ? styles.selectedRow : ''} | ||
| style={{ cursor: 'pointer' }} | ||
| data-testid={`ticket-row-${item.id}`} | ||
| > | ||
| {({ renderCell }) => <DataGridCell>{renderCell(item)}</DataGridCell>} | ||
| </DataGridRow> | ||
| )} | ||
| </DataGridBody> | ||
| </DataGrid> | ||
| </div> | ||
| </div> | ||
|
|
||
| {/* RIGHT PANEL - Detail */} | ||
| <div className={styles.detailPanel}> | ||
| {selectedTicket ? ( | ||
| <div className={styles.detailContent}> | ||
| <div> | ||
| <Text size={600} weight="semibold"> | ||
| {selectedTicket.title} | ||
| </Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Ticket ID</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.id}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Status</Text> | ||
| <Badge appearance="outline">{selectedTicket.status}</Badge> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Priorität</Text> | ||
| <Badge appearance={getPriorityBadge(selectedTicket.priority)}> | ||
| {selectedTicket.priority} | ||
| </Badge> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Reporter</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.reporter}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Zugewiesen an</Text> | ||
| <Text className={styles.detailValue}> | ||
| {selectedTicket.assignee || 'Nicht zugewiesen'} | ||
| </Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Erstellt am</Text> | ||
| <Text className={styles.detailValue}>{formatDate(selectedTicket.createdAt)}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Aktualisiert am</Text> | ||
| <Text className={styles.detailValue}>{formatDate(selectedTicket.updatedAt)}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Beschreibung</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.description}</Text> | ||
| </div> | ||
|
|
||
| {selectedTicket.escalationNeeded && ( | ||
| <MessageBar intent="warning"> | ||
| <MessageBarBody> | ||
| <AlertUrgent20Regular /> Eskalation erforderlich | ||
| </MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
|
|
||
| {ticketDecisions[selectedTicket.id] && ( | ||
| <MessageBar intent={ticketDecisions[selectedTicket.id] === 'GOOD' ? 'success' : 'error'}> | ||
| <MessageBarBody> | ||
| {ticketDecisions[selectedTicket.id] === 'GOOD' ? ( | ||
| <><Checkmark24Regular /> Als GOOD markiert</> | ||
| ) : ( | ||
| <><Warning24Regular /> Zur Eskalation markiert</> | ||
| )} | ||
| </MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
| </div> | ||
| ) : ( | ||
| <div className={styles.emptyDetail}> | ||
| <Text size={400}>Wählen Sie ein Ticket aus der Liste aus</Text> | ||
| </div> | ||
| )} | ||
| </div> | ||
| </div> | ||
| )} | ||
|
|
||
| {/* FOOTER */} | ||
| <div className={styles.footer}> | ||
| <div style={{ gridColumn: '1 / 3' }}> | ||
| {reminderMessage && ( | ||
| <MessageBar intent="success"> | ||
| <MessageBarBody>{reminderMessage}</MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
| </div> | ||
| <Button | ||
| appearance="primary" | ||
| icon={<Checkmark24Regular />} | ||
| onClick={handleMarkAsGood} | ||
| disabled={!selectedTicket} | ||
| style={{ backgroundColor: tokens.colorPaletteGreenBackground3, color: 'white' }} | ||
| data-testid="mark-good-button" | ||
| > | ||
| GOOD | ||
| </Button> | ||
| <Button | ||
| appearance="primary" | ||
| icon={<Warning24Regular />} | ||
| onClick={handleMarkAsEscalate} | ||
| disabled={!selectedTicket} | ||
| style={{ backgroundColor: tokens.colorPaletteRedBackground3, color: 'white' }} | ||
| className={styles.footerAction} | ||
| data-testid="mark-escalate-button" | ||
| > | ||
| ESCALATE | ||
| </Button> | ||
| </div> | ||
| </div> | ||
| ) | ||
| } |
The new ticket functionality lacks end-to-end test coverage. The repository has comprehensive E2E tests for tasks (tests/e2e/app.spec.js) and Ollama features, but none for the new QA tickets and unassigned tickets views. Since the coding guidelines emphasize maintaining test coverage where it exists, tests should verify: loading tickets via the "Start" buttons, filtering by priority, searching, selecting tickets, and marking them as GOOD/ESCALATE. The data-testid attributes (start-analysis-button, ticket-search, filter-priority, etc.) are already in place, making the tests straightforward to implement.
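The scenarios above map directly onto the data-testid attributes the PR already ships. A small sketch of that mapping, with a selector helper an E2E spec could reuse (the testid values come from the PR; `byTestId` and the scenario list are illustrative, not existing code):

```javascript
// Build a CSS selector from a data-testid attribute value.
const byTestId = (id) => `[data-testid="${id}"]`

// Scenario matrix a hypothetical tests/e2e/tickets.spec.js could walk through
// for the QA tickets view; the unassigned view uses the same ids with an
// "unassigned-" prefix (e.g. start-unassigned-button).
const qaTicketScenarios = [
  { action: 'load tickets', selector: byTestId('start-analysis-button') },
  { action: 'search by id/title/description', selector: byTestId('ticket-search') },
  { action: 'filter by priority', selector: byTestId('filter-priority') },
  { action: 'mark selected ticket GOOD', selector: byTestId('mark-good-button') },
  { action: 'mark selected ticket ESCALATE', selector: byTestId('mark-escalate-button') },
]
```

In a Playwright spec each entry becomes a `page.locator(selector)` interaction followed by an assertion on the resulting MessageBar or grid state.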
| <div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}> | ||
| <Text className={styles.title}>Tickets Without Assignee</Text> | ||
| <Button | ||
| appearance="primary" | ||
| icon={<PlayCircle24Regular />} | ||
| onClick={handleStartAnalysis} | ||
| disabled={loading} | ||
| data-testid="start-unassigned-button" | ||
| > | ||
| {loading ? 'Lädt...' : 'Load Unassigned Tickets'} | ||
| </Button> | ||
| </div> | ||
| {error && ( | ||
| <MessageBar intent="error" style={{ marginTop: tokens.spacingVerticalM }}> | ||
| <MessageBarBody>{error}</MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
| </div> | ||
|
|
||
| {loading ? ( | ||
| <div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', height: '400px' }}> | ||
| <Spinner size="large" label="Loading unassigned tickets..." /> | ||
| </div> | ||
| ) : !hasStarted ? ( | ||
| <div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', height: '400px', flexDirection: 'column', gap: tokens.spacingVerticalM }}> | ||
| <Text size={500} weight="semibold">Ready to Load Unassigned Tickets</Text> | ||
| <Text size={400}>Click "Load Unassigned Tickets" to fetch tickets without an assignee</Text> | ||
| </div> | ||
| ) : ( | ||
| <div className={styles.layout}> | ||
| {/* LEFT PANEL - List */} | ||
| <div className={styles.listPanel}> | ||
| <div className={styles.filterBar}> | ||
| <Field style={{ flexGrow: 1 }}> | ||
| <Input | ||
| placeholder="Suche nach ID, Titel oder Beschreibung..." | ||
| value={searchTerm} | ||
| onChange={(e, data) => setSearchTerm(data.value)} | ||
| contentBefore={<Search20Regular />} | ||
| data-testid="unassigned-ticket-search" | ||
| /> | ||
| </Field> | ||
| <Field style={{ minWidth: '150px' }}> | ||
| <Select | ||
| value={priorityFilter} | ||
| onChange={(e, data) => setPriorityFilter(data.value)} | ||
| data-testid="unassigned-filter-priority" | ||
| > | ||
| <option value="all">Alle Prioritäten</option> | ||
| <option value="Critical">Critical</option> | ||
| <option value="High">High</option> | ||
| <option value="Medium">Medium</option> | ||
| <option value="Low">Low</option> | ||
| </Select> | ||
| </Field> | ||
| </div> | ||
|
|
||
| <div className={styles.gridContainer}> | ||
| <DataGrid | ||
| items={filteredTickets} | ||
| columns={columns} | ||
| sortable | ||
| getRowId={(item) => item.id} | ||
| > | ||
| <DataGridHeader> | ||
| <DataGridRow> | ||
| {({ renderHeaderCell }) => ( | ||
| <DataGridHeaderCell>{renderHeaderCell()}</DataGridHeaderCell> | ||
| )} | ||
| </DataGridRow> | ||
| </DataGridHeader> | ||
| <DataGridBody> | ||
| {({ item, rowId }) => ( | ||
| <DataGridRow | ||
| key={rowId} | ||
| onClick={() => handleRowClick(item)} | ||
| className={selectedTicket?.id === item.id ? styles.selectedRow : ''} | ||
| style={{ cursor: 'pointer' }} | ||
| data-testid={`unassigned-ticket-row-${item.id}`} | ||
| > | ||
| {({ renderCell }) => <DataGridCell>{renderCell(item)}</DataGridCell>} | ||
| </DataGridRow> | ||
| )} | ||
| </DataGridBody> | ||
| </DataGrid> | ||
| </div> | ||
| </div> | ||
|
|
||
| {/* RIGHT PANEL - Detail */} | ||
| <div className={styles.detailPanel}> | ||
| {selectedTicket ? ( | ||
| <div className={styles.detailContent}> | ||
| <div> | ||
| <Text size={600} weight="semibold"> | ||
| {selectedTicket.title} | ||
| </Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Ticket ID</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.id}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Status</Text> | ||
| <Badge appearance="outline">{selectedTicket.status}</Badge> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Priorität</Text> | ||
| <Badge appearance={getPriorityBadge(selectedTicket.priority)}> | ||
| {selectedTicket.priority} | ||
| </Badge> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Reporter</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.reporter}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Zugewiesen an</Text> | ||
| <Text className={styles.detailValue}> | ||
| {selectedTicket.assignee || 'Nicht zugewiesen'} | ||
| </Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Erstellt am</Text> | ||
| <Text className={styles.detailValue}>{formatDate(selectedTicket.createdAt)}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Aktualisiert am</Text> | ||
| <Text className={styles.detailValue}>{formatDate(selectedTicket.updatedAt)}</Text> | ||
| </div> | ||
|
|
||
| <div className={styles.detailField}> | ||
| <Text className={styles.detailLabel}>Beschreibung</Text> | ||
| <Text className={styles.detailValue}>{selectedTicket.description}</Text> | ||
| </div> | ||
|
|
||
| {selectedTicket.escalationNeeded && ( | ||
| <MessageBar intent="warning"> | ||
| <MessageBarBody> | ||
| <AlertUrgent20Regular /> Eskalation erforderlich | ||
| </MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
|
|
||
| {ticketDecisions[selectedTicket.id] && ( | ||
| <MessageBar intent={ticketDecisions[selectedTicket.id] === 'GOOD' ? 'success' : 'error'}> | ||
| <MessageBarBody> | ||
| {ticketDecisions[selectedTicket.id] === 'GOOD' ? ( | ||
| <><Checkmark24Regular /> Als GOOD markiert</> | ||
| ) : ( | ||
| <><Warning24Regular /> Zur Eskalation markiert</> | ||
| )} | ||
| </MessageBarBody> | ||
| </MessageBar> | ||
| )} | ||
| </div> | ||
| ) : ( | ||
| <div className={styles.emptyDetail}> | ||
| <Text size={400}>Wählen Sie ein Ticket aus der Liste aus</Text> | ||
| </div> | ||
| )} | ||
| </div> | ||
| </div> | ||
| )} | ||
|
|
||
| {/* FOOTER */} | ||
| <div className={styles.footer}> | ||
| <div style={{ gridColumn: '1 / 3' }}> |
The inline styles used for layout (lines 291, 311, 315, 464) break the established pattern of defining styles with makeStyles and FluentUI tokens. The codebase convention (see TaskList.jsx, Dashboard.jsx, and other components) is to declare all styles via useStyles and reference them via className. Inline styles make theming inconsistent and harder to maintain; move them into the useStyles definition block as named style classes.
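Collected in one place, the repeated inline layout styles reduce to a handful of named classes. In the actual component these would go into the existing makeStyles block with FluentUI tokens; plain objects and illustrative class names are used here so the sketch stands alone:

```javascript
// Inline layout styles from the component, consolidated. Hardcoded values
// (e.g. '400px') mirror the current inline styles and would become token
// references inside makeStyles.
const layoutStyles = {
  // Header row: title on the left, start button on the right.
  headerRow: {
    display: 'flex',
    justifyContent: 'space-between',
    alignItems: 'center',
  },
  // Centered spinner while tickets load.
  centeredState: {
    display: 'flex',
    justifyContent: 'center',
    alignItems: 'center',
    height: '400px',
  },
  // Centered stacked "ready" prompt before the first load.
  centeredStateStacked: {
    display: 'flex',
    justifyContent: 'center',
    alignItems: 'center',
    flexDirection: 'column',
    height: '400px',
  },
  // Footer message area spanning the first two grid columns.
  footerMessage: { gridColumn: '1 / 3' },
}
```

With these in useStyles, the JSX replaces every `style={{ … }}` with `className={styles.headerRow}` and so on, and theming stays consistent with the rest of the app.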
| @@ -0,0 +1,20 @@ | |||
| Features Canvas C5: Tickets “Assigned without Assignee” reminden | |||
The directory is named "ticker_reminder", but the feature is a ticket reminder for support tickets. "ticker" is a typo for "ticket"; rename the directory to keep it consistent with the feature name and avoid confusion.
| load_dotenv() | ||
|
|
||
| # Ensure operations register before we request LangChain tools | ||
| import operations # noqa: F401 |
Import of 'operations' is not used.
…ious components Signed-off-by: Andre Bossard <anbossar@microsoft.com>
…hat component and related API calls, add AgentChat component for task management; update frontend routing and backend operations accordingly. Signed-off-by: Andre Bossard <anbossar@microsoft.com>
Pull request overview
Copilot reviewed 17 out of 18 changed files in this pull request and generated 8 comments.
Comments suppressed due to low confidence (2)
frontend/src/features/agent/AgentChat.jsx:328
- Missing test coverage: The AgentChat component (renamed/refactored from OllamaChat) lacks E2E tests. The previous Ollama integration had comprehensive E2E tests (removed in tests/e2e/ollama.spec.js) covering chat interactions, model selection, and error handling. The new Azure OpenAI agent integration should have similar test coverage to ensure the chat interface works correctly with the agent backend.
frontend/src/features/tickets/TicketsWithoutAnAssignee.jsx:247 - Unused variable handleReminder.
const handleReminder = () => {
| const handleReminder = () => { | ||
| if (selectedTicket) { | ||
| setReminderMessage(`Erinnerung für Ticket ${selectedTicket.id} wurde gesendet.`) | ||
| // TODO: Backend integration - send reminder API call | ||
| } | ||
| } | ||
|
|
Unused function: handleReminder is defined but never called anywhere in the component. It appears to be dead code, perhaps intended for a UI action that was later replaced by handleMarkAsGood and handleMarkAsEscalate. Remove it, or implement the corresponding UI button if reminder functionality is still needed.
| const handleReminder = () => { | |
| if (selectedTicket) { | |
| setReminderMessage(`Erinnerung für Ticket ${selectedTicket.id} wurde gesendet.`) | |
| // TODO: Backend integration - send reminder API call | |
| } | |
| } |
| export default function TicketList() { | ||
| const styles = useStyles() | ||
|
|
||
| // State | ||
| const [tickets, setTickets] = useState([]) | ||
| const [selectedTicket, setSelectedTicket] = useState(null) | ||
| const [ticketDetail, setTicketDetail] = useState(null) | ||
| const [searchTerm, setSearchTerm] = useState('') | ||
| const [priorityFilter, setPriorityFilter] = useState('all') | ||
| const [statusFilter, setStatusFilter] = useState('all') | ||
| const [loading, setLoading] = useState(true) | ||
| const [detailLoading, setDetailLoading] = useState(false) | ||
| const [error, setError] = useState(null) | ||
| const [activeTab, setActiveTab] = useState('details') | ||
|
|
||
| // Calculations | ||
| const filteredTickets = filterTickets(tickets, searchTerm, priorityFilter, statusFilter) | ||
|
|
||
| // Columns for DataGrid | ||
| const columns = [ | ||
| createTableColumn({ | ||
| columnId: 'summary', | ||
| compare: (a, b) => (a.summary || '').localeCompare(b.summary || ''), | ||
| renderHeaderCell: () => 'Summary', | ||
| renderCell: (item) => ( | ||
| <TableCellLayout truncate title={item.summary}> | ||
| <Text weight="semibold" style={{ fontSize: tokens.fontSizeBase200 }}> | ||
| {item.summary} | ||
| </Text> | ||
| </TableCellLayout> | ||
| ), | ||
| }), | ||
| createTableColumn({ | ||
| columnId: 'status', | ||
| compare: (a, b) => (a.status || '').localeCompare(b.status || ''), | ||
| renderHeaderCell: () => 'Status', | ||
| renderCell: (item) => ( | ||
| <TableCellLayout> | ||
| <Badge | ||
| appearance={getStatusAppearance(item.status)} | ||
| className={styles.statusBadge} | ||
| > | ||
| {item.status?.replace('_', ' ')} | ||
| </Badge> | ||
| </TableCellLayout> | ||
| ), | ||
| }), | ||
| createTableColumn({ | ||
| columnId: 'priority', | ||
| compare: (a, b) => (a.priority || '').localeCompare(b.priority || ''), | ||
| renderHeaderCell: () => 'Priority', | ||
| renderCell: (item) => ( | ||
| <TableCellLayout> | ||
| <Badge appearance={getPriorityAppearance(item.priority)}> | ||
| {item.priority} | ||
| </Badge> | ||
        </TableCellLayout>
      ),
    }),
    createTableColumn({
      columnId: 'created',
      compare: (a, b) => new Date(a.created_at) - new Date(b.created_at),
      renderHeaderCell: () => 'Created',
      renderCell: (item) => (
        <TableCellLayout>
          <Text style={{ fontSize: tokens.fontSizeBase200, color: tokens.colorNeutralForeground3 }}>
            {formatRelativeTime(item.created_at)}
          </Text>
        </TableCellLayout>
      ),
    }),
  ]

  // ============================================================================
  // ACTIONS - Event handlers & effects
  // ============================================================================

  // Load tickets on mount
  useEffect(() => {
    loadTickets()
  }, [])

  async function loadTickets() {
    setLoading(true)
    setError(null)
    try {
      const response = await fetch('/api/tickets?page_size=100')
      if (!response.ok) throw new Error('Failed to load tickets')
      const data = await response.json()
      setTickets(data.tickets || [])
    } catch (err) {
      setError(err.message || 'Error loading tickets')
    } finally {
      setLoading(false)
    }
  }

  async function loadTicketDetail(ticketId) {
    setDetailLoading(true)
    try {
      const response = await fetch(`/api/tickets/${ticketId}`)
      if (!response.ok) throw new Error('Failed to load ticket details')
      const data = await response.json()
      setTicketDetail(data)
    } catch (err) {
      console.error('Error loading ticket detail:', err)
      setTicketDetail(null)
    } finally {
      setDetailLoading(false)
    }
  }

  const handleRowClick = (ticket) => {
    setSelectedTicket(ticket)
    setActiveTab('details')
    loadTicketDetail(ticket.id)
  }

  // ============================================================================
  // RENDER HELPERS
  // ============================================================================

  function renderDetailField(label, value, fullWidth = false) {
    return (
      <div className={styles.detailField} style={fullWidth ? { gridColumn: '1 / -1' } : undefined}>
        <Text className={styles.detailLabel}>{label}</Text>
        <Text className={styles.detailValue}>{value || '—'}</Text>
      </div>
    )
  }

  function renderWorklog(log) {
    return (
      <div key={log.id} className={styles.worklogItem}>
        <div className={styles.worklogHeader}>
          <Badge appearance="outline">{log.log_type}</Badge>
          <div className={styles.worklogMeta}>
            <span><Person20Regular /> {log.author}</span>
            <span><Clock20Regular /> {formatDate(log.created_at)}</span>
            {log.time_spent_minutes > 0 && (
              <span>{log.time_spent_minutes} min</span>
            )}
          </div>
        </div>
        <Text weight="semibold" style={{ display: 'block', marginBottom: tokens.spacingVerticalXS }}>
          {log.summary}
        </Text>
        {log.details && (
          <Text style={{ color: tokens.colorNeutralForeground2 }}>{log.details}</Text>
        )}
      </div>
    )
  }

  // ============================================================================
  // RENDER
  // ============================================================================

  if (loading) {
    return (
      <div className={styles.loadingContainer}>
        <Spinner size="large" />
        <Text>Loading tickets...</Text>
      </div>
    )
  }

  if (error) {
    return (
      <div className={styles.loadingContainer}>
        <MessageBar intent="error">
          <MessageBarBody>{error}</MessageBarBody>
        </MessageBar>
      </div>
    )
  }

  const detail = ticketDetail?.ticket

  return (
    <div className={styles.container}>
      {/* HEADER */}
      <div className={styles.header}>
        <div className={styles.headerLeft}>
          <Text className={styles.title}>
            <DocumentBulletList20Regular />
            Support Tickets
          </Text>
          <span className={styles.ticketCount}>{tickets.length} tickets</span>
        </div>
        <Badge appearance="outline" icon={<ArrowClockwise20Regular />}>
          Last updated: {formatRelativeTime(new Date().toISOString())}
        </Badge>
      </div>

      {/* MAIN LAYOUT */}
      <div className={styles.layout}>
        {/* LEFT PANEL - List */}
        <div className={styles.listPanel}>
          <div className={styles.filterBar}>
            <Field style={{ flexGrow: 1 }}>
              <Input
                placeholder="Search tickets..."
                value={searchTerm}
                onChange={(e, data) => setSearchTerm(data.value)}
                contentBefore={<Search20Regular />}
                data-testid="ticket-search"
              />
            </Field>
            <Field style={{ minWidth: '120px' }}>
              <Select
                value={statusFilter}
                onChange={(e, data) => setStatusFilter(data.value)}
                data-testid="filter-status"
              >
                <option value="all">All Status</option>
                <option value="new">New</option>
                <option value="assigned">Assigned</option>
                <option value="in_progress">In Progress</option>
                <option value="pending">Pending</option>
                <option value="resolved">Resolved</option>
                <option value="closed">Closed</option>
              </Select>
            </Field>
            <Field style={{ minWidth: '120px' }}>
              <Select
                value={priorityFilter}
                onChange={(e, data) => setPriorityFilter(data.value)}
                data-testid="filter-priority"
              >
                <option value="all">All Priority</option>
                <option value="critical">Critical</option>
                <option value="high">High</option>
                <option value="medium">Medium</option>
                <option value="low">Low</option>
              </Select>
            </Field>
          </div>

          <div className={styles.gridContainer}>
            <DataGrid
              items={filteredTickets}
              columns={columns}
              sortable
              getRowId={(item) => item.id}
            >
              <DataGridHeader>
                <DataGridRow>
                  {({ renderHeaderCell }) => (
                    <DataGridHeaderCell>{renderHeaderCell()}</DataGridHeaderCell>
                  )}
                </DataGridRow>
              </DataGridHeader>
              <DataGridBody>
                {({ item, rowId }) => (
                  <DataGridRow
                    key={rowId}
                    onClick={() => handleRowClick(item)}
                    className={selectedTicket?.id === item.id ? styles.selectedRow : ''}
                    style={{ cursor: 'pointer' }}
                    data-testid={`ticket-row-${item.id}`}
                  >
                    {({ renderCell }) => <DataGridCell>{renderCell(item)}</DataGridCell>}
                  </DataGridRow>
                )}
              </DataGridBody>
            </DataGrid>
          </div>
        </div>

        {/* RIGHT PANEL - Detail */}
        <div className={styles.detailPanel}>
          {detailLoading ? (
            <div className={styles.emptyDetail}>
              <Spinner size="medium" />
              <Text>Loading ticket details...</Text>
            </div>
          ) : detail ? (
            <>
              {/* Detail Header */}
              <div className={styles.detailHeader}>
                <Text className={styles.detailTitle}>{detail.summary}</Text>
                <div className={styles.detailMeta}>
                  <Badge appearance={getStatusAppearance(detail.status)} className={styles.statusBadge}>
                    {detail.status?.replace('_', ' ')}
                  </Badge>
                  <Badge appearance={getPriorityAppearance(detail.priority)}>
                    {detail.priority}
                  </Badge>
                  <span className={styles.metaItem}>
                    <Calendar20Regular /> {formatDate(detail.created_at)}
                  </span>
                  <span className={styles.metaItem}>
                    <Person20Regular /> {detail.requester_name}
                  </span>
                </div>
              </div>

              {/* Tabs */}
              <TabList
                selectedValue={activeTab}
                onTabSelect={(_, d) => setActiveTab(d.value)}
                style={{ padding: `0 ${tokens.spacingHorizontalM}`, borderBottom: `1px solid ${tokens.colorNeutralStroke1}` }}
              >
                <Tab value="details" icon={<Info20Regular />}>Details</Tab>
                <Tab value="worklogs" icon={<Document20Regular />}>
                  Worklogs ({ticketDetail?.work_logs?.length || 0})
                </Tab>
              </TabList>

              {/* Content */}
              <div className={styles.detailContent}>
                {activeTab === 'details' && (
                  <>
                    {/* Description */}
                    <div className={styles.section}>
                      <Text className={styles.sectionTitle}>
                        <Document20Regular /> Description
                      </Text>
                      <div className={styles.descriptionBox}>
                        {detail.description || 'No description provided.'}
                      </div>
                    </div>

                    {/* Requester Info */}
                    <div className={styles.section}>
                      <Text className={styles.sectionTitle}>
                        <Person20Regular /> Requester Information
                      </Text>
                      <div className={styles.fieldGrid}>
                        {renderDetailField('Name', detail.requester_name)}
                        {renderDetailField('Email', detail.requester_email)}
                        {renderDetailField('Phone', detail.requester_phone)}
                        {renderDetailField('Department', detail.requester_department)}
                        {renderDetailField('Company', detail.requester_company)}
                      </div>
                    </div>

                    {/* Location & Assignment */}
                    <div className={styles.section}>
                      <Text className={styles.sectionTitle}>
                        <Location20Regular /> Location & Assignment
                      </Text>
                      <div className={styles.fieldGrid}>
                        {renderDetailField('City', detail.city)}
                        {renderDetailField('Site', detail.site)}
                        {renderDetailField('Desk Location', detail.desk_location)}
                        {renderDetailField('Assignee', detail.assignee)}
                        {renderDetailField('Assigned Group', detail.assigned_group)}
                        {renderDetailField('Support Org', detail.support_organization)}
                      </div>
                    </div>

                    {/* Technical Details */}
                    <div className={styles.section}>
                      <Text className={styles.sectionTitle}>
                        <Tag20Regular /> Technical Details
                      </Text>
                      <div className={styles.fieldGrid}>
                        {renderDetailField('Service', detail.service)}
                        {renderDetailField('Product', detail.product_name)}
                        {renderDetailField('Manufacturer', detail.manufacturer)}
                        {renderDetailField('Model', detail.model_version)}
                        {renderDetailField('CI Name', detail.ci_name)}
                        {renderDetailField('Incident Type', detail.incident_type)}
                        {renderDetailField('Impact', detail.impact)}
                        {renderDetailField('Urgency', detail.urgency)}
                      </div>
                    </div>

                    {/* Categories */}
                    {(detail.operational_category_tier1 || detail.product_category_tier1) && (
                      <div className={styles.section}>
                        <Text className={styles.sectionTitle}>
                          <Building20Regular /> Categories
                        </Text>
                        <div className={styles.fieldGrid}>
                          {renderDetailField('Operational Tier 1', detail.operational_category_tier1)}
                          {renderDetailField('Operational Tier 2', detail.operational_category_tier2)}
                          {renderDetailField('Operational Tier 3', detail.operational_category_tier3)}
                          {renderDetailField('Product Tier 1', detail.product_category_tier1)}
                          {renderDetailField('Product Tier 2', detail.product_category_tier2)}
                          {renderDetailField('Product Tier 3', detail.product_category_tier3)}
                        </div>
                      </div>
                    )}

                    {/* Resolution */}
                    {detail.resolution && (
                      <div className={styles.section}>
                        <Text className={styles.sectionTitle}>Resolution</Text>
                        <div className={styles.descriptionBox}>{detail.resolution}</div>
                      </div>
                    )}
                  </>
                )}

                {activeTab === 'worklogs' && (
                  <div className={styles.section}>
                    {ticketDetail?.work_logs?.length > 0 ? (
                      ticketDetail.work_logs.map(renderWorklog)
                    ) : (
                      <div className={styles.emptyDetail}>
                        <Text>No work logs available for this ticket.</Text>
                      </div>
                    )}
                  </div>
                )}
              </div>
            </>
          ) : (
            <div className={styles.emptyDetail}>
              <DocumentBulletList20Regular style={{ fontSize: '48px', opacity: 0.5 }} />
              <Text size={400}>Select a ticket to view details</Text>
            </div>
          )}
        </div>
      </div>
    </div>
  )
}
Missing test coverage: The new TicketList component lacks E2E tests. The existing test suite (tests/e2e/app.spec.js) has comprehensive coverage for Dashboard and TaskList, including user interactions, filtering, and data loading. The TicketList component has similar functionality (master-detail view, filtering, data fetching) that should be tested to maintain consistent test coverage across the application.
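As a starting point that is cheaper than a full Playwright spec, the filtering behavior those E2E tests would cover can be isolated into a pure helper and unit-tested directly. This is a sketch only: `filterTickets` is a hypothetical extraction mirroring the component's `searchTerm`, `statusFilter`, and `priorityFilter` state, not code that exists in the PR.

```javascript
// Hypothetical pure helper mirroring TicketList's filter state.
// Keeping it outside the component makes it trivially testable
// before (or alongside) the E2E layer.
function filterTickets(tickets, { searchTerm = '', statusFilter = 'all', priorityFilter = 'all' } = {}) {
  const term = searchTerm.trim().toLowerCase()
  return tickets.filter((t) => {
    if (statusFilter !== 'all' && t.status !== statusFilter) return false
    if (priorityFilter !== 'all' && t.priority !== priorityFilter) return false
    // Match against summary and requester name, case-insensitively
    if (term && !`${t.summary ?? ''} ${t.requester_name ?? ''}`.toLowerCase().includes(term)) return false
    return true
  })
}
```

The component could then call `filterTickets(tickets, { searchTerm, statusFilter, priorityFilter })`, and the E2E suite would only need to verify the wiring, not the predicate logic.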
| print(" REST API: http://localhost:5001/api/*") | ||
| print(" MCP JSON-RPC: http://localhost:5001/mcp") | ||
| print() | ||
| print("💡 Port 5001 (macOS AirPlay uses 5000)") | ||
| print("=" * 70) | ||
|
|
||
| app.run(debug=True, host="0.0.0.0", port=5001) |
Duplicate code detected: Lines 494-501 repeat the same print statements and app.run() call that already exist on lines 488-494. This means the server startup message will be printed twice and app.run() will be called twice, which could cause unexpected behavior.
| print(" REST API: http://localhost:5001/api/*") | |
| print(" MCP JSON-RPC: http://localhost:5001/mcp") | |
| print() | |
| print("💡 Port 5001 (macOS AirPlay uses 5000)") | |
| print("=" * 70) | |
| app.run(debug=True, host="0.0.0.0", port=5001) |
  try {
    return await fetchJSON(`${API_BASE_URL}/agents/run`, {
      method: "POST",
      body: JSON.stringify({ prompt, agent_type: "task_assistant" }),
    });
  } catch (error) {
    // Provide helpful message for support channel
    throw new Error("CHECK THE SUPPORT CHANNEL");
  }
Unhelpful error message: The catch block replaces all errors with "CHECK THE SUPPORT CHANNEL", which hides the actual error details from users and developers. This makes debugging difficult and doesn't follow the pattern used elsewhere in the codebase where actual error messages are preserved. Consider either removing the try-catch (letting fetchJSON errors propagate) or rethrowing with a more informative message that includes the original error.
Suggested change — drop the try/catch and let `fetchJSON` errors propagate:

    return fetchJSON(`${API_BASE_URL}/agents/run`, {
      method: "POST",
      body: JSON.stringify({ prompt, agent_type: "task_assistant" }),
    });
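If a support-channel hint is still wanted, a middle ground is to wrap and rethrow while keeping the original failure attached. This is an illustrative sketch, not the codebase's implementation: `runAgentPrompt` is a hypothetical wrapper, and the fetch function is injected so the pattern is visible in isolation.

```javascript
// Illustrative sketch: keep a user-facing hint but preserve the
// underlying error message, and chain the original error via `cause`
// (supported in Node 16.9+ and modern browsers).
async function runAgentPrompt(fetchImpl, url, prompt) {
  try {
    return await fetchImpl(url, {
      method: 'POST',
      body: JSON.stringify({ prompt, agent_type: 'task_assistant' }),
    })
  } catch (error) {
    throw new Error(
      `Agent request failed: ${error.message} (see the support channel)`,
      { cause: error },
    )
  }
}
```

Callers can still show a friendly banner, while logs and dev tools retain the real failure for debugging.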
function formatDate(isoString) {
  if (!isoString) return '—'
  const date = new Date(isoString)
  return date.toLocaleDateString('de-CH', {
    day: '2-digit',
    month: '2-digit',
    year: 'numeric'
  }) + ' ' + date.toLocaleTimeString('de-CH', {
    hour: '2-digit',
    minute: '2-digit'
  })
}
Inconsistent date formatting: This component uses 'de-CH' locale for date formatting, while TicketsWithoutAnAssignee.jsx uses 'de-DE' locale. This inconsistency can lead to different date formats across the application (e.g., different separators or formats). Consider extracting a shared date formatting utility to ensure consistency across all ticket-related components.
Suggested change:

    const TICKET_DATE_LOCALE = 'de-DE'

    function formatDate(isoString) {
      if (!isoString) return '—'
      const date = new Date(isoString)
      return (
        date.toLocaleDateString(TICKET_DATE_LOCALE, {
          day: '2-digit',
          month: '2-digit',
          year: 'numeric',
        }) +
        ' ' +
        date.toLocaleTimeString(TICKET_DATE_LOCALE, {
          hour: '2-digit',
          minute: '2-digit',
        })
      )
    }
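Taken a step further, the locale constant could live in one shared module so every ticket component formats dates identically. The sketch below is an assumption, not existing code: the module path (e.g. `src/utils/dates.js`), the `formatTicketDate` name, and the optional `timeZone` parameter (useful for deterministic tests) are all illustrative.

```javascript
// Hypothetical shared utility so TicketList, TicketsWithoutAnAssignee,
// etc. all format dates with the same locale.
const TICKET_DATE_LOCALE = 'de-DE'

function formatTicketDate(isoString, { timeZone } = {}) {
  if (!isoString) return '—'
  const date = new Date(isoString)
  if (Number.isNaN(date.getTime())) return '—' // guard against malformed input
  const datePart = date.toLocaleDateString(TICKET_DATE_LOCALE, {
    day: '2-digit', month: '2-digit', year: 'numeric', timeZone,
  })
  const timePart = date.toLocaleTimeString(TICKET_DATE_LOCALE, {
    hour: '2-digit', minute: '2-digit', timeZone,
  })
  return `${datePart} ${timePart}`
}
```

In production code the `timeZone` option would normally be omitted so users see local time; passing `{ timeZone: 'UTC' }` in tests pins the output.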
def _map_mcp_ticket_to_frontend(mcp_ticket: dict) -> dict:
    """
    Pure function: Map MCP ticket schema to frontend expected format.

    MCP fields -> Frontend fields:
    - summary -> title
    - requester_name -> reporter
    - created_at -> createdAt (camelCase)
    - updated_at -> updatedAt
    - priority (lowercase) -> Priority (capitalized)
    - status (lowercase) -> status (capitalized)
    """
    priority_raw = mcp_ticket.get("priority", "medium")
    priority = priority_raw.capitalize() if priority_raw else "Medium"

    status_raw = mcp_ticket.get("status", "new")
    status = status_raw.replace("_", " ").title() if status_raw else "New"

    # Derive escalationNeeded from priority
    escalation_needed = priority in ("Critical", "High")

    return {
        "id": str(mcp_ticket.get("id", "")),
        "title": mcp_ticket.get("summary", ""),
        "description": mcp_ticket.get("description", ""),
        "status": status,
        "priority": priority,
        "assignee": mcp_ticket.get("assignee"),
        "reporter": mcp_ticket.get("requester_name", ""),
        "createdAt": mcp_ticket.get("created_at", ""),
        "updatedAt": mcp_ticket.get("updated_at", ""),
        "escalationNeeded": escalation_needed,
    }
Missing input validation: The _map_mcp_ticket_to_frontend function doesn't validate that the mcp_ticket parameter is a dictionary or handle None values. If the ticket data structure is malformed (e.g., mcp_ticket is None or not a dict), the .get() calls will fail. Consider adding a type check at the beginning or using a try-except to handle malformed data gracefully.
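The mapper itself is Python, but the same defensive-default idea also applies on the consuming side, where the frontend renders whatever `/api/qa-tickets` returns. The sketch below is illustrative JavaScript, not code from the PR: `normalizeTicket` and its field set are assumptions that mirror the mapped payload (`id`, `title`, `priority`, `escalationNeeded`).

```javascript
// Illustrative client-side guard: normalize a ticket payload so a
// null or malformed item cannot crash rendering. Mirrors the
// defensive defaults the review asks for in the backend mapper.
function normalizeTicket(raw) {
  const t = raw && typeof raw === 'object' ? raw : {}
  const priorityRaw =
    typeof t.priority === 'string' && t.priority ? t.priority : 'medium'
  const priority = priorityRaw[0].toUpperCase() + priorityRaw.slice(1).toLowerCase()
  return {
    id: String(t.id ?? ''),
    title: typeof t.title === 'string' ? t.title : '',
    priority,
    escalationNeeded: ['Critical', 'High'].includes(priority),
  }
}
```

Whether the check lives in the Python mapper, the JS consumer, or both is a judgment call; either way, malformed items degrade to safe defaults instead of raising.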
# Import unified operation system

# Agent service for Azure OpenAI LangGraph agents
from agents import AgentRequest, AgentResponse, agent_service
Import of 'AgentResponse' is not used.
Suggested change:

    from agents import AgentRequest, agent_service
# Import Pydantic models and service
from tasks import Task, TaskCreate, TaskFilter, TaskService, TaskStats, TaskUpdate
Import of 'Task' is not used.
Import of 'TaskService' is not used.
Import of 'TaskStats' is not used.
Suggested change:

    from tasks import TaskCreate, TaskFilter, TaskUpdate
* Add comprehensive Ubuntu installation guide for 22.04 and 24.04 LTS
Co-authored-by: abossard <86611+abossard@users.noreply.github.com>
* Fix footer to say "Target Platforms" instead of falsely claiming "Tested On"
Co-authored-by: abossard <86611+abossard@users.noreply.github.com>
* Simplify guide: Ubuntu 22.04 only, one method per tool, Python 3.13, Node 20 LTS
Co-authored-by: abossard <86611+abossard@users.noreply.github.com>
* Update setup for Python virtual environment and improve documentation
- Change virtual environment creation to use `.venv` at the repo root
- Update activation commands in various documentation files
- Modify setup and start scripts to reflect new virtual environment structure
- Ensure consistency across installation guides and troubleshooting documentation
* Add Chromium installation instructions and verification step to Ubuntu guide
* Add launch configuration for Python Quart backend and frontend development
* Update .gitignore to include vscode-chromium-profile and exclude launch.json
* Add VSCode extensions recommendations for Python and JavaScript development
* Update LEARNING.md
* feat: Integrate Ollama LLM for AI chat functionality (#3)
- Added `httpx` dependency for async HTTP requests to Ollama API.
- Implemented OllamaChat component in frontend for user interaction with the LLM.
- Created backend service for handling chat requests and model listing.
- Updated setup scripts to check for Ollama installation and pull required models.
- Added API endpoints for chat and model listing in the backend.
- Implemented end-to-end tests for Ollama integration, covering model listing and chat functionality.
- Enhanced error handling and user feedback in the chat interface.
* Andre prepare day 4 (#4)
* feat: Implement MCP JSON-RPC 2.0 handler and refactor API decorators
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* fix: Update API decorators for optional HTTP path and clean up imports
docs: Enhance LEVEL_UP.md with Copilot chat testing instructions
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Migrate task management to SQLModel ORM and update related documentation
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Implement LangGraph agent with Azure OpenAI integration and extend API decorators for tool conversion
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Refactor AgentService to use OpenAI SDK and enhance tool integration
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Refactor AgentService to integrate LangGraph and replace OpenAI SDK usage
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* fix: Correct docstring formatting in tool_wrapper for consistency
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Update documentation structure for Day 4 lessons and announcements
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
---------
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* refactor: remove agent service initialization and related endpoints (#7)
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* OpenAI and ticket feed (#9)
* refactor: update Azure OpenAI configuration and streamline environment variable usage
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: integrate FastMCP client for external tool support and add tests
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: implement Ticket MCP integration with FastMCP client and add REST endpoints for ticket management
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
---------
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* Update for support (#10)
* feat: Add QA tickets management with new TicketList component and API integration
* feat: Add initial diagram for project planning in explain.drawio
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Add RULES.md to document project guidelines
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Add ticket models and reminder functionality for "Assigned without Assignee"
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* refactor: Rearrange imports and enhance startup logging for REST API and MCP JSON-RPC
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Enhance ticket handling by adding mapping functions and updating QA tickets endpoint
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Add TicketsWithoutAnAssignee component to display unassigned tickets
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* refactor: Clean up code formatting and improve ticket handling in various components
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* Refactor Ollama integration to use Azure OpenAI agent; remove OllamaChat component and related API calls, add AgentChat component for task management; update frontend routing and backend operations accordingly.
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Enhance AgentService with detailed logging for MCP tool calls and agent execution
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
---------
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* refactor: update architecture documentation and improve environment variable handling in agents.py (#11)
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* Work on csv (#12)
* feat: Implement CSV Ticket Viewer
- Refactor App component to replace existing features with CSV Ticket Table.
- Add CSVTicketTable component for displaying tickets from CSV data source.
- Introduce API functions for fetching CSV ticket fields, tickets, and statistics.
- Create CSV data source in backend to handle loading and processing of CSV files.
- Enhance AgentChat component to display error details from API responses.
- Update styles and layout for improved user experience in ticket viewing.
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* refactor: Update import formatting and enhance status badge display in CSVTicketTable component
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Add Nivo chart visualizations for CSV tickets and enhance documentation
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
---------
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* Ai baby workbench (#13)
* refactor: update configuration from Azure OpenAI to OpenAI and enhance agent service initialization
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* refactor: update AgentChat component for OpenAI integration and enhance markdown support
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
---------
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Implement Usecase Demo Agent orchestration and UI components (#14)
- Added backend orchestration for usecase demo agent runs in `usecase_demo.py`.
- Created documentation for CSV ticket guidance in `CSV_AI_GUIDANCE.md`.
- Developed frontend components for usecase demo description and page in `UsecaseDemoDescription.jsx` and `UsecaseDemoPage.jsx`.
- Introduced demo definitions for usecase demos in `demoDefinitions.js`.
- Implemented result views for structured table and markdown in `resultViews.jsx`.
- Added utility functions for handling usecase demo runs in `usecaseDemoUtils.js`.
- Included a network diagram in `net.drawio`.
* Optimize agent runtime and SLA demo flow (#15)
* feat: Enhance SLA Breach Risk functionality and UI integration
- Increased max_length for agent prompt to 5000
- Added fields parameter to list and search tickets for selective data retrieval
- Updated timeout for usecase demo agent to 300 seconds
- Introduced SLA Breach Risk demo with detailed prompt and ticket analysis
- Added E2E tests for SLA Breach Risk demo page
* feat: add incident_id field to ticket model and related components
- Added incident_id to the ticket mapping in app.py.
- Updated csv_data.py to include incident_id when converting CSV rows to tickets.
- Modified operations.py to define incident_id as a CSV ticket field.
- Enhanced the Ticket model in tickets.py to include incident_id.
- Updated usecase_demo.py to accommodate changes in ticket structure.
- Modified CSVTicketTable.jsx to display incident_id in the ticket table.
- Updated TicketList.jsx to filter and display incident_id in the ticket list.
- Enhanced TicketsWithoutAnAssignee.jsx to include incident_id in ticket operations.
- Updated UsecaseDemoPage.jsx to pass matchingTickets to the render function.
- Enhanced demoDefinitions.js to improve prompts for use case demos.
- Added SLA Breach Overview result view in resultViews.jsx to visualize SLA status of tickets.
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* refactor: clean up import statements across multiple components
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* refactor: standardize import statement formatting in resultViews.jsx
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: add SLA breach reporting functionality and related API endpoints
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: implement SLA breach report retrieval for unassigned tickets
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
---------
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* fix: update API proxy target from localhost to 127.0.0.1 in vite.config.js (#16)
Co-authored-by: luca Spring <luca.spring@bit.admin.ch>
* Agent fabric (#17)
* feat: Implement Tool Registry and Workbench Integration
- Added ToolRegistry class to manage LangChain StructuredTool instances.
- Created workbench_integration.py to wire tools into the Agent Workbench.
- Developed WorkbenchPage component for agent management in the frontend.
- Implemented backend tests for tool registration and agent operations.
- Added end-to-end tests for agent creation and deletion in the UI.
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Refactor Agent Workbench to Agent Fabric and enhance tool metadata handling
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Add required input handling to agent definitions and update UI components
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Enhance Markdown output handling in agent workflow and update UI components
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
---------
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* Agent fabric (#18)
* feat: Implement Tool Registry and Workbench Integration
- Added ToolRegistry class to manage LangChain StructuredTool instances.
- Created workbench_integration.py to wire tools into the Agent Workbench.
- Developed WorkbenchPage component for agent management in the frontend.
- Implemented backend tests for tool registration and agent operations.
- Added end-to-end tests for agent creation and deletion in the UI.
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Refactor Agent Workbench to Agent Fabric and enhance tool metadata handling
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Add required input handling to agent definitions and update UI components
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Enhance Markdown output handling in agent workflow and update UI components
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Enhance ticket handling by adding incident ID support and improve UI components for better user experience
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: Add tool invocation logging with latency tracking in WorkbenchService
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
---------
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* Kba draft review fixes (#20)
* implemented kba-draft
* - removed test files
- cleaned up structure
- adjusted README.md
- created learning_mechanism.md plan
- design fixes
* feat: add search questions generation with database migration and UI
Database & Backend:
- Add search_questions column migration in operations.py (ALTER TABLE for existing databases)
- Add /api/kba/drafts/{id}/replace endpoint in app.py
- Fix backward compatibility in kba_service.py (_table_to_draft, _draft_to_table)
- Add search questions generation to replace_draft workflow
- Fix NULL constraint errors by ensuring empty strings for required fields
- Update related_tickets validation: accept INC + 9-12 digits (was fixed at 12)
Frontend:
- Add Text component import to KBADrafterPage.jsx (fix TypeError)
- Add full-screen blur overlay with centered spinner during KBA generation
- Show overlay for both new draft creation and replacement operations
- Update styles: loadingOverlay with backdrop-filter blur effect
Documentation:
- Update kba_prompts.py: clarify related_tickets format with examples
- Update GENERAL.md: correct related_tickets format specification
Fixes #1 - KBA drafts not loading (missing DB column)
Fixes #2 - Replace endpoint not found (405 error)
Fixes #3 - Ticket ID validation too strict
* view tickets in a popup
* feat(kba-drafter): add ability to reset reviewed KBAs back to draft
- Add "Zurück zu Entwurf" button for reviewed status KBAs
- Add handleUnreview() handler to update status from "reviewed" to "draft"
- Import ArrowUndo24Regular icon for the unreview action
- Allow users to continue editing KBAs after review without deletion
This enables editing of reviewed KBAs that need changes before publishing.
* feat(kba-drafter): add ticket viewer, unreview, status filter, and UI improvements
- Add ticket viewer dialog to display original incident details
* New "Ticket" button in KBA header with DocumentSearch icon
* Modal dialog showing incident data (ID, summary, status, priority, assignee, notes, resolution)
* Backend endpoint /api/csv-tickets/by-incident/<incident_id> for incident ID lookup
* Frontend API function getCSVTicketByIncident()
- Add unreview functionality for reviewed KBAs
* "Zurück zu Entwurf" button with ArrowUndo icon
* Allows resetting reviewed KBAs back to draft status for further editing
- Redesign KBA overview list
* Replace corner delete button with professional overflow menu (⋮)
* Horizontal layout: content left, status badge right-aligned, menu button
* Menu component with delete option
- Add status filter dropdown to KBA overview
* Filter options: All, draft, reviewed, published
* Dropdown in card header for easy filtering
- Align EditableList "Add" button width with input fields
* Use invisible placeholder buttons for exact width matching
* Ensures consistent layout regardless of allowReorder setting
Files modified:
- frontend/src/features/kba-drafter/KBADrafterPage.jsx
- frontend/src/features/kba-drafter/components/EditableList.jsx
- frontend/src/services/api.js
- backend/app.py
* fix(kba): fix draft deletion bug and add collapsible AutoGenSettings
- Fix delete draft error: use response.items instead of response.drafts
- Make AutoGenSettings card collapsible with chevron icon
- Starts collapsed to reduce visual dominance
- Smooth slide-down animation when expanded
- Status badge visible in collapsed header
- Clickable header with keyboard support (Enter key)
* fix(kba): auto-scroll to top when opening draft
When clicking on a draft from the list after scrolling down,
the page now automatically scrolls to the top with a smooth animation.
This ensures users always start at the beginning of the draft content.
* feat: replace browser confirms with custom modal dialogs for unsaved changes
Replace native window.confirm() with ConfirmDialog component for better UX
consistency and modern appearance. Adds centered warning modal when user
attempts to discard unsaved changes (close draft, switch to preview, or
load different draft).
Changes:
- Add unsavedChangesDialogOpen and pendingAction states
- Update toggleEditMode, loadDraft, and handleClose to trigger modal
- Add handleDiscardChanges and handleCancelDiscard handlers
- Add ConfirmDialog with warning intent at end of component
* fix: address code review issues and add KBA drafter e2e tests
Fixes:
- Fix CSV folder case mismatch (CSV -> csv) in app.py and operations.py
- Remove duplicate get_ticket_by_incident_id method in csv_data.py
- Replace inefficient len(session.exec().all()) with SQL COUNT(*) in kba_service.py
- Replace hardcoded placeholder credentials with env var lookups in kba_service.py
- Fix scheduler swallowing exceptions (remove bare raise, return None)
- Add settings reload at start of each scheduler run to fix race condition
- Add generation_warnings field to surface search questions failures to users
- Add schema migration for generation_warnings column
Tests:
- Add 19 Playwright e2e tests for KBA Drafter feature covering:
page load, navigation, LLM health status, draft generation,
draft display, draft list, editing, review workflow,
duplicate handling, and backend API integration
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat: add LiteLLM fallback, Playwright tests, and remove OpenAI hard dependency
- LiteLLM is now the default LLM backend (no .env or API key needed)
- Multistage model fallback chain: claude-sonnet-4 → gpt-4o → gpt-4o-mini
- OpenAI SDK still used when OPENAI_API_KEY is explicitly set
- agents.py and workbench service use ChatLiteLLM when no OpenAI key
- Added csv_ticket_stats and csv_sla_breach_tickets to agent tools
- Added KBA Drafter to Playwright nav tests and menu screenshots
- Added e2e tests: publish, delete, status filter, ticket viewer
- 32 unit tests + 5 live integration tests for LLM service
- Updated .env.example with LiteLLM-first documentation
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
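The multi-stage fallback chain described above can be sketched as a simple try-in-order loop; the model names come from the commit, and `fake_call` is a stand-in for the real LiteLLM completion call:

```python
FALLBACK_CHAIN = ["claude-sonnet-4", "gpt-4o", "gpt-4o-mini"]

def complete_with_fallback(prompt, call, chain=FALLBACK_CHAIN):
    # Try each model in order; the first success wins,
    # otherwise re-raise the last failure.
    last_err = None
    for model in chain:
        try:
            return model, call(model, prompt)
        except Exception as err:
            last_err = err
    raise last_err

def fake_call(model, prompt):
    # Stand-in for a real LiteLLM completion call (assumption).
    if model == "claude-sonnet-4":
        raise RuntimeError("model unavailable")
    return f"{model} answered: {prompt}"
```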
---------
Co-authored-by: SubSonic731 <alessandro.roschi@bit.admin.ch>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Agent workbench v2 (#19)
* Extract agent builder into extensible module with tests
Create backend/agent_builder/ as a standalone, deeply layered module
following Grokking Simplicity (data/calculations/actions separation)
and A Philosophy of Software Design (deep modules).
Structure:
- models/: Pure data (Pydantic/SQLModel) - agent, run, evaluation, chat
- tools/: ToolRegistry, schema converter, MCP adapter
- engine/: Unified ReAct runner, callbacks, prompt builder
- evaluator.py: Success criteria evaluation (mostly calculations)
- persistence/: DB engine setup + repository pattern
- service.py: WorkbenchService (deep module facade)
- chat_service.py: ChatService using shared ReAct engine
- routes.py: Quart Blueprint replacing 200+ lines from app.py
- tests/: 107 tests (unit + integration + E2E)
Key improvements:
- Eliminated duplicate ReAct agent building (was in both agents.py
and agent_workbench/service.py)
- DRY error handling in routes via Blueprint
- Repository pattern isolates DB from business logic
- Pure calculation modules (prompt_builder, schema_converter,
evaluator) are independently testable
- Backward-compatible: agent_workbench/__init__.py shims to new module
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add per-agent LLM config: model, temperature, recursion_limit, max_tokens, output_instructions
Each AgentDefinition now stores configurable LLM parameters:
- model: override service default (e.g. gpt-4o vs gpt-4o-mini)
- temperature: 0.0-2.0 (deterministic to creative)
- recursion_limit: 1-100 max ReAct loop iterations
- max_tokens: cap response length (0 = unlimited)
- output_instructions: custom formatting (replaces default markdown)
Changes:
- models/agent.py: 5 new fields with validation (ge/le bounds)
- persistence/database.py: migrations for existing DBs
- engine/react_runner.py: build_llm accepts temperature+max_tokens
- engine/prompt_builder.py: append_output_instructions for custom formatting
- service.py: _resolve_llm_for_agent builds per-agent LLM when config differs
- routes.py: ui-config v2 exposes llm_config_fields and defaults
- 12 new tests (model validation, CRUD, E2E roundtrip via REST)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add output_schema for type-safe structured output, fix defaults
Changes:
- recursion_limit default: 10 → 3 (most agents finish in 1-3 tool calls)
- max_tokens default: 0 → 4096 (sensible cap instead of unlimited)
- New field: output_schema (JSON Schema stored as JSON in DB)
output_schema is config, not code. You define the expected response
shape as a JSON Schema:
{"type":"object","properties":{"breaches":{"type":"array",...}}}
At runtime this does two things:
1. Injected into system prompt so the LLM knows the expected structure
2. Takes priority over output_instructions and default markdown
Priority chain for output formatting:
output_schema (strict JSON) > output_instructions (free text) > default markdown
128 tests pass (9 new tests for schema handling).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
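The priority chain for output formatting reduces to a small pure function. A sketch under the assumptions stated in the commit (the real resolution lives in prompt_builder/service code and may carry more context):

```python
DEFAULT_MARKDOWN = "Respond in GitHub-flavored markdown."  # placeholder wording

def resolve_output_format(output_schema=None, output_instructions=None):
    # Priority: output_schema (strict JSON) > output_instructions
    # (free text) > default markdown.
    if output_schema:
        return ("json-schema", output_schema)
    if output_instructions:
        return ("instructions", output_instructions)
    return ("markdown", DEFAULT_MARKDOWN)
```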
* Add suggest-schema endpoint and UI button
New endpoint: POST /api/workbench/suggest-schema
Takes agent name, description, system_prompt and asks the LLM to
propose a JSON Schema for the agent's structured output.
Backend:
- service.py: suggest_schema() method - builds a prompt, calls LLM,
parses JSON response (handles markdown fences), falls back to
generic schema on parse failure
- routes.py: POST /api/workbench/suggest-schema route
Frontend:
- api.js: suggestOutputSchema() function
- WorkbenchPage.jsx: output schema textarea + Suggest Schema button
in the create form. Schema is editable JSON, sent as output_schema
on agent creation. Button disabled until name or prompt is filled.
129 tests pass (1 new E2E test for suggest-schema endpoint).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Wire output_schema to LangGraph response_format for SDK-level enforcement
When an agent has output_schema configured, it now does TWO things:
1. Prompt injection (existing) — schema is described in the system prompt
so the LLM understands the expected structure
2. SDK enforcement (new) — schema is passed as response_format to
create_react_agent(), which uses LangGraph's built-in structured
output mechanism (provider-native or tool-based)
At runtime, structured_response from the LangGraph result takes
priority over raw message content. If the agent has no output_schema,
behavior is unchanged (markdown output from final message).
The output pipeline:
output_schema defined → response_format=schema → structured_response → JSON
no output_schema → final message content → markdown (default)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Always use structured_response with default schema
Every agent now always returns structured output via LangGraph's
response_format — no more untyped markdown strings.
Default schema (when no custom output_schema is set):
{
"message": "string (markdown)",
"referenced_tickets": ["string"]
}
This means:
- Plain agents → get {message: '...markdown...', referenced_tickets: [...]}
- Custom schema agents → get whatever schema they define
- Both enforced at SDK level via response_format, not just prompt
Changes:
- prompt_builder.py: DEFAULT_OUTPUT_SCHEMA, resolve_output_schema()
- service.py: always passes effective schema to create_react_agent
- routes.py: ui-config exposes default_output_schema for frontend
- Tests updated (132 pass)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add comprehensive docs with mermaid diagrams, clean up stale docs
New: docs/AGENT_BUILDER.md — full architecture documentation with:
- Architecture diagram (module layers + data flow)
- Sequence diagram (agent run lifecycle)
- Structured output pipeline flowchart
- ER diagram (DB schema)
- Data/Calculations/Actions separation diagram
- Deep modules table
- Extensibility flowchart
- API endpoint reference
- Testing commands
Updated:
- AGENTS_IMPLEMENTATION.md — replaced stale content with summary + pointer
- docs/AGENTS.md — replaced stale architecture with mermaid + pointer
- docs/PROJECT_STRUCTURE.md — added agent_builder/ to tree
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Docs overhaul + remove ~1800 lines of dead code/stale docs
Documentation:
- README.md: Complete rewrite with features table, screenshots, mermaid
architecture diagram, agent builder section, correct tech stack
- PROJECT_STRUCTURE.md: Full rewrite matching actual codebase
- AGENTS.md: Fixed AgentService→WorkbenchService, updated examples
- LEARNING.md: Fixed broken link
Deleted stale docs:
- AGENTS_IMPLEMENTATION.md (was a 3-line redirect stub)
- docs/RULES.md (empty file)
- docs/SQLMODEL_MIGRATION.md (historical, migration complete)
Dead code removed from agents.py (~250 lines):
- MCP client stubs (_mcp_tool_to_langchain, _ensure_ticket_mcp_connection, close)
- Schema helpers only used by dead MCP code (_json_type_to_python, _schema_to_pydantic)
- OpenAI logging callback (duplicated in agent_builder/engine/callbacks.py)
- _build_state_graph learning example (dead code)
- Unused imports (get_langchain_tools, MCPClient, create_model)
Deleted old agent_workbench/ source files (~1030 lines):
- models.py, service.py, evaluator.py, tool_registry.py
- Only __init__.py shim remains for backward compatibility
132 backend tests + 15 Playwright tests pass.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add Playwright tests for suggest-schema and agent chat
New E2E tests in workbench.spec.js:
- 'creates agent with output schema via suggest button' — mocks
/api/workbench/suggest-schema, clicks Suggest Schema, verifies
schema populates textarea, creates agent, deletes it
- 'sends message and displays mocked response' (Agent Chat UI) —
mocks /api/agents/run, types message, clicks send, verifies
markdown heading and tool badge render
17 Playwright tests pass (was 15, +2 new).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add VPN agent and failure handling Playwright tests
New Agent Fabric E2E tests:
- 'runs VPN troubleshooting agent and verifies structured output'
Creates agent with VPN analysis prompt, runs it (mocked),
verifies structured JSON output with ticket IDs (INC-101, INC-312),
referenced_tickets field, and VPN content in rendered output
- 'handles agent run failure gracefully'
Creates agent, runs it with mocked failure response,
verifies UI doesn't crash and shows completion state
19 Playwright tests pass (was 17, +2 new).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix structured output rendering in Agent Fabric UI
The output is now always structured JSON ({message, referenced_tickets}).
The UI now parses it and renders each part appropriately:
- message → rendered as GitHub-flavored Markdown (ReactMarkdown)
- referenced_tickets → rendered as monospace badges below the output
- Extra custom schema fields → rendered as formatted JSON in a pre block
- Button preview → shows message text, not raw JSON
Also handles non-JSON output gracefully (falls back to raw markdown).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add MCP App technical documentation
New: docs/MCP_APP.md — comprehensive guide on how this project
works as an MCP application:
- What an MCP App is (app that exposes business logic via MCP protocol)
- Architecture diagrams: consumers (Claude, Copilot, agents) → MCP endpoint
- Full protocol sequence diagram (initialize → tools/list → tools/call)
- The @operation decorator: single source of truth for REST + MCP + LangChain
- How to connect clients (Claude Desktop, Python, curl examples)
- 4-layer architecture diagram (business logic → operations → adapters → consumers)
- Extension roadmap: Resources, Prompts, SSE streaming
- Security considerations table
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
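The "@operation decorator as single source of truth" pattern the doc describes is essentially a registry that all three adapters (REST, MCP tools/list, LangChain tools) iterate over. A minimal sketch with a hypothetical operation and in-memory data:

```python
OPERATIONS = {}

def operation(name, description=""):
    # Register the function once; REST routes, MCP tool listings, and
    # LangChain tool wrappers are all generated from OPERATIONS.
    def wrap(fn):
        OPERATIONS[name] = {"fn": fn, "description": description}
        return fn
    return wrap

@operation("csv_count_tickets", "Count tickets matching a filter.")
def csv_count_tickets(status=None):
    tickets = [{"status": "open"}, {"status": "closed"}]  # stand-in data
    return sum(1 for t in tickets if status is None or t["status"] == status)
```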
* Add SchemaRenderer + visual SchemaEditor with x-ui widget system
SchemaRenderer (frontend/src/features/workbench/SchemaRenderer.jsx):
- Generic component: takes {data, schema} and renders each property
using x-ui widget annotations
- Widgets: markdown, table, badge-list, stat-card, bar-chart (Nivo),
pie-chart (Nivo), json, hidden
- Auto-detection when no x-ui: string→markdown, integer→stat-card,
array of objects→table, array of strings→badge-list, object→json
- Console debug logging, data-testid per field for E2E testing
SchemaEditor (frontend/src/features/workbench/SchemaEditor.jsx):
- Visual property list editor (no raw JSON editing needed)
- Add/remove properties, set name/type/description
- Widget picker dropdown with all available widgets
- Context-sensitive options (columns for table, label for stat-card,
indexBy/keys for bar-chart)
- Syncs with suggest-schema: LLM suggestion populates visual editor
- Outputs valid JSON Schema with x-ui annotations
Backend:
- DEFAULT_OUTPUT_SCHEMA now has x-ui annotations (markdown + badge-list)
- suggest_schema prompt updated to suggest x-ui widgets per property
Wiring:
- WorkbenchPage uses SchemaRenderer for run output (replaces hardcoded)
- WorkbenchPage uses SchemaEditor for create form (replaces textarea)
20 Playwright tests pass (including new SchemaRenderer widget test).
132 backend tests pass.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
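The widget auto-detection rules above (when a property has no x-ui annotation) map cleanly onto a value-shape dispatch. The real logic lives in SchemaRenderer.jsx; this is a Python sketch of the same decision table:

```python
def auto_widget(value):
    # Fallback widget selection when the schema property has no x-ui hint:
    # string -> markdown, number -> stat-card, array of objects -> table,
    # array of strings -> badge-list, anything else -> json.
    if isinstance(value, str):
        return "markdown"
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        return "stat-card"
    if isinstance(value, list):
        if value and all(isinstance(v, dict) for v in value):
            return "table"
        return "badge-list"
    return "json"
```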
* Improve suggest-schema prompt with full data domain + widget docs
The suggest-schema LLM prompt now includes:
- Ticket data domain (all field names, types, enum values, example cities)
- Available tools with descriptions (csv_list_tickets, csv_search_tickets, etc.)
- Full widget documentation with use-cases and options for each:
markdown, table (columns), badge-list, stat-card (label),
bar-chart (indexBy, keys), pie-chart, json, hidden
- Explicit rules: always include message+referenced_tickets,
match widget to data shape, use snake_case names
This gives the LLM enough context to suggest schemas that actually
match the ticket data (e.g. status distribution → pie-chart,
ticket list → table with incident_id/summary/status columns).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix latency issues: schema title bug + recursion_limit headroom
Investigation found 3 root causes for slow AI calls:
1. gpt-5-nano is a REASONING model — burns 192-832 reasoning tokens
per LLM call (invisible chain-of-thought), taking 2-8s each.
A simple 'say hello' costs 8.4s with 832 reasoning tokens.
2. response_format adds a 3rd LLM call — LangGraph's
generate_structured_response makes a separate LLM call to format
the output as JSON after the ReAct loop finishes.
Without: 4.7s (2 calls). With: 13s (3 calls).
3. Missing 'title' in output_schema crashed with_structured_output.
OpenAI's API requires a top-level 'title' in the JSON Schema.
Fixes applied:
- resolve_output_schema() now auto-adds 'title': 'AgentOutput'
when missing (both default and custom schemas)
- DEFAULT_OUTPUT_SCHEMA has explicit 'title' field
- recursion_limit: user's setting (default 3) is now multiplied by 4
for the actual LangGraph graph, with a floor of 10. This prevents
GraphRecursionError when response_format adds extra graph steps.
Note: The main latency driver (reasoning tokens) is inherent to the
model choice. Users can switch to gpt-4o-mini via per-agent 'model'
field for ~10x faster non-reasoning responses.
133 backend + 20 Playwright tests pass.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
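The two mechanical fixes above are small pure functions. A sketch matching the numbers in the commit (the actual implementations sit in resolve_output_schema and the run path):

```python
def with_title(schema, default_title="AgentOutput"):
    # OpenAI's structured-output API requires a top-level 'title';
    # auto-add one when the schema is missing it.
    return schema if "title" in schema else {**schema, "title": default_title}

def effective_recursion_limit(user_limit):
    # 4x headroom with a floor of 10, because response_format adds
    # extra graph steps beyond the user-visible ReAct iterations.
    return max(10, user_limit * 4)
```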
* Fix agent tool token bloat: compact fields + lower default limits
Root cause: csv_list_tickets tool returned full Ticket objects with ALL
fields (notes, description, resolution, work logs) — ~65K tokens for
100 tickets. The LLM had to process all of this, causing 30-60s per
step with a reasoning model.
Changes to operations.py:
- csv_list_tickets: returns compact dicts (10 fields, not 30+),
default limit 25 (was 100), max limit 100 (was 500)
- csv_search_tickets: same compact treatment, limit 25 (was 50)
- csv_get_ticket: now accepts optional 'fields' parameter for
selective detail drill-down, returns dict (was full Ticket)
- Tool descriptions updated to guide agents: 'use csv_get_ticket
for full details' pattern
Token impact per tool call:
Before: 100 tickets × ~650 tokens = ~65,000 tokens
After: 25 tickets × ~60 tokens = ~1,500 tokens (97% reduction)
Expected latency improvement:
Before: ~13s per tool call (65K token input processing)
After: ~3-5s per tool call (1.5K token input)
153 tests pass (133 backend + 20 Playwright).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
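The compact-dict treatment is a field projection applied before results leave the tool. A sketch with a hypothetical subset of fields (the real list in operations.py has 10):

```python
COMPACT_FIELDS = ("incident_id", "summary", "status", "priority",
                  "assignee")  # hypothetical subset of the compact field list

def to_compact(ticket, fields=COMPACT_FIELDS):
    # Project a full ticket dict down to list-view fields, dropping
    # token-heavy text like notes, description, and work logs.
    return {k: ticket[k] for k in fields if k in ticket}

full = {"incident_id": "INC-101", "summary": "VPN down", "status": "open",
        "notes": "very long worklog text...", "resolution": "restarted client"}
compact = to_compact(full)
```

Agents then fetch the heavy fields on demand via csv_get_ticket, which is what the updated tool descriptions steer them toward.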
* Drop response_format to eliminate extra LLM call
LangGraph 1.0.8 implements response_format via a SEPARATE LLM call
(generate_structured_response) — adding 5-10s latency per run.
The refactor to inline tool-based structured output (github.com/
langchain-ai/langgraph/issues/5872) hasn't shipped yet.
Fix: remove response_format from create_react_agent. The system
prompt already instructs the LLM to produce JSON matching the
schema (via append_output_instructions). The frontend's
SchemaRenderer handles both parsed JSON and raw text gracefully.
Latency impact:
Before: 3 LLM calls (decide tool + answer + format JSON) ~13s
After: 2 LLM calls (decide tool + answer as JSON) ~5s
When LangGraph ships inline structured output, we can re-enable
response_format with zero code changes (just pass it back to
build_react_agent).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Enable OpenAI JSON mode for guaranteed valid JSON output
Adds response_format: {type: 'json_object'} to the ChatOpenAI
constructor via model_kwargs. This is a model-level setting that
constrains token generation to valid JSON — no extra LLM call,
no post-processing, just guaranteed JSON from every response.
This is different from LangGraph's response_format parameter
(which adds a separate LLM call). This is OpenAI's native JSON
mode applied at the API level during the same call.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Revert JSON mode — incompatible with non-strict tool schemas
OpenAI's response_format: json_object requires all tools to have
strict schemas. Our tools (from @operation decorator) don't set
strict=True, causing: 'csv_search_tickets is not strict. Only
strict function tools can be auto-parsed'.
Reverting to prompt-only JSON enforcement, which tested at 3/3
reliability with gpt-5-nano. The frontend fallback (wraps non-JSON
as {message: raw_text}) provides additional safety.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add widget E2E tests + strict tools + Agent Chat JSON mode
New Playwright tests (23 total, +3):
- 'renders bar-chart and pie-chart from x-ui annotations' — injects
mock agent with output_schema containing x-ui widgets, verifies
SVG rendering for pie/bar charts, stat-card with label, badges
- 'renders raw JSON for object data' — verifies auto-detection:
objects render as formatted JSON in pre blocks
- 'falls back gracefully for non-JSON output' — verifies plain
markdown string wraps as {message: text} and renders correctly
Agent Chat (agents.py) fixes:
- Added JSON output mode (response_format: json_object)
- Added strict=True tool binding for compatibility
- Matches the same pattern as agent_builder
Strict tool binding (react_runner.py):
- build_react_agent pre-binds tools with strict=True
- Required for OpenAI JSON mode (response_format: json_object)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix NameError: OpenAICallLoggingCallback was removed but still referenced
The class was deleted in the dead code cleanup but agents.py still
used it. Replaced with make_llm_logging_callback from agent_builder.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add 'Show in Menu' — agents appear as tabs in navigation
When an agent has show_in_menu=true, it appears as a tab in the
main navigation bar. Clicking it opens a dedicated run page with
just the input field, run button, and SchemaRenderer output.
Backend:
- AgentDefinition: new show_in_menu bool field (default false)
- AgentDefinitionCreate/Update: show_in_menu parameter
- Migration for existing DBs
- Service wires it through create/update
Frontend:
- WorkbenchPage: 'Show in menu' checkbox in create form
- App.jsx: fetches agents with show_in_menu=true, injects as tabs
- AgentRunPage.jsx: simple standalone run page (title, description,
optional input, run button, SchemaRenderer output)
- Dynamic routes: /agent-run/{agentId}
E2E test:
- Creates agent via API with show_in_menu=true
- Verifies tab appears in navigation with agent name
- Clicks tab, verifies AgentRunPage renders
- Runs agent (mocked), verifies output with SchemaRenderer
24 Playwright + 133 backend = 157 tests pass.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add missing tools to chat agent: csv_sla_breach_tickets, csv_ticket_stats
The SLA Breach page was slow because the chat agent (agents.py)
didn't have the csv_sla_breach_tickets tool. The prompt said
'call csv_sla_breach_tickets' but the tool didn't exist, so the
LLM tried to replicate SLA breach logic manually using
csv_list_tickets — fetching many tickets and reasoning over them.
Now the chat agent has all 6 CSV tools matching the operations:
- csv_list_tickets (existing)
- csv_get_ticket (existing)
- csv_search_tickets (existing)
- csv_ticket_fields (existing)
- csv_sla_breach_tickets (NEW — pre-computed, ~1000 tokens)
- csv_ticket_stats (NEW — aggregated stats, ~350 bytes)
Expected improvement: 1 tool call (~1000 tokens) instead of
multiple list calls + manual reasoning (~30-60K tokens).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add ticket detail modal and enhance CSV ticket table functionality
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* Refactor CSVTicketTable component: reorder DialogActions import for consistency
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* Add reasoning_effort config + new tools for major speed improvement
Performance:
- reasoning_effort='low' as default — reduces gpt-5-nano from
512 reasoning tokens (~7s) to 0-192 tokens (~1-3s) per LLM call
- Configurable per agent: low (fast), medium, high (deep), default
- Both agent_builder and legacy chat agent use reasoning_effort='low'
New tools:
- csv_count_tickets: count matching tickets WITHOUT returning data.
Lets the LLM check 'how many VPN tickets?' (~50 tokens) before
deciding to fetch details (~5000 tokens)
- csv_search_tickets_with_details: search + return full details
(notes, resolution, description) in ONE call. Eliminates the
N × csv_get_ticket drill-down pattern that caused the
'Ticket Knowledgebase Creator' to make 5+ sequential LLM calls
Impact on 'Ticket Knowledgebase Creator' agent:
Before: search(compact) → get_ticket × N → generate = 5+ LLM calls × ~5s = 25s+
After: search_with_details(query, limit=10) → generate = 2 LLM calls × ~2s = 4s
Also fixed: removed stale response_format: json_object from build_llm
(was causing strict tool errors).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Update incident details in FALL_2_HARDWARE_PERIPHERIE and FALL_3_ZUGRIFF_BERECHTIGUNG documentation for consistency and clarity
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* Fix: all E2E tests now clean up created agents
Two tests were creating agents via the UI but not deleting them,
leaving orphans in the DB after each test run:
- 'runs an agent and appends output to run button'
- 'requires and forwards configured run input'
Added Delete button clicks at the end of both tests.
All 10 agent-creating tests now verified to clean up.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Rewrite workbench e2e tests for tabbed UI
- Add helpers: goToCreateTab, goToAgentsTab, createAgent, createAgentViaAPI, deleteAgentViaAPI, mockEmptyRuns
- Update 'creates and deletes' to use Create Agent tab and agent cards
- Update 'runs an agent' to verify output in RunsSidePanel
- Update 'requires input' to use card inline input field + Go button
- Update 'suggest schema' to navigate to Create tab first
- Update 'failure handling' to check error in run detail panel
- Refactor SchemaRenderer tests to use setupSchemaTest helper (API-created agents, run output in side panel)
- Keep Agent Chat UI and Show in Menu tests unchanged
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat: redesign workbench with agent cards, runs side panel, and tabbed layout
- Split WorkbenchPage into tabbed UI: Agents (cards grid) + Create Agent
- AgentCardsPanel: icon cards with Run/Edit/Delete buttons per agent
- RunsSidePanel: scrollable run history with click-to-view output
- AgentEditDialog: edit existing agents via dialog
- AgentCreateForm: extracted creation form (reusable for create + edit)
- Added API functions: updateWorkbenchAgent, listAllRuns, getRun
- All 47 Playwright tests pass (12 workbench tests updated for new UI)
- Removed Ollama references from setup.sh and package.json
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix: LiteLLM fallback in agent_builder + add live lifecycle test
- Fixed agent_builder/engine/react_runner.py: ChatLiteLLM when no API key
- Fixed agent_builder/service.py: removed hard OpenAI key requirement
- Fixed agent_builder/chat_service.py: same
- Fixed RunsSidePanel output parsing for raw string output
- Added full lifecycle e2e test (live LLM): create → run → edit → re-run → verify history → delete
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat: suggest schema & tools, default no tools, pure function refactor
- 'Suggest Schema & Tools' button: LLM suggests output schema AND tool selection
- Backend: _build_suggest_prompt and _parse_suggest_response as pure functions
- Frontend: tools default to empty, populated by suggest response
- RunsSidePanel: pure calculations extracted (buildAgentMap, sortRunsNewestFirst,
resolveOutputSchema, resolveAgentName, parseRunOutput, formatRelativeTime)
- All 49 Playwright tests pass (2 live LLM tests included)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix: result dialog, chart rendering, markdown fence parsing
- Run results now open in a large Dialog (900px wide, 85vh max)
- Fixed parseRunOutput: strips markdown code fences from LLM output
- Fixed PieChartWidget: filters non-numeric values, formats labels
- Fixed BarChartWidget: accepts object {key: number} in addition to arrays
- Chart containers: 300px height, 600px max-width
- Tests: close dialog before cleanup (dialog blocks pointer events)
- All 49 Playwright tests pass
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat: all-live Playwright tests, result dialog fix, runs panel fix
- Rewrote workbench tests: ZERO mocks, all 8 tests use live LLM
- Fixed RunsSidePanel: min-height for layout, runs visible on load
- Fixed parseRunOutput: strips markdown fences from LLM output
- Fixed chart widgets: pie/bar handle non-numeric values, proper sizing
- Fixed dialog close: tests use X button (in viewport) not Close (scrolled)
- Total: 43 tests, all passing, all live (1.1 min)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* refactor: extract shared parseRunOutput, add delete-all-runs
- Extracted parseRunOutput (fence-stripping + JSON parsing) into
outputUtils.js — shared by RunsSidePanel and AgentRunPage
- Fixed AgentRunPage (show_in_menu): renders markdown instead of raw JSON
- Added DELETE /api/workbench/runs endpoint + trash button in Runs panel
- Runs panel: min-height 500px so content is visible on load
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
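The shared parseRunOutput helper is JavaScript (outputUtils.js); its fence-stripping plus JSON-with-fallback logic looks roughly like this Python sketch:

```python
import json
import re

def parse_run_output(raw):
    # Strip an optional ```json ... ``` fence, then try to parse JSON;
    # fall back to wrapping the raw text as {"message": raw} so the
    # renderer always receives the structured shape.
    text = raw.strip()
    fenced = re.match(r"^```[a-zA-Z]*\n(.*)\n```$", text, re.DOTALL)
    if fenced:
        text = fenced.group(1).strip()
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return {"message": raw}
```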
* feat: add SSE activity monitor, settings page, agent templates & run history
- Agent Activity page with real-time SSE event stream (tool calls, LLM
thinking, run lifecycle), filterable by run_id via URL query param
- EventBus pub/sub + StreamingCallbackHandler wired into ReAct engine
- Settings page: drag-and-drop tab reorder, hide/show toggles, icon
picker (57 FluentUI icons), persisted to localStorage
- Agent templates dropdown (KBA from tickets, worklog stats, next step
advisor) pre-fills the create agent form
- AgentRunPage now shows filtered run history with detail dialog and
link to Activity page filtered by run_id
- 19 new Playwright E2E tests (8 activity + 11 settings)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat: add Support Workflow canvas page with interactive editor
Purely browser-side workflow visualization using HTML Canvas:
- 5 node types: Start, End, Action, Decision, Wait (each with
distinct shape and color)
- Drag-and-drop to reposition nodes
- Shift+drag to create connections between nodes
- Double-click to rename nodes inline
- Animate button shows flowing dots along edges
- Toolbar to add/delete nodes, reset to default workflow
- Default workflow: Ticket Created → Auto-Classify → Priority
decision → L1/L2 paths → Resolved decision → Close/Reopen
- 9 Playwright E2E tests with screenshots
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat: metro-map workflow with presets, color picker, agent assignment
Rewrite WorkflowPage as metro-map style inspired by Incident &
Problem Solving methodology:
- 3 workflow presets: Incident Solving, Problem Solving, Change Mgmt
- Metro station circle nodes with thick colored edge lines
- Edge color inherited from outgoing node
- Click node → dialog with color picker (8 colors) and agent selector
(10 agent presets)
- Agent indicator dot on nodes with assigned agents
- Color legend auto-generated from used colors
- 12 Playwright E2E tests covering presets, node config, animation
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat: friendlier workflow editor — connect mode, double-click add, dialog edges
- Connect Mode toggle button: click source node then target to draw edge
(no shift key needed). Crosshair cursor + green '+' hint on target.
- Double-click empty canvas area to add a node at that position
- Node dialog now has 'Connect to…' section with buttons for each
unconnected node — draw edges without touching the canvas
- Add Node button opens config dialog immediately for the new node
- Dynamic help text updates based on current mode
- Escape key exits connect mode
- Updated Playwright tests for new UX
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat: add 'Improve my Prompt' button to Agent Fabric
LLM-powered prompt improvement following 2025 best practices:
- Backend: /api/workbench/improve-prompt endpoint + service method
that rewrites prompts with clear role, goals, numbered steps,
tool references, output format, and constraints
- Frontend: '✨ Improve my Prompt' button below the system prompt
textarea, disabled when empty, replaces prompt with improved version
- 4 Playwright E2E tests with before/after screenshots
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix: prompt improvement skips output format (handled by schema)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix: improve-prompt uses selected tools, not all available
Pass tool_names from frontend form state so the LLM only references
tools the user actually selected for this agent.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix: remove maxHeight on tools list to avoid scrolling
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix: replace worklog template with Topic & Product Analysis
Worklog columns in data.csv are all empty/zero. New template analyzes
topics, products, services, priority distribution, and group workload
using data that actually exists in the CSV.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---------
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix: enhance CSV ticket handling and update LLM backend initialization (#25)
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* Implement code changes to enhance functionality and improve performance
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* feat: enhance UI components and improve test coverage for agent functionality
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* Add model selection to agent workbench
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add DSPy Prompt Tuning Playground — 8 notebooks, 20 tasks, 165 tests
Interactive Jupyter notebook series teaching prompt optimization with DSPy.
Organized by learning concepts from Grokking Simplicity and A Philosophy
of Software Design.
Structure:
- 8 notebooks (00-07): Introduction → Data/Calc/Actions → Deep Modules →
Evaluation as Spec → Optimizer as Compiler → Domain Tuning → Agentic → Finale
- 20 tasks across 4 tiers: Basics, Reasoning, Composition, Agentic
- dspy_tasks/ library: data.py (DATA), calculations.py (CALCULATIONS),
actions.py (ACTIONS), tools.py, visualize.py (ipywidgets + Plotly)
- 16 JSON datasets (13 generic + 3 CSV-derived from ticket data)
- 165 passing pytest tests covering signatures, metrics, and registry
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Integrate DSPy notebooks with project LiteLLM/Copilot config
Add config.py that reads .env and dynamically discovers models via
litellm.get_valid_models() — same env vars as the backend (LITELLM_MODEL,
LITELLM_FALLBACK_MODELS). Replace all hardcoded model lists in 8 notebooks
with get_available_models(). Replace raw dspy.LM() calls with
configure_dspy(). 192 tests passing (27 new config tests).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add DSPy Playground section to README
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Remove unused screenshot files from the repository
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
* Add cross-platform start.sh for notebook playground
Auto-creates venv, installs/updates deps, launches Jupyter Lab.
Works on macOS (zsh/bash) and Ubuntu (bash).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Replace pure-function tests with E2E tests hitting live LiteLLM
Remove test_calculations.py, test_data.py, test_config.py (pure function tests).
Add test_e2e.py: 11 tests covering config discovery, tier 1-3 predictions,
baseline scoring, BootstrapFewShot optimization, and cross-model comparison
— all running against real Copilot models via LiteLLM.
Also fix configure_dspy() to inject Editor-Version headers for Copilot models.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Translate notebooks to informal German, add Mermaid diagrams + quiz
All 8 notebooks: markdown → informal German (du form).
Code cells unchanged. Coherent story arc with bridges between notebooks.
- 5 Mermaid diagrams (learning path, DATA/CALC/ACTIONS, module depth,
optimizer pipeline, full architecture)
- Interactive quiz in Notebook 07 (7 MC questions via ipywidgets)
- mermaid() and quiz() helpers added to visualize.py
- 11 E2E tests still passing
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Tone down DSPy, add HTML/CSS diagrams, collapse setup cells
- DSPy mentions in markdown: 29 → 4 (only where referencing code)
- Focus on universal concepts: evaluation, tuning, optimization
- mermaid() (CDN) → diagram()/diagram_compare() (pure HTML/CSS, zero deps)
- Setup code cells collapsed via jupyter.source_hidden metadata
- Hands-on narrative: baseline → improve → see difference
- 11 E2E tests passing
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add hands-on learning flow: LLM failures, manual tuning, benchmarks
New library functions:
- run_with_prompt(): evaluate custom user instructions
- run_on_examples(): evaluate on benchmark datasets
- prompt_workshop(): prefilled Textarea + Run + score history widget
- benchmarks.py: load_hotpotqa(), load_math(), load_truthfulqa()
- datasets/truthfulqa_sample.json: 30 validated hallucination questions
Notebook enhancements (user edits text only, never code):
- NB01: Tricky sarcasm examples showing LLM failures + TruthfulQA
benchmark + 'Was ist Accuracy?' ('What is accuracy?') explanation
- NB03: Interactive prompt_workshop() for manual tuning with score
history tracking + TruthfulQA editing exercise
- NB04: 'Erst du, dann die Maschine' ('You first, then the machine') —
manual attempt before auto-optimization with side-by-side comparison
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix diagram_compare() call in NB05 (left/right, not before/after)
All 8 notebooks verified: execute headless without errors.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Restructure: Grokking Simplicity + Deep Modules to appendix, print→display
Main flow streamlined to 6 notebooks:
00 Intro → 01 Evaluation → 02 Optimization → 03 Domain → 04 Agents → 05 Finale
Grokking Simplicity and Deep Modules moved to optional appendices.
Replaced print() with styled display(HTML()) in standalone cells.
Removed '20 Aufgaben' ('20 tasks') listing from intro. Updated all bridges,
learning path diagram, and README.
All 8 notebooks execute headless without errors. 11 E2E tests pass.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix HTML string escaping in NB01 metrics cell
Use triple-quoted f-string and HTML entities (&lsquo;) instead of
raw single quotes inside single-quoted strings.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Replace display(HTML()) with native print/display in all notebooks
No more inline HTML in code cells. Use print() with emoji formatting,
pandas DataFrames, and display(Markdown()) for the finale only.
15 cells rewritten across 7 notebooks.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix token_f1 examples to show precision vs recall tradeoff
Old example had precision=recall=0.5 which doesn't teach anything.
New examples show: cautious (high P, low R), overeager (low P, high R),
perfect, and completely wrong.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
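The precision/recall tradeoff these examples teach can be reproduced with a minimal token-level F1 (an illustrative sketch — the repo's actual `token_f1` metric may tokenize or normalize differently):

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)  # how much of the prediction is correct
    recall = overlap / len(ref_tokens)      # how much of the reference is covered
    return 2 * precision * recall / (precision + recall)

# Cautious answer: everything said is right (P=1.0) but incomplete (R=0.5)
print(round(token_f1("paris", "paris france"), 2))                        # → 0.67
# Overeager answer: reference fully covered (R=1.0) but padded (P=0.5)
print(round(token_f1("paris france europe earth", "paris france"), 2))   # → 0.67
```

Both failure modes land on the same F1 — which is exactly why the old precision=recall=0.5 example taught nothing.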
* Add clickable links to next notebook at bottom of each
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Merge 00+01 into single notebook, streamline to 5+2 structure
00_introduction + 01_evaluation merged into 01_evaluation_and_tuning.
Flow: Setup → first call → LLM failures → accuracy → manual tuning
all in one notebook. Normalized cell IDs. Updated README + links.
Structure: 01-05 main path + 2 appendices.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Replace 'Deine erste Vorhersage' ('Your first prediction') with myth-busting quiz
Instead of DSPy-specific Predict demo, start with factual questions
where the model gets things WRONG (Australian capital, glass myth,
goldfish memory). Shows immediately: LLMs sound confident but aren't
always correct. Motivates why evaluation matters.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add 5 evaluation strategies with live demos
After showing that LLMs return unpredictable strings, teach 5 solutions:
1. Constrain answers (number/yes/no) — live demo
2. Multiple choice
3. Keyword matching
4. Semantic similarity (Token F1)
5. LLM-as-Judge — live demo with a judge LLM
Includes tradeoff comparison table.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
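Strategies 1 and 3 are simple enough to sketch inline (hypothetical helpers for illustration — the notebooks' real metric functions may differ):

```python
def constrained_match(prediction: str, reference: str) -> bool:
    """Strategy 1: constrain the answer space (number / yes / no),
    then an exact case-insensitive comparison is enough."""
    return prediction.strip().lower() == reference.strip().lower()

def keyword_match(prediction: str, keywords: list[str]) -> float:
    """Strategy 3: fraction of required keywords present in the answer."""
    text = prediction.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / len(keywords)

print(constrained_match(" Yes ", "yes"))                                   # → True
print(keyword_match("Reset the router, then flush DNS", ["router", "dns"]))  # → 1.0
```

Semantic similarity (Token F1) and LLM-as-Judge trade this simplicity for robustness to paraphrase.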
* Single model config: remove model arg from all action functions
Model is configured ONCE via configure_dspy() at notebook startup.
Action functions (run_baseline, run_optimization, etc.) use the
already-configured LM. No more threading issues from widget callbacks.
Fixed broken prompt_workshop() calls (missing commas from regex cleanup).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add markdown explanations before every code cell, collapse setup
Every visible code cell now has a German markdown header explaining
what the user will see and why it matters. Setup cells collapsed.
Fixed compare_models import (removed from simplified actions.py).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix nb01: swap hidden setup cell before markdown to ensure visible code has header
Reorder cells [22] and [23] so the hidden setup code comes before
the 'Klassische Tests vs KI-Metriken' ('Classic tests vs. AI metrics') markdown, which then directly
precedes the visible diagram_compare code cell.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Restructure NB01: numbered chapters, code gen moved to end with exec
Clean chapter flow:
1. Wie gut ist das Modell? ('How good is the model?') (quiz + failures)
2. Wie bewertet man LLM-Antworten? ('How do you evaluate LLM answers?') (5 strategies + live demos)
3. Metriken in der Praxis ('Metrics in practice') (F1, composite, tradeoffs)
4. Prompt-Tuning Workshop (interactive text box)
5. Kann das Modell Code schreiben? ('Can the model write code?') (generates + executes code)
Removed 'Teil 2' divider, removed orphan cells.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Rename 'Workshop' to 'Kann ich mit dem Prompt die Genauigkeit verbessern?' ('Can I improve accuracy with the prompt?')
Clearer framing: not a DSPy workshop, but the natural question after
seeing failures + metrics. Bridges to automatic optimization.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Replace model picker widgets with simple code variable
MODEL = 'github_copilot/gpt-5.1' — user edits the string to switch.
Available models listed via print(). No more dropdown widgets.
Cleaned all model_dd/model_picker references across all notebooks.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix table truncation: show full content with word-wrap
Removed [:80] and [:100] truncation from actions.py and visualize.py.
Table uses table-layout:fixed + word-break:break-word for wrapping.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Remove ALL widgets — pure executable notebooks, no interaction needed
Every notebook now runs top-to-bottom without clicks or input:
- Model selection: MODEL = 'github_copilot/gpt-5.1' (edit the string)
- Task selection: TASK = 'ticket_routing' (edit the string)
- ROI calculator: plain variables instead of sliders
- Prompt tuning: PROMPT_V1/V2 variables instead of Textarea
- No more buttons, dropdowns, or callbacks anywhere
All 5 notebooks verified headless (NB03's token expired during the long run).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Enhance code generation dataset and improve example evaluation with timeout handling
- Added a new function to evaluate mathematical expressions respecting operator precedence and parentheses to the code generation dataset.
- Introduced a function to merge overlapping intervals in the code generation dataset.
- Modified the _evaluate_examples function to include a timeout feature for LLM calls, ensuring that each example is evaluated within a specified time limit.
- Improved error handling and output formatting during example evaluations to provide clearer feedback on timeouts and errors.
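A per-example LLM-call timeout like the one described can be sketched with `concurrent.futures` (illustrative only; the actual `_evaluate_examples` implementation may differ):

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FuturesTimeout

def call_with_timeout(fn, *args, timeout: float = 30.0):
    """Run a (potentially slow) LLM call in a worker thread and give up
    after `timeout` seconds. Returns (result, None) on success, or
    (None, error_message) on timeout/failure. Note: a timed-out worker
    thread may linger until the underlying call returns."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(fn, *args)
        try:
            return future.result(timeout=timeout), None
        except FuturesTimeout:
            return None, f"timeout after {timeout}s"
        except Exception as exc:
            return None, f"error: {exc}"
    finally:
        pool.shutdown(wait=False, cancel_futures=True)

result, err = call_with_timeout(lambda q: q.upper(), "hello", timeout=5.0)  # → ("HELLO", None)
```

Catching the timeout per example (instead of letting one hung call block the whole run) is what makes the clearer timeout/error feedback possible.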
* Refactor code generation examples: simplify Fibonacci and palindrome functions, and enhance flatten function implementation
* Add 5s timeout to code generation exec() — prevents infinite loops
Both compilation and test execution are wrapped with signal.alarm(5).
If generated code runs too long (infinite loop), it times out cleanly
with '❌ Timeout: Code läuft zu lange (Endlosschleife?)' ('code runs too long; infinite loop?').
NB01 verified: executes headless without errors.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
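The `signal.alarm(5)` wrapper can be sketched like this (a minimal version, assuming Unix and the main thread — `SIGALRM` is not available on Windows, and the repo's actual wrapper may differ):

```python
import signal

class CodeTimeout(Exception):
    """Raised when generated code exceeds its wall-clock budget."""

def _on_alarm(signum, frame):
    raise CodeTimeout("Code läuft zu lange (Endlosschleife?)")

def run_generated_code(source: str, timeout: int = 5) -> dict:
    """Compile and exec LLM-generated code with a hard wall-clock limit."""
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(timeout)  # deliver SIGALRM after `timeout` seconds
    try:
        namespace: dict = {}
        exec(compile(source, "<generated>", "exec"), namespace)
        return namespace
    finally:
        signal.alarm(0)  # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```

Because the alarm fires as a signal, it interrupts even a pure-Python infinite loop, which thread-based timeouts cannot do.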
* Add baseline individual scores to optimization results
- Updated the `run_optimization` function to include `baseline_individual_scores` in the returned results, capturing per-example results before optimization.
- Modified the `OptimizationResult` class to define `baseline_individual_scores` as a list of dictionaries, allowing for detailed tracking of individual scores pre-optimization.
* Rewrite NB03: clean 4-step arc, no duplicates
1. See your real ticket data
2. Run generic prompt → mediocre score
3. Tune with your data → big improvement
4. Takeaway: your data is your moat
Removed 3x duplicate headers, old button references, and
repeated explanations. 21 cells → 11 cells.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Strip notebook outputs + install nbstripout git filter
Outputs are automatically stripped on git add via .gitattributes filter.
Notebooks checked out from git will have no outputs — run them fresh.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
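The filter setup from this commit amounts to roughly the following (a config fragment; flag names from nbstripout's documented CLI — check `nbstripout --help` for your version):

```shell
pip install nbstripout

# Register the clean filter in .git/config and write the attribute rule
# to a tracked .gitattributes, so outputs are stripped on `git add`
nbstripout --install --attributes .gitattributes

# Resulting .gitattributes entry (roughly):
#   *.ipynb filter=nbstripout
```

Each contributor still needs `nbstripout --install` once per clone, since git filters live in the untracked `.git/config`.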
* Integrate notebook setup into setup.sh, fix start.sh
setup.sh: adds notebook venv + deps install after main setup.
start.sh: --install-only flag for non-interactive use,
fixed filename reference (00→01). Both work on bash + zsh.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Refactor optimization notebook and enhance actions module
- Updated the optimization notebook to improve prompt instructions and evaluation process.
- Changed the manual prompt variable name for clarity and adjusted the evaluation metrics.
- Enhanced the `run_optimization` function to accept custom instructions for better prompt optimization.
- Added a new function to format optimized prompts for improved readability.
- Introduced a new CSV analysis script to summarize categories, priorities, and assigned groups from the dataset.
* Add data processing scripts for ticket analysis and routing
- Implemented _check_fields.py to analyze fields in the CSV data and print useful content.
- Created _check_more_fields.py to explore additional fields of interest and their fill rates.
- Developed _curate_data.py to curate and score tickets for training, focusing on informative content.
- Added _import_csv.py to import CSV data, analyze incident types, and generate new tickets for underrepresented groups.
- Introduced _predict_group.py to evaluate predictive power of various fields for ticket assignment.
- Built _rebuild_data.py to create a balanced dataset for ticket routing, ensuring representation across groups.
* Enhance run_optimization function: adjust BootstrapFewShot parameters for improved performance
* SECURITY: Pin litellm to safe versions (supply chain attack)
litellm PyPI versions 1.82.7 and 1.82.8 were compromised by attacker
TeamPCP with credential-stealing malware. See:
https://github.com/BerriAI/litellm/issues/24518
Pinned to known-safe versions:
- backend: litellm==1.82.1
- notebooks: litellm==1.82.6
Do NOT upgrade until BerriAI confirms PyPI is clean.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Move task picker from NB02 to NB03, expand domain tuning
NB02: removed 'Beliebige Aufgabe' ('any task') section (belongs in NB03)
NB03: 6-step arc with task catalog + domain tuning:
1. See all 20 tasks
2. Pick any task and optimize it
3. Load real ticket data
4. Run generic prompt → mediocre
5. Tune with domain data → much better
6. Takeaway: your data is your moat
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix _format_optimized_prompt for different dump_state return types
dump_state() can return a list or dict depending on DSPy version.
Handle both gracefully with type checks.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
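The type-tolerant handling can be sketched as a small normalizer (a hypothetical helper for illustration — the exact shapes `dump_state()` returns vary by DSPy version, and the repo's `_format_optimized_prompt` may do more):

```python
def normalize_state(state) -> dict:
    """dump_state() may return a dict (some DSPy versions) or a list of
    key/value pairs (others) — normalize to a dict before reading fields."""
    if isinstance(state, dict):
        return state
    if isinstance(state, list):
        return dict(state)  # assumes a list of (key, value) pairs
    raise TypeError(f"unexpected dump_state() type: {type(state).__name__}")

print(normalize_state([("signature", "q -> a")]))  # → {'signature': 'q -> a'}
```

Normalizing once at the boundary keeps the formatting code itself free of version checks.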
* Refactor and enhance agent tasks and tools
- Updated task definitions in notebooks to use more descriptive field names (e.g., 'query' to 'question').
- Changed the default task in domain tuning notebook from "sentiment" to "plan_execute".
- Improved agent behavior optimization by refining prompts and adding explanations for model choices.
- Enhanced search functionality in tools to provide better ticket search results and counts.
- Updated calculations for plan quality and self-correct accuracy to align with new output structures.
- Added MIPROv2 optimization step to improve agent responses based on vague prompts.
- Adjusted dataset for search agent to include more complex queries and answers.
- Updated kernel specifications across notebooks to use Python 3.13.12.
* Update domain tuning notebook to use 'plan_execute' task and improve agent optimization examples. Change model to 'github_copilot/gpt-4o-mini' for faster performance. Enhance explanations for prompt optimization and MIPROv2. Adjust markdown formatting and update kernel specifications across notebooks.
* Refactor domain tuning notebook: streamline optimization section and enhance takeaway insights
* Refactor environment loading: support .env files from both project root and notebooks directory
Signed-off-by: Andre Bossard <anbossar@microsoft.com>
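A dual-location .env lookup like this can be sketched with a minimal loader (illustrative fallback only — the project may use python-dotenv; path names below are assumptions):

```python
import os
from pathlib import Path

def load_env(*candidates: Path) -> None:
    """Minimal .env loader: reads each existing file in order; real
    environment variables and earlier files win (never overwritten)."""
    for path in candidates:
        if not path.is_file():
            continue
        for line in path.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"').strip("'"))

# Check the project root first, then the notebooks directory
load_env(Path(".env"), Path("notebooks/.env"))
```

Using `os.environ.setdefault` means CI-provided variables always take precedence over file contents.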
* Refactor test files: streamline imports and enhance readability in LLM service tests and evaluation…