Parent
Part of #570 (Chat-to-proposal NLP gap)
Problem
All three LLM providers (Mock, OpenAI, Gemini) use the same static LlmIntentClassifier for intent detection. The real LLM's response content is only used for the conversational reply — it is never used to extract structured instructions. This wastes the NLP capability of real providers.
Proposed Architecture
System Prompt Addition
When sending chat to OpenAI/Gemini, include a system prompt that:
- Describes Taskdeck's supported instruction patterns
- Asks the LLM to detect actionable intent
- Requests structured instruction output alongside the conversational reply
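As a sketch, the system prompt might read something like the following. The exact wording, and the instruction patterns beyond `create card` (e.g. move/delete), are assumptions to be tuned against Taskdeck's actual supported grammar:

```text
You are the assistant for Taskdeck, a kanban-style task board.
Supported instruction patterns include: create card "<title>" (other patterns
such as moving or deleting cards may apply, depending on the parser).
When the user's message implies one or more supported actions, set "actionable"
to true and emit each action as a separate instruction string, alongside your
conversational reply. Otherwise set "actionable" to false and leave
"instructions" empty. Always answer in the requested JSON shape.
```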
Structured Output
Use the LLM's structured output capability:
- OpenAI: function calling or JSON mode
- Gemini: structured output schema
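For OpenAI, one possible `response_format` payload using the Chat Completions JSON-schema mode (the schema name is illustrative; Gemini's `responseSchema` would mirror the same shape):

```json
{
  "type": "json_schema",
  "json_schema": {
    "name": "chat_instruction_extraction",
    "schema": {
      "type": "object",
      "properties": {
        "reply": { "type": "string" },
        "actionable": { "type": "boolean" },
        "instructions": { "type": "array", "items": { "type": "string" } }
      },
      "required": ["reply", "actionable", "instructions"]
    }
  }
}
```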
Response shape:
```json
{
  "reply": "Sure! I'll create onboarding tasks for non-technical roles.",
  "actionable": true,
  "instructions": [
    "create card \"HR orientation session\"",
    "create card \"Communication tools walkthrough\"",
    "create card \"Company culture introduction\""
  ]
}
```
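On the C# side this maps onto `LlmCompletionResult`. A minimal sketch of the extended type, assuming the existing record's actual shape may differ:

```csharp
// Illustrative only — the current LlmCompletionResult may carry other members.
public sealed record LlmCompletionResult(
    string Reply,
    bool Actionable,
    IReadOnlyList<string>? Instructions); // null or empty when no actionable intent
```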
Fallback Strategy
- Mock provider: keep static classifier (deterministic for tests)
- Real providers: use LLM extraction, fall back to static classifier on parse failure
- Degraded mode: use static classifier when LLM is unavailable
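The fallback chain could be sketched as below. `StaticClassifier` is a hypothetical stand-in for the existing `LlmIntentClassifier` call, and the deserialization target assumes the `Instructions` field has been added to `LlmCompletionResult`:

```csharp
using System.Text.Json;

// Sketch: prefer LLM-extracted instructions, fall back to the static classifier
// on parse failure or when no structured output is present.
IReadOnlyList<string> ExtractInstructions(string rawContent, string llmJson)
{
    try
    {
        var result = JsonSerializer.Deserialize<LlmCompletionResult>(llmJson);
        if (result?.Instructions is { Count: > 0 })
            return result.Instructions;
    }
    catch (JsonException)
    {
        // Malformed structured output — fall through to the static classifier.
    }
    return StaticClassifier.Classify(rawContent); // hypothetical helper
}
```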
ChatService Flow Change
```csharp
// Current (line 232):
ParseInstructionAsync(dto.Content, ...) // raw user message

// Proposed:
if (llmResult.Instructions?.Any() == true)
{
    foreach (var instruction in llmResult.Instructions)
        ParseInstructionAsync(instruction, ...) // LLM-structured instruction
}
else
{
    ParseInstructionAsync(dto.Content, ...) // fallback to raw user message
}
```
Affected Files
backend/src/Taskdeck.Application/Services/ChatService.cs — flow change
backend/src/Taskdeck.Application/Services/OpenAiLlmProvider.cs — system prompt + structured output
backend/src/Taskdeck.Application/Services/GeminiLlmProvider.cs — system prompt + structured output
backend/src/Taskdeck.Application/Services/LlmCompletionResult.cs — add Instructions field
backend/src/Taskdeck.Application/Services/ChatCompletionRequest.cs — add system prompt support
Acceptance Criteria