LLM-assisted instruction extraction from natural language chat #573

@Chris0Jeky

Description

Parent

Part of #570 (Chat-to-proposal NLP gap)

Problem

All three LLM providers (Mock, OpenAI, Gemini) rely on the same static LlmIntentClassifier for intent detection. The real LLM's response content is used only for the conversational reply; it is never used to extract structured instructions. This leaves the NLP capability of the real providers untapped.

Proposed Architecture

System Prompt Addition

When sending chat to OpenAI/Gemini, include a system prompt that:

  1. Describes Taskdeck's supported instruction patterns
  2. Asks the LLM to detect actionable intent
  3. Requests structured instruction output alongside the conversational reply
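The three steps above might translate into a system prompt along these lines (a sketch; the pattern list and wording are placeholders, not Taskdeck's real instruction grammar):

```
You are Taskdeck's chat assistant. Taskdeck understands instruction
patterns such as: create card "<title>" (other patterns omitted here).
For every user message, return a JSON object with:
  - "reply": your conversational answer
  - "actionable": true if the message contains an actionable request
  - "instructions": zero or more instructions in the patterns above
If nothing is actionable, set "actionable" to false and return an
empty "instructions" array.
```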

Structured Output

Use the LLM's structured output capability:

  • OpenAI: function calling or JSON mode
  • Gemini: structured output schema

Response shape:

```json
{
  "reply": "Sure! I'll create onboarding tasks for non-technical roles.",
  "actionable": true,
  "instructions": [
    "create card \"HR orientation session\"",
    "create card \"Communication tools walkthrough\"",
    "create card \"Company culture introduction\""
  ]
}
```
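Where the provider accepts a schema (OpenAI's JSON mode or function calling, Gemini's structured output), the response shape could be pinned down with something like the following (a sketch; field names mirror the example response):

```json
{
  "type": "object",
  "properties": {
    "reply":        { "type": "string" },
    "actionable":   { "type": "boolean" },
    "instructions": { "type": "array", "items": { "type": "string" } }
  },
  "required": ["reply", "actionable", "instructions"]
}
```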

Fallback Strategy

  • Mock provider: keep static classifier (deterministic for tests)
  • Real providers: use LLM extraction, fall back to static classifier on parse failure
  • Degraded mode: use static classifier when LLM is unavailable
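The selection logic behind these bullets could be sketched as a small helper in ChatService (ResolveInstructions is a hypothetical name, not an existing Taskdeck API; only LlmCompletionResult.Instructions and the static-classifier fallback path come from this issue):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical helper: choose which instruction strings to feed into
// ParseInstructionAsync. ResolveInstructions is an illustrative name.
static IEnumerable<string> ResolveInstructions(
    LlmCompletionResult llmResult, string rawUserMessage)
{
    // Real provider returned structured instructions: use them as-is.
    if (llmResult.Instructions?.Any() == true)
        return llmResult.Instructions;

    // Parse failure, no actionable intent, or LLM unavailable: fall back to
    // the raw message so the existing static LlmIntentClassifier still applies.
    return new[] { rawUserMessage };
}
```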

ChatService Flow Change

```csharp
// Current (ChatService.cs line 232): parse the raw user message directly
ParseInstructionAsync(dto.Content, ...)  // raw user message

// Proposed: prefer LLM-extracted instructions, fall back to the raw message
if (llmResult.Instructions?.Any() == true)
{
    foreach (var instruction in llmResult.Instructions)
        ParseInstructionAsync(instruction, ...);  // LLM-structured instruction
}
else
{
    ParseInstructionAsync(dto.Content, ...);  // fallback to raw
}
```

Affected Files

  • backend/src/Taskdeck.Application/Services/ChatService.cs — flow change
  • backend/src/Taskdeck.Application/Services/OpenAiLlmProvider.cs — system prompt + structured output
  • backend/src/Taskdeck.Application/Services/GeminiLlmProvider.cs — system prompt + structured output
  • backend/src/Taskdeck.Application/Services/LlmCompletionResult.cs — add Instructions field
  • backend/src/Taskdeck.Application/Services/ChatCompletionRequest.cs — add system prompt support

Acceptance Criteria

  • OpenAI provider extracts structured instructions from natural language
  • Gemini provider extracts structured instructions from natural language
  • Mock provider unchanged (static classifier)
  • "can you create onboarding tasks for non-technical people?" produces a proposal
  • Fallback to static classifier when LLM extraction fails
  • Integration tests with mock provider still pass
  • Review-first gate preserved (instructions become proposals, not direct mutations)
