Parent
Part of #570 (Chat-to-proposal NLP gap)
Problem
When a user's request is actionable but ambiguous, the system either:
- Fails to parse it (current behavior), or
- Would generate a best-guess proposal that may not match the user's intent
Examples of ambiguity:
- "create onboarding tasks" — how many? what titles? which column?
- "reorganize the board" — what does reorganize mean?
- "clean up old cards" — what counts as old?
Proposed Changes
When the LLM detects actionable intent but cannot fully resolve it into specific instructions:
- The LLM asks clarifying questions instead of generating instructions
- The user's answers refine the instruction set
- Once the request is sufficiently clear, the LLM generates concrete instructions
- The instructions flow into the existing proposal pipeline
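The clarify-or-generate decision above can be sketched as a small routing function. This is a minimal illustration, assuming the LLM call returns a tagged response (`"clarify"` with questions, or `"instructions"` with a concrete instruction list); the function name, response shape, and `call_llm` parameter are all hypothetical, not the actual implementation.

```python
def handle_message(message, history, call_llm):
    """Route an actionable request: ask for clarification or emit instructions.

    `call_llm` is a hypothetical callable that returns either
    {"type": "clarify", "questions": [...]} when intent is ambiguous, or
    {"type": "instructions", "instructions": [...]} when it is resolved.
    """
    response = call_llm(history + [message])
    if response["type"] == "clarify":
        # Not enough detail yet: surface the LLM's questions to the user.
        return {"action": "ask_user", "questions": response["questions"]}
    # Request is specific enough: hand instructions to the proposal pipeline.
    return {"action": "propose", "instructions": response["instructions"]}
```

Each user answer is appended to `history`, so the loop naturally terminates once the accumulated context lets the LLM emit an `"instructions"` response.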
Example Flow
User: "create onboarding tasks for non-technical people"
LLM: "I can help with that! To create the right tasks, could you tell me:
1. How many tasks do you need?
2. What specific areas? (e.g., HR orientation, tools setup, team intros)
3. Which column should they go in?"
User: "3 tasks: HR orientation, tools walkthrough, team introductions. Put them in Backlog."
LLM: [generates 3 create-card instructions for the Backlog column]
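The instruction set produced at the end of this exchange might look like the following. The field names (`op`, `title`, `column`) are illustrative assumptions; the real schema is whatever the existing proposal pipeline consumes.

```python
# Illustrative payload for the example flow above; field names are assumed.
titles = ["HR orientation", "Tools walkthrough", "Team introductions"]
instructions = [
    {"op": "create_card", "title": t, "column": "Backlog"} for t in titles
]
```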
Depends On
- LLM-assisted instruction extraction
- Board-context-aware prompting (for column suggestions)
Acceptance Criteria