
fix(templates): add feedback classification rule to spec-extraction-workflow#166

Merged
abeltrano merged 2 commits into main from fix/feedback-classification-rule
Apr 2, 2026
Merged

fix(templates): add feedback classification rule to spec-extraction-workflow#166
abeltrano merged 2 commits intomainfrom
fix/feedback-classification-rule

Conversation

@abeltrano abeltrano commented Apr 2, 2026

Summary

Adds a Classification Rule to Phase 3 (Human Clarification Loop) of the `spec-extraction-workflow` template.

Closes #165

Problem

During a real spec-extraction session, user feedback stating "there are no true unit tests" was misclassified as an editorial label fix rather than a systemic finding. The LLM updated the terminology but did not create a formal audit finding until the user explicitly escalated during Phase 5.

Fix

Add a classification rule that requires the LLM to categorize every piece of user feedback as:

  1. Editorial — terminology, formatting, wording (apply silently)
  2. Correction — factual error in the draft (fix and cite)
  3. Finding — a gap, risk, or systemic issue (create formal finding, confirm severity with user)

Default to finding when uncertain. It is better to over-promote user feedback and have the user downgrade it than to silently under-promote domain-expert input.
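The three-way rule and its default can be sketched in code. This is an illustrative Python sketch, not part of the template: the `signals` dict and its keys are hypothetical stand-ins for whatever judgment the agent forms about a piece of feedback.

```python
from enum import Enum

class FeedbackClass(Enum):
    EDITORIAL = "editorial"    # terminology, formatting, wording
    CORRECTION = "correction"  # factual error in the draft
    FINDING = "finding"        # gap, risk, or systemic issue

def classify_feedback(signals: dict) -> FeedbackClass:
    """Classify one piece of user feedback.

    `signals` is a hypothetical mapping of boolean judgments the agent
    has already made, e.g. {"systemic": True} or {"wording_only": True}.
    """
    if signals.get("systemic"):
        return FeedbackClass.FINDING
    if signals.get("factual_error"):
        return FeedbackClass.CORRECTION
    if signals.get("wording_only"):
        return FeedbackClass.EDITORIAL
    # Default to FINDING when uncertain: better to over-promote and let
    # the user downgrade than to silently drop domain-expert input.
    return FeedbackClass.FINDING
```

Note the fall-through: anything that does not clearly match editorial or correction lands in `FINDING`, which is the whole point of the rule.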

Changes

  • templates/spec-extraction-workflow.md: Added Classification Rule section to Phase 3, before the existing Critical Rule.

…orkflow

During a real spec-extraction session, user feedback stating 'there are
no true unit tests' was misclassified as an editorial label fix instead
of a systemic finding. The agent updated terminology but failed to
create a formal audit finding until the user explicitly escalated.

Add a Classification Rule to Phase 3 that requires the agent to
categorize user feedback as editorial, correction, or finding — and
default to finding when uncertain. This prevents under-promotion of
domain-expert input.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings April 2, 2026 20:10

Copilot AI left a comment


Pull request overview

This PR updates the spec-extraction-workflow template to prevent under-escalation of user feedback during Phase 3 (Human Clarification Loop) by introducing an explicit feedback classification rule.

Changes:

  • Adds a “Classification Rule” section in Phase 3 to categorize user feedback as Editorial, Correction, or Finding.
  • Establishes a “default to finding when uncertain” policy to avoid silently downgrading domain-expert feedback.

- Editorial: replace 'apply silently' with acknowledgement requirement
  to align with iterative-refinement protocol's 'justify every change'
  guidance. Editorial changes are still not escalated but are now
  tracked in the walkthrough or revision history.

- Finding: clarify Phase 3 vs Phase 4 responsibility. Phase 3 records
  candidate findings and updates draft specs (RISK-NNN or Open
  Questions). Formal F-NNN entries are deferred to Phase 4. Adds
  explicit severity scale (Critical/High/Medium/Low/Informational).
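The Phase 3 / Phase 4 split described above can be sketched as follows. This is a hypothetical illustration, assuming a candidate-finding record kept during Phase 3 and formal F-NNN numbering deferred to Phase 4; the type and function names are not from the template.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "Critical"
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"
    INFORMATIONAL = "Informational"

@dataclass
class CandidateFinding:
    """Phase 3 record: a candidate finding noted against the draft spec.

    `ref` is the draft artifact it updates (a RISK-NNN entry or an Open
    Questions item); `severity` is confirmed with the user.
    """
    summary: str
    ref: str
    severity: Severity

def promote_to_formal(candidate: CandidateFinding, number: int) -> str:
    # Phase 4 assigns the formal F-NNN identifier; Phase 3 only records
    # the candidate and updates the draft spec.
    return f"F-{number:03d}: [{candidate.severity.value}] {candidate.summary}"
```

Keeping promotion as a separate step mirrors the rule that Phase 3 never mints formal entries itself.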

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@abeltrano abeltrano merged commit c27de08 into main Apr 2, 2026
5 checks passed
@abeltrano abeltrano deleted the fix/feedback-classification-rule branch April 2, 2026 20:25


Development

Successfully merging this pull request may close these issues.

spec-extraction-workflow: user feedback during Phase 3 can be silently under-promoted

3 participants