Bug Description
The `improvement_detect_confusion_patterns` MCP tool returns confusing, misleading output because it mixes two different data sources without distinguishing them.
Current Behavior
The tool returns:

```json
{
  "total_events": 27,
  "resolved_events": 6,
  "unresolved_events": 21,
  "resolution_rate": 100
}
```

This is contradictory - if 6 of 27 events are resolved, the resolution rate should be ~22%, not 100%.
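As a quick sanity check on the reported numbers (using only the figures above):

```python
total_events = 27
resolved_events = 6

# Resolution rate implied by the reported counts
implied_rate = resolved_events / total_events * 100
print(f"{implied_rate:.1f}%")  # 22.2% -- nowhere near the reported 100%
```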
Root Cause
After investigation, the tool appears to combine:
- `improvement_confusion_events` table: 6 events, 6 resolved = 100% resolution ✓
- `user_correction` observations: 21 records (no matching confusion_event records)

The "27 events" figure is 6 + 21, but only the 6 from the dedicated table have resolution tracking.
Expected Behavior
Either:

Option A: Separate the metrics clearly

```json
{
  "confusion_events": {
    "total": 6,
    "resolved": 6,
    "resolution_rate": 100
  },
  "user_corrections_without_confusion_event": {
    "total": 21,
    "note": "These corrections don't have matching confusion_event records"
  }
}
```

Option B: Only report from the dedicated table

```json
{
  "total_events": 6,
  "resolved_events": 6,
  "resolution_rate": 100,
  "note": "From improvement_confusion_events table only"
}
```
Impact
- D10 (Confusion Removal) dimension scoring uses this tool
- Misleading output can cause incorrect dimension scores
- Auditors may trust the numbers without realizing two data sources are being mixed
How Discovered
During an /improve self-audit, we questioned how 6/27 could yield a 100% resolution rate. Direct database queries revealed the two-source conflation.
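The checks looked roughly like this (a sketch assuming a SQLite-style store; the database path, the `resolved` column, and the `observations` table with its `type` column are guesses based on the descriptions above):

```python
import sqlite3

conn = sqlite3.connect("ecosystem.db")  # hypothetical database path

# Dedicated table: 6 events, all resolved
total, resolved = conn.execute(
    "SELECT COUNT(*), SUM(resolved) FROM improvement_confusion_events"
).fetchone()

# Corrections recorded only as observations: 21 rows, no resolution tracking
(corrections,) = conn.execute(
    "SELECT COUNT(*) FROM observations WHERE type = 'user_correction'"
).fetchone()

print(total, resolved, corrections)  # expected: 6 6 21
```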
Files to Check
`ecosystem-mcp/src/nautical_ecosystem/tools/improvement.py` (likely location of the `detect_confusion_patterns` implementation)