Problem
The pending contradictions queue has 113 entries. Every single one is a co-retrieval false positive. Zero real contradictions.
The co-retrieval detector flags any two knowledge nodes that appear in the same search result as a "contradiction." But topical proximity is not semantic conflict:
- "CEO doesn't code" vs "Apollo is the best" — same topic (agent team), not contradictory
- "Rollout status" vs "dir structure" — both project infrastructure, not contradictory
- "Use conv3 as indicator" vs "LOCOMO v9 overnight run" — both eval-related, not contradictory
Data
- 113 total pending contradictions
- 113 co-retrieval (100%)
- 0 real contradictions (0%)
- ~20 unique node pairs, each generating 2-10 duplicate entries
Impact
- User never sees contradictions. They're printed at session start (buried in a 260-line startup dump), but agents correctly ignore them because they're noise.
- The `recall_contradict list` tool returns 113 items — unusable as a review surface.
- Real contradictions (if any existed) would be invisible — drowned in false positives.
Root Cause
The co-retrieval heuristic in `consolidate.py` (around line 1146) queues a contradiction whenever two knowledge nodes are co-retrieved. This is too aggressive — co-occurrence in search results is evidence of topical similarity, not semantic conflict.
Additionally, there's no dedup at queue time. The same `(old_node_id, new_content)` pair can be queued multiple times, which is why ~20 unique pairs inflate to 113 entries.
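A minimal sketch of the failure pattern described above (function and field names are illustrative, not the real `consolidate.py` API): every pair of co-retrieved nodes is queued, with no semantic check and no dedup, so the same pair piles up on every repeat retrieval.

```python
# Hypothetical sketch of the over-aggressive heuristic: any two nodes that
# appear in the same search result are queued as a "contradiction".
def queue_coretrieval_contradictions(search_results, pending_contradictions):
    for i, old in enumerate(search_results):
        for new in search_results[i + 1:]:
            # Topical proximity alone triggers a queue entry, and the same
            # pair is queued again every time the two nodes co-retrieve.
            pending_contradictions.append(
                {"old_node_id": old["id"], "new_content": new["content"]}
            )
    return pending_contradictions
```

With 3 co-retrieved nodes this queues 3 pairs per search; repeating the same search doubles the queue, which is exactly the ~20-pairs-to-113-entries inflation above.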
Proposed Fix
Phase 1: Stop the bleeding
- Dedup at queue time: before `add_pending_contradiction()`, check if the same `(old_node_id, new_node_id/new_content)` pair already exists in `pending_contradictions`. Skip if so.
- Tighten or remove the co-retrieval heuristic. Co-retrieval alone is not sufficient signal. Options:
  - Remove entirely (require explicit `action: "contradict"` from the enrichment LLM only)
  - Add a semantic similarity threshold — only flag if content is similar and makes opposing claims
  - Require keyword overlap + sentiment/polarity reversal
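The dedup check is the cheap part of Phase 1. A sketch, assuming pending entries are plain dicts keyed by `old_node_id` and `new_content` (the real storage schema may differ):

```python
# Sketch of dedup-at-queue-time: skip if this exact pair is already pending.
def add_pending_contradiction(pending, old_node_id, new_content):
    key = (old_node_id, new_content)
    if any((p["old_node_id"], p["new_content"]) == key for p in pending):
        return False  # same pair already queued: skip the duplicate
    pending.append({"old_node_id": old_node_id, "new_content": new_content})
    return True
```

If the queue is persisted to a database, the equivalent fix is a unique constraint on the pair rather than a linear scan.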
Phase 2: Surface real contradictions effectively
Once the signal is real (not noise):
1. Cap session start output: show at most 3 contradictions, prioritized by recency/confidence. Include a count: "3 of N pending — use `recall_contradict list` to see all."
2. Interactive resolution: the agent asks the user about each one individually with full context, not a dump of all at once.
3. CLI command: `synapt contradictions` for out-of-band review.
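The capped session-start output can be sketched as follows, assuming (hypothetically) each pending entry carries `summary`, `confidence`, and `created_at` fields:

```python
# Sketch of the Phase 2 capped summary: top-N by confidence then recency,
# plus a count line pointing at the full review surface.
def session_start_summary(pending, cap=3):
    top = sorted(
        pending,
        key=lambda p: (p.get("confidence", 0), p.get("created_at", "")),
        reverse=True,
    )[:cap]
    lines = [f"- {p['summary']}" for p in top]
    lines.append(
        f"{len(top)} of {len(pending)} pending — use recall_contradict list to see all."
    )
    return "\n".join(lines)
```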
Phase 3: Bulk cleanup
- Dismiss all existing co-retrieval false positives (all 113 current entries).
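The bulk cleanup is a one-shot filter. A sketch, assuming (hypothetically) that each entry is tagged with a `source` field distinguishing co-retrieval detections from LLM-flagged ones; if no such tag exists, the fact that all 113 current entries are co-retrieval means the whole queue can simply be cleared:

```python
# Sketch of the Phase 3 bulk dismissal of co-retrieval false positives.
def dismiss_coretrieval_entries(pending):
    kept = [p for p in pending if p.get("source") != "co_retrieval"]
    return kept, len(pending) - len(kept)
```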