feat: revisit project knowledge system — analysis and refinements for v2 #177

@dean0x

Description

Overview

Before releasing v2, revisit the project knowledge system from scratch. The current implementation (#99) delivered decisions and pitfalls but left gaps worth rethinking holistically rather than patching incrementally.

Current State

Three knowledge mechanisms exist, partially overlapping:

1. .memory/knowledge/decisions.md (ADR-NNN format) ✅

  • 2 entries, append-only, read by Coder/Debug, written by /implement Phase 10
  • Working as designed

2. .memory/knowledge/pitfalls.md (PF-NNN format) ✅

  • 6 entries, read by Coder/Reviewer/Debug, written by /code-review Phase 5 and /debug Phase 6
  • Working as designed

3. .memory/PROJECT-PATTERNS.md (old format, orphaned) ⚠️

  • 81 lines of install/hook/memory patterns from March 2026
  • Nothing generates or reads it anymore — background hook reference removed, no agent references it
  • Stale artifact occupying .memory/ alongside the knowledge system that replaced it
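For concreteness, an entry produced by the pitfalls mechanism above might look like the sketch below. The exact field layout is not reproduced in this issue, so treat the heading and field names as assumptions illustrating the PF-NNN convention, not the canonical template:

```markdown
## PF-007: Hook scripts silently skipped when not executable

- **Context:** hypothetical /debug session on the install flow
- **Pitfall:** executable bits are lost when hooks are copied, so the runner skips them without error
- **Avoidance:** verify permissions after install; fail loudly in the runner instead of skipping
```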

Known Gaps from #99

| Original criterion | Status |
| --- | --- |
| `patterns.md` with P-NNN format + code examples | Never created |
| `PROJECT-PATTERNS.md` relationship resolved | Unresolved — file still exists, orphaned |
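Since `patterns.md` was never created, any P-NNN shape is open for design. One hypothetical sketch of an entry carrying an inline code example (every name and field here is an assumption):

````markdown
## P-003: Guard knowledge reads behind existence checks

- **When:** any agent that loads `.memory/knowledge/` files at startup
- **Pattern:** treat a missing file as "no knowledge yet," not as an error

```sh
[ -f .memory/knowledge/pitfalls.md ] && cat .memory/knowledge/pitfalls.md
```
````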

What to Rethink

This is not just about filling gaps — revisit the design from scratch:

  • Is a three-file model (decisions/patterns/pitfalls) the right shape? Or is two files sufficient? Is there a better taxonomy?
  • What's the right extraction mechanism? Currently knowledge-persistence skill defines the procedure, and each command has a dedicated phase. Is this too ceremony-heavy? Too light?
  • Signal-to-noise for patterns: The old PROJECT-PATTERNS.md accumulated 81 lines of patterns, most of which are either obvious from the code or too generic to be actionable. If we add patterns.md, how do we keep it high-signal?
  • Cross-workflow flow: Currently /implement reads knowledge and writes decisions, /code-review reads and writes pitfalls, /debug reads and writes pitfalls. Is this the right flow? Should /code-review also produce decisions? Should /implement also produce pitfalls?
  • Capacity and pruning: The skill caps at 50 entries per file. No pruning mechanism exists. What happens when we hit 50?
  • Relationship to self-learning system: Learning observations and project knowledge are separate systems. Should they be? The learning system detects workflow patterns; project knowledge captures architectural decisions. Are there synergies?
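On the capacity question, a pruning trigger could be as simple as counting ID-prefixed headings against the cap. A minimal sketch, assuming entries start with headings like `## ADR-001` or `## PF-003` (the real entry format is not specified in this issue):

```python
# Hypothetical sketch: count entries in a knowledge file and flag proximity
# to the 50-entry cap defined by the knowledge-persistence skill.
import re

CAP = 50  # per-file cap from the skill

def count_entries(text: str) -> int:
    """Count ID-prefixed entry headings (ADR-/PF-/P- followed by digits)."""
    return len(re.findall(r"^#+\s+(?:ADR|PF|P)-\d{3}\b", text, flags=re.MULTILINE))

def near_cap(text: str, threshold: float = 0.8) -> bool:
    """True when the file is within 80% of the cap, a cue to prune or consolidate."""
    return count_entries(text) >= CAP * threshold

sample = "## PF-001 Example\n...\n## PF-002 Example\n"
print(count_entries(sample))  # 2
```

A check like this could run in the extraction phase itself, so the decision to prune happens at write time rather than as a separate maintenance task.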

Tasks

  • Analyze the current implementation end-to-end (skill, agents, commands, hooks)
  • Review what PROJECT-PATTERNS.md captured vs what decisions.md/pitfalls.md capture — identify the gap
  • Decide: two-file vs three-file model
  • Decide: keep/remove/rework PROJECT-PATTERNS.md
  • Decide: extraction improvements (if any)
  • Implement refinements
  • Update agents/commands/skills as needed
  • Verify cross-workflow knowledge flow
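The final verification task could be partially automated by asserting that each workflow file still mentions its expected knowledge file after refactoring. A hypothetical sketch; the paths mirror the References section below, and the expected needles are assumptions to adjust to the real repo layout:

```python
# Hypothetical verification sketch for cross-workflow knowledge flow:
# each workflow file should still reference the knowledge file it reads/writes.
from pathlib import Path

EXPECTED = {
    "shared/agents/coder.md": ".memory/knowledge",
    "plugins/devflow-code-review/commands/code-review.md": "pitfalls.md",
    "plugins/devflow-implement/commands/implement.md": "decisions.md",
    "plugins/devflow-debug/commands/debug.md": "pitfalls.md",
}

def missing_references(root: Path) -> list[str]:
    """Return workflow files that are absent or no longer mention their knowledge file."""
    problems = []
    for rel, needle in EXPECTED.items():
        path = root / rel
        if not path.is_file() or needle not in path.read_text():
            problems.append(rel)
    return problems
```

Run against the repo root, an empty result means every listed workflow still participates in the knowledge flow; any entry in the list is a file to re-check after the refinements land.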

References

  • Persistent Project Knowledge (Stateful Agents / Persistent Minds) #99 (original spec, now closed)
  • shared/skills/knowledge-persistence/SKILL.md — extraction procedure
  • .memory/knowledge/ — current knowledge files
  • .memory/PROJECT-PATTERNS.md — orphaned old mechanism
  • shared/agents/coder.md:40-41 — Coder reads knowledge
  • plugins/devflow-code-review/commands/code-review.md:163-168 — Phase 5 records pitfalls
  • plugins/devflow-implement/commands/implement.md:312-319 — Phase 10 records decisions
  • plugins/devflow-debug/commands/debug.md:26-28, 136-139 — loads + records pitfalls
