Summary
Build a persistent, structured knowledge base that accumulates learnings across missions. After each Stand Down, extract patterns and index them. Expose via a pre-mission "Mission Intelligence Brief" that surfaces relevant past patterns before Sailing Orders are issued.
Motivation
Nelson missions currently produce rich data (sailing orders, battle plans, quarterdeck reports, captain's logs, damage reports, stand-down summaries) but that data is captured and then sits dormant. Every mission starts from a blank slate. The stand-down.json schema already captures reusable_patterns.adopt and reusable_patterns.avoid, but these are only surfaced in captain's log prose.
This is the most significant architectural gap identified across all four research tracks.
Detailed Design
Memory Store Structure
.nelson/memory/
- `patterns.json` — accumulated pattern library (adopt/avoid)
- `missions-index.json` — lightweight index of all completed missions
- `standing-order-stats.json` — violation frequency and correlation data
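To make the store concrete, here is one plausible shape for a `patterns.json` entry. The field names beyond `adopt`/`avoid` (which mirror `reusable_patterns`) are illustrative assumptions, not a settled schema:

```python
import json

# Hypothetical patterns.json entry; field names other than "adopt"/"avoid"
# are assumptions made for illustration, not a finalized schema.
pattern_entry = {
    "pattern": "Split auth refactors into per-module ships",
    "kind": "adopt",  # "adopt" or "avoid", mirroring reusable_patterns
    "source_mission": "2026-03-27",
    "context": {
        "type": "implementation",
        "size": "medium",
        "mode": "agent-team",
        "domain_tags": ["auth", "refactor"],
    },
}

print(json.dumps(pattern_entry, indent=2))
```

Storing the mission context alongside each pattern is what lets the brief later filter by type, size, mode, and domain tags.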
Pattern Extraction Pipeline
After each `nelson-data.py stand-down`, automatically:
- Extract `reusable_patterns` from `stand-down.json`
- Extract standing order violation events from `mission-log.json`
- Extract damage control procedure invocations
- Compute mission quality metrics (parallelism ratio, budget accuracy, outcome achievement)
- Append to `patterns.json` with mission context (type, size, mode, domain tags)
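The extraction steps above can be sketched as a post-stand-down hook. This is a minimal sketch, assuming `stand-down.json` carries `reusable_patterns.adopt`/`.avoid` as lists of strings; the library entry shape is an assumption:

```python
import json
from pathlib import Path

def extract_patterns(mission_dir: str, memory_dir: str = ".nelson/memory") -> int:
    """Append a mission's reusable_patterns to the shared pattern library.

    Sketch of the post-stand-down extraction step. Only the
    reusable_patterns.adopt/avoid fields are taken from stand-down.json;
    the patterns.json entry shape is an assumption for illustration.
    """
    stand_down = json.loads(Path(mission_dir, "stand-down.json").read_text())
    reusable = stand_down.get("reusable_patterns", {})

    library_path = Path(memory_dir, "patterns.json")
    library = json.loads(library_path.read_text()) if library_path.exists() else []

    added = 0
    for kind in ("adopt", "avoid"):
        for pattern in reusable.get(kind, []):
            library.append({
                "kind": kind,
                "pattern": pattern,
                "mission": Path(mission_dir).name,
            })
            added += 1

    library_path.parent.mkdir(parents=True, exist_ok=True)
    library_path.write_text(json.dumps(library, indent=2))
    return added
```

Appending (rather than rewriting) keeps the library an accumulating log, which also preserves pattern frequency information for the analytics subcommands.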
Mission Intelligence Brief
New command: `nelson-data.py brief`

```
python3 nelson-data.py brief --mission-dir {mission-dir} --context "auth module refactor"
```
Output (compact, context-efficient):
```
MISSION INTELLIGENCE BRIEF
Based on 25 completed missions:

Relevant patterns:
- Auth refactor missions average 3 ships, 1 relief (similar: mission 2026-03-27)
- Standing order captain-at-the-capstan triggered in 4/25 missions (16%)
- Database migration tasks average 1.2 standing order violations
- subagents mode: 85% success rate for research missions
- agent-team mode: 90% success rate for implementation missions with 4+ tasks

Recommended configuration:
- Mode: subagents (for research) or agent-team (for implementation)
- Expected budget: 150-200K tokens based on similar missions
- Watch for: captain-at-the-capstan (most common violation in this domain)
```
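How the brief decides which patterns are "relevant" to the `--context` string is an open design choice. A naive keyword-overlap ranking would be enough for a first cut; the entry shape (a `pattern` string plus `domain_tags`) is an assumption:

```python
def relevant_patterns(library, context, limit=5):
    """Rank stored patterns by naive keyword overlap with the --context string.

    A sketch only: real relevance matching (embeddings, tag taxonomies) is
    an open design choice, and the entry shape here is an assumption.
    """
    query = set(context.lower().split())

    def score(entry):
        tags = {t.lower() for t in entry.get("domain_tags", [])}
        words = set(entry.get("pattern", "").lower().split())
        return len(query & (tags | words))

    ranked = sorted(library, key=score, reverse=True)
    return [e for e in ranked if score(e) > 0][:limit]
```

Capping the result at a handful of entries keeps the brief compact and context-efficient, matching the output format above.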
Integration with Dynamic Context Injection
The brief can be injected automatically via the dynamic context injection mechanism (issue #5):
```!
python3 nelson-data.py brief --mission-dir $(ls -td .nelson/missions/*/ 2>/dev/null | head -1) 2>/dev/null
```
Cross-Mission Analytics
Extend `nelson-data.py` with analysis subcommands:
- `nelson-data.py analytics --metric success-rate` — outcome achievement over time
- `nelson-data.py analytics --metric standing-orders` — violation frequency and trends
- `nelson-data.py analytics --metric efficiency` — tokens per task, parallelism ratio
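As an example of what these subcommands would compute, here is a sketch of the success-rate metric over `missions-index.json` entries. The entry shape (an `outcome_achieved` boolean and a `mode` field) is an assumption:

```python
def success_rate(missions):
    """Fraction of missions that achieved their stated outcome.

    Sketch of the analytics --metric success-rate computation; the
    missions-index.json entry shape (an "outcome_achieved" boolean and
    a "mode" string) is an assumption for illustration.
    """
    if not missions:
        return 0.0
    return sum(1 for m in missions if m.get("outcome_achieved")) / len(missions)

def success_rate_by_mode(missions):
    """Break the same metric down by execution mode (subagents vs agent-team)."""
    by_mode = {}
    for m in missions:
        by_mode.setdefault(m.get("mode", "unknown"), []).append(m)
    return {mode: success_rate(ms) for mode, ms in by_mode.items()}
```

The per-mode breakdown is what would back brief lines like "subagents mode: 85% success rate for research missions".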
Rationale
- HMS Audacious identified cross-mission memory as High impact / Medium effort and explicitly differentiating
- HMS Daring identified cross-mission eval datasets (from Mastra) as a Tier 2 improvement
- HMS Astute called the lack of feedback loop the most significant architectural gap
- HMS Diamond noted Nelson's structured data layer is unique — this extends it into institutional memory
Effort Estimate
Large
Impact
Very High — transforms Nelson from a stateless framework into a learning system
Dependencies
Requires: Typed handoff packet (#7) for structured mission data
Enables: Mission replay/fork/template (#13), learned standing orders (#14), confidence-weighted trust calibration (#15)