The previous reports were done on copies of this db. I upgraded my main install to 3.12, and these are the results of running consolidate.
Cortex 3.12.0 — consolidate field report
Date: 2026-04-16
Version: cortex 3.12.0 (MCP plugin, alloy Postgres backend)
Store size: 66,277 memories (25,603 episodic / 40,675 semantic, 3,111 protected)
Context: First consolidate run after upgrade to 3.12.0; store includes a recent large claude-mem backfill import.
Summary
consolidate completes with status: "partial" due to one hard failure (emergence), plus three cycles whose output looks suspicious (cls, memify, homeostatic). Separately, post-consolidate recall behavior suggests homeostatic scaling is not effectively re-balancing a bimodal heat distribution.
Total duration: 1037s (17.3 min) on 66k memories.
Issue 1 — emergence stage crash (hard failure, causes status: partial)
```json
"emergence": {
  "error": "AttributeError: module 'mcp_server.core.emergence_tracker' has no attribute 'generate_emergence_report'",
  "duration_ms": 0
}
```
- Stage runs last in the `consolidate` pipeline.
- `AttributeError` on `mcp_server.core.emergence_tracker.generate_emergence_report`.
- `duration_ms: 0` — blows up at import/attribute-lookup, not mid-execution.
- Causes the whole `consolidate` call to report `status: "partial"` with `failed_stages: ["emergence"]` even though 8 preceding cycles succeeded.
Likely cause: a 3.12.0 refactor renamed/moved `generate_emergence_report` but the caller wasn't updated. Fix is one of:
- Restore the attribute on `emergence_tracker`.
- Update the caller to the new name/location.
- Guard with `hasattr()` + skip-with-warning so the emergence stage doesn't poison an otherwise-healthy consolidate.
Repro: `consolidate(decay=true, compress=true, cls=true, memify=true, deep=false)` on a store of ~66k memories.
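If the fix lands as the `hasattr()` guard, it could look something like this — a sketch only; the runner signature and result shape are my assumptions, not Cortex's actual API:

```python
import logging

logger = logging.getLogger(__name__)

def run_emergence_stage(tracker, results):
    """Run the emergence stage only if its entry point exists.

    `tracker` stands in for whatever exposes the report generator; the
    attribute name matches the one from the AttributeError above. A missing
    attribute is recorded as a skipped stage instead of failing the run.
    """
    if not hasattr(tracker, "generate_emergence_report"):
        logger.warning(
            "emergence stage skipped: %r has no generate_emergence_report",
            tracker,
        )
        results["emergence"] = {"skipped": True, "reason": "missing attribute"}
        return results
    results["emergence"] = tracker.generate_emergence_report()
    return results
```

The point of the sketch: a skipped emergence stage degrades to a warning plus a `skipped` marker in the results, so the other 8 cycles can still report a clean run.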
Issue 2 — cls produces zero output across every axis
```json
"cls": {
  "patterns_found": 0,
  "new_semantics_created": 0,
  "skipped_inconsistent": 0,
  "skipped_duplicate": 0,
  "causal_edges_found": 0,
  "episodic_scanned": 2000,
  "duration_ms": 16110
}
```
- Scanned 2,000 episodic memories in 16 s and produced zero patterns, zero new semantics, zero causal edges, and zero skips of any kind (`skipped_inconsistent` and `skipped_duplicate` both 0).
- All-zeros across every axis is suspicious. The store has 25,603 episodic + 40,675 semantic memories — in a corpus this size, finding zero patterns and zero near-duplicates simultaneously is implausible.
- Suggests one of:
- Threshold tightened too aggressively in 3.12.0.
- Scan window isn't selecting the right memories (e.g., only hot ones, only recent ones, only one domain).
- Cycle early-returns before doing work due to a silent config miss, and just reports the zeros.
Ask: what signal confirms cls "actually ran" vs. "early-returned and wrote zeros"?
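One diagnostic signal that would settle it: have the cycle report how many candidates it actually evaluated, and why it exited. Hypothetical fields (`candidates_evaluated`, `exit_reason`) — this is not Cortex's cls implementation, just a shape that makes all-zeros self-explaining:

```python
def cls_cycle(memories, similarity_threshold=0.85):
    """Sketch of a cls-style cycle whose zeros are distinguishable from
    an early return. `candidates_evaluated` == 0 proves no work happened;
    `exit_reason` says why."""
    stats = {
        "patterns_found": 0,
        "candidates_evaluated": 0,   # 0 here means the loop never ran
        "exit_reason": "completed",  # vs. an explicit early-return reason
    }
    if not memories:
        stats["exit_reason"] = "early_return_empty_scan_window"
        return stats
    for memory in memories:
        stats["candidates_evaluated"] += 1
        # ... pattern extraction against similarity_threshold goes here;
        # a genuinely pattern-free corpus still leaves a nonzero count above.
    return stats
```

With that in the payload, "scanned 2,000, evaluated 2,000, found 0" is a real finding; "scanned 2,000, evaluated 0" is a bug report that writes itself.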
Issue 3 — memify strengthened/pruned zero, reweighted thousands
```json
"memify": { "pruned": 0, "strengthened": 0, "reweighted": 2671, "duration_ms": 23769 }
```
- 2,671 reweighted is plausible, but 0 strengthened AND 0 pruned over 24 s is odd. May be intentional gating (reweight-only path) but worth confirming.
Issue 4 — heat distribution bimodality after decay; recall pinned to import bucket
```json
"homeostatic": {
  "scaling_applied": true,
  "health_score": 0.2566,
  "mean_heat": 0.6553,
  "std_heat": 0.319,
  "bimodality": 0.8461,
  "memories_scanned": 66277,
  "duration_ms": 688447
}
```
Symptom at the user level: `recall("hcrdashboard")` and `recall("UniFi dashboard Rust axum accessibility")` return zero project-relevant results. Every top-8 hit is a backfilled claude-mem / homebase / thedotmack / simmer memory with `heat ≥ 0.93`. Hcrdashboard-specific memories either don't surface at all or are buried below the backfill bucket.
Readings consistent with a scaling bug:
- `bimodality: 0.8461` — very high; heat is two distinct populations, not one distribution.
- `health_score: 0.2566` — low.
- `scaling_applied: true` — the cycle claims to have renormalized, yet the bimodality and recall behavior say it didn't pull down the over-heated cluster.
Ask: does scaling_applied: true mean "successfully renormalized the distribution" or just "ran without raising"? If the former, the renormalization isn't doing what its name implies on this store. If the latter, the flag is misleading.
User workaround attempted: running decay=true (62,820 decayed in 249 s) did not meaningfully change recall ordering — the backfill bucket is still at ~0.94 heat after one full cycle.
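For reference, one renormalization that genuinely flattens a bimodal heat distribution is a rank-based remap: every memory keeps its relative order, but the two clumps get spread evenly across (0, 1). This is a sketch of what "renormalize" could mean, not a claim about Cortex's homeostatic implementation:

```python
def renormalize_heat(heats):
    """Rank-based renormalization: map any heat distribution, bimodal or
    not, onto an even spread strictly inside (0, 1), preserving order.

    A backfill bucket pinned at ~0.94 would be forced apart from itself:
    after remapping, its members occupy distinct ranks instead of one clump.
    """
    n = len(heats)
    # Indices sorted by ascending heat; ties get arbitrary but stable ranks.
    order = sorted(range(n), key=lambda i: heats[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = (rank + 1) / (n + 1)  # strictly inside (0, 1)
    return out
```

Under a scheme like this, `bimodality` after scaling should collapse toward zero by construction, which would make `scaling_applied: true` mean what it sounds like.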
Performance breakdown
Total consolidate: 1,037,502 ms (17.3 min) on 66,277 memories.
| Stage | Duration (ms) | % of total |
| --- | --- | --- |
| homeostatic | 688,447 | 66.4 % |
| decay | 249,461 | 24.0 % |
| memify | 23,769 | 2.3 % |
| cascade | 17,921 | 1.7 % |
| plasticity | 17,495 | 1.7 % |
| cls | 16,110 | 1.6 % |
| compression | 4,421 | 0.4 % |
| pruning | 3,235 | 0.3 % |
| emergence | 0 | 0.0 % (crashed) |
Whatever consolidate performance improvements the 3.12.0 release notes advertised, the remaining bottleneck is `homeostatic` — two-thirds of wall time, scanning every memory. 688 s on 66k rows scales poorly.
Ask: is the homeostatic cycle doing work that needs to touch every row on every run, or can it be incrementalized / batched / sampled like plasticity (2,000 sampled) and cls (2,000 scanned)?
What succeeded (for completeness)
| Stage | Counts |
| --- | --- |
| decay | 62,820 memories decayed; 8,069 entities decayed; 66,267 metabolic updates |
| plasticity | 964 LTP / 3,224 LTD / 4,188 edges updated; 676,656 co-access pairs; 2,000 sampled |
| pruning | 2,587 edges pruned; 1,454 entities archived |
| compression | 176 compressed to gist; 3,111 protected skipped; 38,536 semantic skipped |
| cascade | 502 advanced (labile → early_ltp); 1,234 scanned; 730 heartbeats written |
These look healthy. No regressions observed in these stages.
Environment
- macOS (Darwin 25.4.0)
- MCP plugin: cortex 3.12.0
- Storage backend: alloy Postgres (DATABASE_URL-based, not SQLite)
- Vector search: enabled (`has_vector_search: true`)
- Last prior consolidation: 2026-04-15 14:06:55 UTC (~24 h before this run)
- Recent activity: claude-mem backfill import; ~330 new memories since last consolidate
Priority (from my seat)
- P0 — fix `emergence` crash. It's a one-attribute error flipping otherwise-healthy runs to `partial`.
- P1 — confirm or fix `homeostatic` scaling. If recall stays pinned to the import bucket after consolidate, the scaling cycle isn't delivering its promised renormalization, which is the most user-visible symptom.
- P2 — investigate `cls` all-zeros. Could be a real finding, could be silent early-return; either way the result is indistinguishable from "didn't run" and deserves a diagnostic signal.
- P3 — `homeostatic` performance. 66 % of wall time on a single cycle is worth a profile pass.