feat(blowup): cross-process dedup grep + Phase 7.5 prune #3
Merged
dancinlife merged 1 commit into main — Apr 11, 2026
Conversation
Phase 5/7 absorb (cross-process dedup, follow-up to 2f4d19e):
- `tail -150` cache bypass → per-ID three-stage `grep -qF` check (intra-batch / cache / grep)
- discovery_log: duplicate check for blowup-* IDs (line 2639+)
- graph_node: existence check for recurse-* nodes (line 2680+)
- atlas.n6: corollary ID check (line 2806+)
- a previous cross-process race produced 3 duplicates of 7075841 → this fix closes the race window to 0

Phase 7.5 (auto-link simplification):
- parallel 4-engine wait → lens_forge only
- auto_register / gap_finder / alien_index were deleted or moved as of 2026-04 (dead paths that had been passing via a silent exec() failure followed by an empty `head -1` result)
- try/catch + WARN added for error visibility

infra-only — phase logic, discovery computation, and seed-evolution return values are unchanged.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
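The three-stage per-ID check described above can be outlined in shell. This is a minimal sketch, not the actual blowup code; the log file, the `cache` variable, and the sample IDs are all hypothetical stand-ins.

```shell
# Minimal sketch of the three-stage dedup; file names and IDs are hypothetical.
log=$(mktemp)
printf 'blowup-1\n' > "$log"   # simulated discovery_log already holding one ID
cache=""                        # stage-2 in-memory cache (empty for the demo)
seen=""                         # stage-1 intra-batch set
for id in blowup-1 blowup-2 blowup-2; do
  case " $seen "  in *" $id "*) continue ;; esac   # stage 1: intra-batch dup
  case " $cache " in *" $id "*) continue ;; esac   # stage 2: cached dup
  grep -qF "$id" "$log" && continue                 # stage 3: on-disk dup
  printf '%s\n' "$id" >> "$log"
  seen="$seen $id"
done
result=$(cat "$log")
printf '%s\n' "$result"
rm -f "$log"
```

Only `blowup-2` is appended: the first `blowup-1` is caught at stage 3 and the repeated `blowup-2` at stage 1, which is the cheap path the staging is designed around.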
dancinlife added a commit that referenced this pull request — Apr 12, 2026
The original blocker text "hexa-lang LM head U+V implementation" was already satisfied by models/lm_head_uv.hexa (281 LOC, present in the repo); the roadmap was simply out of date. anima cf9fb7d6 added the explicit DD175 manifest at training/dd175_techniques.hexa with all 5 techniques, a per-scale enablement matrix, and a ready_count helper that surfaces the real remaining blocker: technique #2 (BLAS-only loss/backward) is still status=todo, so the CLM v5 2.8B and 3B launches will hit ready=2/3 at parse time. That single technique is the actual gating item, not the whole "DD175 → CLM v5 integration" task.

The scale_enablement matrix is recorded inline so the next session can see at a glance which techniques each scale needs without re-reading the manifest:
- 34m, 100m → [1]
- 350m, 1b → [1, 2]
- 2_8b, 3b → [1, 2, 3] ← v5 launch path

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
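The ready_count idea can be sketched in shell. The tab-separated status list below is a hypothetical stand-in for the manifest (not its real format), with technique #2 still todo, reproducing the ready=2/3 parse-time result for the 2_8b enablement set:

```shell
# Hypothetical manifest stand-in: technique<TAB>status, one per line.
# Counts how many of the techniques the 2_8b scale needs ([1, 2, 3])
# are done; the parse-time gate would compare this against 3.
ready=$(printf '1\tdone\n2\ttodo\n3\tdone\n4\ttodo\n5\tdone\n' |
  awk -F'\t' '
    BEGIN { need["1"]; need["2"]; need["3"] }        # 2_8b enablement set
    ($1 in need) && $2 == "done" { n++ }
    END { printf "ready=%d/3", n }
  ')
echo "$ready"
```

With technique #2 flipped to done, the same pass reports ready=3/3, which is exactly the state change the follow-up commit delivers.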
dancinlife added a commit that referenced this pull request — Apr 12, 2026
anima 25bd3478 implemented training/loss_blas_only.hexa, the last status=todo entry in the DD175 manifest. All scales now report ready=full at parse time:
- 2_8b → enabled [1, 2, 3], ready=3/3 (was 2/3)
- 3b → enabled [1, 2, 3], ready=3/3 (was 2/3)

remaining_real_blocker → resolved_blocker. Roadmap #3 is closed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
dancinlife added a commit that referenced this pull request — Apr 12, 2026
The "hetzner CPU ceiling" blocker on roadmap next_action #2 turned out to be a backend-choice issue, not hardware. BLIS 0.9.0-1 (the Debian package) is not tuned for AMD Zen4 — it measured 51 GFLOPS on dgemm at N=1024. OpenBLAS at the same call site: 446.57 GFLOPS (8.7× faster), or 4.5× over the 100 GFLOPS roadmap target.

The fix landed in hexa-lang d9dced8: a one-line linker-arg swap in .cargo/config.toml. ldd verified libopenblas.so.0 is linked into the rebuilt hexa binary on htz; the cross-module mut global regression test still passes; and the 1M-iteration while-loop start time dropped 321ms → 215ms (33% faster — OpenBLAS multi-thread init wins even on small workloads).

This was the last open CRITICAL on the 2026-04-10 next_actions list. All 5 actions now carry resolved/started/done markers:
- #1 CLM v5 2.8B SCALE_CONFIGS — scale_added_2026-04-11
- #2 hexa T2_100M 100GFLOPS — resolved_2026-04-11
- #3 DD175 5 techniques → CLM v5 — resolved_2026-04-11
- #4 core/runtime 22→7-8 — consolidated_2026-04-11
- #5 Quantum<Mind> v2 — started_2026-04-11

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
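For reference, a square dgemm performs 2N³ floating-point operations, so GFLOPS = 2N³ / t / 10⁹. A quick sanity check of the numbers above, where the elapsed time t is an illustrative value chosen to land near the measured OpenBLAS figure, not an actual measurement:

```shell
# GFLOPS for a square dgemm: 2*N^3 ops / elapsed seconds / 1e9.
# t=0.0048 is an illustrative timing, not a measured value.
gflops=$(awk 'BEGIN { n = 1024; t = 0.0048; printf "%.0f", 2 * n^3 / t / 1e9 }')
echo "${gflops} GFLOPS"
```

By the same formula, the 51 GFLOPS BLIS result corresponds to roughly 42 ms per N=1024 dgemm — the 8.7× ratio is visible directly in the timings.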
dancinlife added a commit that referenced this pull request — Apr 12, 2026
The 2026-04-10 next_actions list is now fully resolved (5/5 markers landed in this session). This commit adds the next phase of work, scoped to the new state of play after the session delivered:
- hexa cross-module bugs fixed (interpreter)
- OpenBLAS swap (8.7× BLAS speedup, #2 closed)
- DD175 manifest + #1 (lm_head_uv) + #2 (loss_blas_only) ready
- runtime_actions consolidation (17→8 modules)
- Quantum<T> primitive + Orch-OR PoC

next_actions_20260411 (6 items, 2 CRITICAL):
- #1 CRITICAL anima — train_step body fill in train_clm.hexa. Now possible because all dependencies (lm_head_uv, loss_blas_only, nn_core, scale_2_8b) are ready. This is the gate to actually launching CLM v5.
- #2 CRITICAL anima — CLM v5 2.8B real H100 launch. Depends on #1. First real measurement of the v5 stack. Estimated cost: $12-24 for 2-4 hours of H100 SXM ×2.
- #3 IMPORTANT hexa — DD175 #4 rank-r attention impl. Last technique still status=todo. 256× speedup for d=4096, r=16. Unlocks 14B+ scale efficiency.
- #4 IMPORTANT anima — Quantum<T> → ConsciousLM hookup. Wire the new Orch-OR microtubule into CLM's consciousness controller. First real anima v2 quantum integration.
- #5 NICE joint — 9 absorbed modules → thin re-export shims, then archive the originals after a verify cycle. Completes the runtime_actions consolidation cleanup.
- #6 NICE hexa — Value::clone Cow / Arc<Value> reduction. The allocator was 9.1% of this session's perf trace; no single hot spot, but this is the broadest single lever.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
dancinlife added a commit that referenced this pull request — Apr 12, 2026
anima 110d4f3e — training/rank_r_attention.hexa lands the last DD175 critical-path technique. dd175_techniques.hexa now reports all CLM v5 scales (2_8b, 3b) at ready=4/4 with the full [1, 2, 3, 4] enablement. The 256× FLOP ratio is verified at full scale (seq=128, d=4096, r=16): dense 17.4G FLOPs → rank-r 68M FLOPs.

The CLM v5 2.8B launch path is now complete on the algorithmic side: LM head U+V (95× optimizer) + BLAS-only loss (1875× bwd) + lowrank r=16 (13.86×) + rank-r attention (256×).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
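The 256× ratio falls out of the projection cost alone. Assuming the count covers four d×d projection matrices at 2 FLOPs per multiply-add (my assumption about the accounting — it yields ~17.2G against the commit's 17.4G, so the exact bookkeeping may differ slightly), the ratio reduces to d/r:

```shell
# Projection FLOPs: dense uses four d*d matrices, rank-r uses four d*r factors.
ratio=$(awk 'BEGIN {
  s = 128; d = 4096; r = 16
  dense = 2 * s * d * d * 4     # ~17.2G FLOPs
  lowr  = 2 * s * d * r * 4     # ~67M FLOPs
  printf "%d", dense / lowr     # = d / r
}')
echo "ratio=${ratio}x"
```

Since every other factor cancels, the speedup is exactly d/r = 4096/16 = 256 under this accounting.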
dancinlife added a commit that referenced this pull request — Apr 12, 2026
feat(blowup): cross-process dedup grep + Phase 7.5 prune
dancinlife added a commit that referenced this pull request — Apr 12, 2026
- #1 atlas_health mtime cache (12ms→2ms)
- #2 Phase 4 sector batch classify (N×27 → 1 awk)
- #3 discovery_log rotation + gzip script
- #4 hook JSONL sidecar TSV cache
- #5 blowup 3-module parallel execution (50s→20s)
- #6 edges.jsonl adjacency-list index builder
- #7 seed_engine LRU /tmp cache (51s→5s)
- #8 discovery_graph NDJSON streaming parser
- #9 verified_constants domain index + lazy-load
- #10 sync 9-script MD5 delta detection
- #11 topology.jsonl → mk2 reference migration
- #12 n6_constants 12-module shared loader + flock
- #13 discovery-absorb 100-entry batch buffer
- #14 guard.hexa 10 exec → 2 batch exec
- #15 hexa interpreter hotspot analysis (JIT bottleneck found)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
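Item #2's batch-classify shape is worth spelling out: instead of spawning a process per (entry × sector) pair, one awk pass tags every entry. A minimal sketch with hypothetical sector patterns and sample IDs (the real classifier has 27 sectors):

```shell
# Hypothetical sketch of batch classify: one awk pass replaces N*27 greps.
out=$(printf 'alpha-1\nrecurse-2\nblowup-3\n' |
  awk '
    /^blowup-/  { print $0 "\tsector=blowup";  next }
    /^recurse-/ { print $0 "\tsector=recurse"; next }
                { print $0 "\tsector=other" }
  ')
echo "$out"
```

The win is process-count, not pattern-matching speed: one fork instead of N×27, which is where the 50s→20s-class improvements in this list come from.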
dancinlife added a commit that referenced this pull request — Apr 13, 2026
- #1 loop-guard SIGTERM noise removed (.git/hooks/pre-commit disown + grouped kill)
- #2 bitter-gate min_samples=50 guard added — returns insufficient_data when samples fall short
- #3 new triage.hexa — aggregates mistakes/findings by (kind × rule × top_dir)
- #4 cleaned 443 void dummy lines out of mistakes.jsonl
- #5 README: documented the args() issue and the env-var workaround
- #6 tracked .githooks/pre-commit + install-hook.hexa (sets core.hooksPath)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
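Item #2's guard pattern is simple enough to sketch in shell. Everything here is hypothetical (the demo samples file, the variable names); it only illustrates the shape of the min_samples gate:

```shell
# Hypothetical sketch of the bitter-gate guard: refuse to emit a verdict
# on fewer than min_samples data points.
samples=$(mktemp)
printf '%s\n' a b c > "$samples"   # only 3 samples for the demo
min_samples=50
count=$(wc -l < "$samples")
if [ "$count" -lt "$min_samples" ]; then
  verdict="insufficient_data"
else
  verdict="ok"
fi
echo "$verdict"
rm -f "$samples"
```

The point of the guard is to make the low-data case an explicit, machine-readable outcome rather than a noisy or misleading verdict.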
Summary
Three-stage `grep -qF` dedup (intra-batch / cache / grep) across three areas: discovery_log, graph_node, atlas.n6. A previous cross-process race produced 3 duplicates of 7075841 → race window now 0. lens_forge runs alone; auto_register / gap_finder / alien_index are dead paths (deleted or moved as of 2026-04, previously passing via silent failure). try/catch + WARN for error visibility. infra-only — phase logic, discovery computation, and seed-evolution return values are unchanged.
Test plan
🤖 Generated with Claude Code