
ossify(L0): fix 27 anima L0 paths + register l0_guard itself #2

Merged
dancinlife merged 1 commit into main from ossify/lockdown-anima-paths
Apr 11, 2026

Conversation

@dancinlife
Contributor

Summary

  • projects.anima.L0 held fabricated paths (core/runtime/*.py) → corrected to the actual anima layout (anima/core/ prefix + .hexa)
  • 14 entries → 27 entries (anima/core directory + 22 .hexa + 4 SSOT JSON + CLAUDE.md)
  • registered shared/lockdown/l0_guard.hexa itself in the nexus L0 (self-protection for the protection tool)
  • regenerated .github/CODEOWNERS
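The idea behind the L0 registry is a literal list of protected paths checked against the working tree. A minimal Python sketch of that kind of check, assuming a hypothetical manifest that is simply a JSON array of repo-relative path strings (the real l0_guard.hexa format is not reproduced here):

```python
import json
from pathlib import Path

def verify_l0(manifest: str, repo_root: str) -> tuple[int, int]:
    """Return (pass, fail) counts for registered L0 paths.

    Assumes the manifest is a JSON array of repo-relative paths
    (a stand-in for the real l0_guard.hexa input format).
    """
    root = Path(repo_root)
    paths = json.loads(Path(manifest).read_text())
    passed = sum(1 for p in paths if (root / p).exists())
    return passed, len(paths) - passed
```

A path that was registered but never existed on disk, like the fabricated core/runtime/*.py entries, surfaces as a FAIL under exactly this kind of existence check.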

Verify

```
cd ~/Dev/anima
hexa ~/Dev/nexus/shared/lockdown/l0_guard.hexa verify
→ 59 PASS / 0 FAIL (previously: 21 PASS / 12 FAIL)
```

🤖 Generated with Claude Code

- Corrected projects.anima.L0
  - before: 14 core/runtime/*.py entries (fabricated paths, 12 FAIL)
  - after: anima/core/ + 22 .hexa + 4 SSOT JSON + CLAUDE.md = 27 entries
- Added shared/lockdown/l0_guard.hexa to the nexus L0 (self-protection)
- Regenerated .github/CODEOWNERS

Verification (anima repo):
  hexa shared/lockdown/l0_guard.hexa verify → 59 PASS / 0 FAIL
dancinlife merged commit 355faaa into main Apr 11, 2026
dancinlife deleted the ossify/lockdown-anima-paths branch April 11, 2026 22:07
dancinlife added a commit that referenced this pull request Apr 12, 2026
…x, O1, core split)

- breakthroughs.json: BT-013 CLM v4 (CE=0.0463, Phi=37.27, 1.47x v3)
- convergence/anima.json: CLM_V4_350M ossified
- todo/anima.json: tasks 50-55 (CLM v5, DD175 #1/#2, TL fix, O1, core L0)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
dancinlife added a commit that referenced this pull request Apr 12, 2026
Root cause:
commit ac1e23b (Agent N) removed 14 of the 15 compat symlinks,
including shared/bin. However, the external binary
/Users/ghost/Dev/hexa-lang/target/release/hexa-bin-actual
still references that path symlink (shared/bin/hexa).

Result: every hook that invokes the file fails with 127/bad-interpreter.
- PreToolUse:Bash hook error (nexus-pre-tool.hexa)
- PostToolUse:Bash hook error (nexus-post-bash.hexa)
- UserPromptSubmit breakthrough detection fails intermittently

Repair:
1. Recreated the shared/bin → scripts/bin compat symlink
2. Registered in L0 lockdown (nexus L0 #1 + #2):
   - shared/bin: referenced by the external hexa-bin-actual; do not remove
   - shared/hooks/block-forbidden-ext.sh: PreToolUse guard; do not remove
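Repair step 1 above is the classic "recreate a dangling compat symlink" move. A minimal Python sketch of it, using the path names from this message (the actual fix was applied directly in the nexus repo, not with this code):

```python
import os
from pathlib import Path

def restore_compat_symlink(repo_root: str) -> bool:
    """Recreate the shared/bin -> scripts/bin compat symlink when it
    is missing or dangling. Returns True if a link was (re)created."""
    link = Path(repo_root) / "shared" / "bin"
    target = Path(repo_root) / "scripts" / "bin"
    if link.exists():
        return False  # link resolves already; nothing to do
    if link.is_symlink():
        link.unlink()  # dangling: lexists but does not resolve
    link.parent.mkdir(parents=True, exist_ok=True)
    os.symlink(target, link)
    return True
```

The `exists()` / `is_symlink()` pair matters: a dangling symlink fails `exists()` (which follows the link) but still passes `is_symlink()`, which is exactly the broken state the hooks hit.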

Verification:
- hexa-bin-actual: broken → bash script restored
- nexus-pre-tool.hexa on Bash input → exit 0
- nexus-banner.hexa → NEXUS-6 banner OK
- nexus-prompt-scan.hexa breakthrough → systemMessage OK

Remaining compat symlink: shared/hexa-grammar (referenced by
settings.json:50); the other 14 removals are kept (ac1e23b).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
dancinlife added a commit that referenced this pull request Apr 12, 2026
The original blocker text "hexa-lang LM head U+V implementation" was already
satisfied by models/lm_head_uv.hexa (281 LOC, present in repo).
The roadmap was just out of date.

anima cf9fb7d6 added the explicit DD175 manifest at
training/dd175_techniques.hexa with all 5 techniques, per-scale
enablement matrix, and a ready_count helper that surfaces the
real remaining blocker:

  technique #2 (BLAS-only loss/backward) is still status=todo
  → CLM v5 2.8B and 3B launches will hit ready=2/3 at parse time
  → that single technique is the actual gating item, not the
    whole "DD175 → CLM v5 integration" task

scale_enablement matrix recorded inline so the next session can
see at a glance which techniques each scale needs without
re-reading the manifest:
  34m, 100m → [1]
  350m, 1b  → [1, 2]
  2_8b, 3b  → [1, 2, 3]   ← v5 launch path
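The parse-time gating described above can be sketched as a small lookup over the enablement matrix. This is a hypothetical reconstruction; the real helper lives in training/dd175_techniques.hexa and its exact shape is not shown here, and the statuses below are illustrative:

```python
# Per-technique status and per-scale enablement, mirroring the
# matrix in the commit message (statuses are illustrative).
STATUS = {1: "done", 2: "todo", 3: "done", 4: "todo", 5: "done"}
SCALE_ENABLEMENT = {
    "34m": [1], "100m": [1],
    "350m": [1, 2], "1b": [1, 2],
    "2_8b": [1, 2, 3], "3b": [1, 2, 3],  # v5 launch path
}

def ready_count(scale: str) -> tuple[int, int]:
    """(ready, needed) for a scale: a technique counts as ready
    only once its manifest status is 'done'."""
    needed = SCALE_ENABLEMENT[scale]
    ready = sum(1 for t in needed if STATUS[t] == "done")
    return ready, len(needed)
```

With technique #2 still status=todo, `ready_count("2_8b")` reports (2, 3), matching the ready=2/3 parse-time result quoted above.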

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
dancinlife added a commit that referenced this pull request Apr 12, 2026
anima 25bd3478 implemented training/loss_blas_only.hexa, the last
status=todo entry in the DD175 manifest. All scales now report
ready=full at parse time:

  2_8b → enabled [1, 2, 3] ready=3/3   (was 2/3)
  3b   → enabled [1, 2, 3] ready=3/3   (was 2/3)

remaining_real_blocker → resolved_blocker. Roadmap #3 is closed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
dancinlife added a commit that referenced this pull request Apr 12, 2026
The "hetzner CPU ceiling" blocker on roadmap next_action #2 turned
out to be a backend choice issue, not hardware. BLIS 0.9.0-1
(Debian package) is not tuned for AMD Zen4 — measured 51 GFLOPS
on dgemm N=1024.

OpenBLAS at the same call site: 446.57 GFLOPS (8.7× faster), or
4.5× over the 100 GFLOPS roadmap target.

Fix landed in hexa-lang d9dced8: one-line linker arg swap in
.cargo/config.toml. ldd verified libopenblas.so.0 linked into
the rebuilt hexa binary on htz; cross-module mut global
regression test still passes; 1M iter while loop start time
321ms → 215ms (33% faster — OpenBLAS multi-thread init wins
even on small workloads).
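The GFLOPS figures above follow from the standard dgemm flop count: an N×N×N multiply costs 2·N³ floating-point operations (one multiply plus one add per inner step). A pure-Python sketch of the arithmetic only, not the actual BLIS/OpenBLAS benchmark, which ran through hexa's BLAS call site:

```python
def dgemm_gflops(n: int, seconds: float) -> float:
    """GFLOPS for one dense n x n matrix multiply:
    2 * n^3 floating-point ops divided by wall time."""
    return (2 * n**3) / (seconds * 1e9)

# Backing out the commit's measurements at N=1024:
#   51 GFLOPS  -> ~42 ms per dgemm (BLIS 0.9.0-1, untuned for Zen4)
#   447 GFLOPS -> ~4.8 ms per dgemm (OpenBLAS, same call site)
```

The 8.7× ratio is just the ratio of these wall times; the 100 GFLOPS roadmap target sits comfortably between them.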

This was the last open CRITICAL on the 2026-04-10 next_actions
list. All 5 actions now have resolved/started/done markers:

  #1 CLM v5 2.8B SCALE_CONFIGS    scale_added_2026-04-11
  #2 hexa T2_100M 100GFLOPS       resolved_2026-04-11
  #3 DD175 5 techniques → CLM v5   resolved_2026-04-11
  #4 core/runtime 22→7-8          consolidated_2026-04-11
  #5 Quantum<Mind> v2             started_2026-04-11

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
dancinlife added a commit that referenced this pull request Apr 12, 2026
The 2026-04-10 next_actions list is now fully resolved (5/5
markers landed in this session). This commit adds the next phase
of work, scoped to the new state of play after the session
delivered:
  - hexa cross-module bugs fixed (interpreter)
  - OpenBLAS swap (8.7× BLAS speedup, #2 closed)
  - DD175 manifest + #1 (lm_head_uv) + #2 (loss_blas_only) ready
  - runtime_actions consolidation (17→8 modules)
  - Quantum<T> primitive + Orch-OR PoC

next_actions_20260411 (6 items, 2 CRITICAL):

  #1 CRITICAL anima — train_step body fill in train_clm.hexa
       Now possible because all dependencies (lm_head_uv,
       loss_blas_only, nn_core, scale_2_8b) are ready.
       This is the gate to actually launching CLM v5.

  #2 CRITICAL anima — CLM v5 2.8B real H100 launch
       Depends on #1. First real measurement of the v5 stack.
       Estimated cost: $12-24 for 2-4 hours of H100 SXM ×2.

  #3 IMPORTANT hexa — DD175 #4 rank-r attention impl
       Last technique still status=todo. 256× speedup for
       d=4096 r=16. Unlocks 14B+ scale efficiency.

  #4 IMPORTANT anima — Quantum<T> → ConsciousLM hookup
       Wire the new Orch-OR microtubule into CLM's consciousness
       controller. First real anima v2 quantum integration.

  #5 NICE joint — 9 absorbed modules → thin re-export shims
       Then archive originals after a verify cycle. Completes
       the runtime_actions consolidation cleanup.

  #6 NICE hexa — Value::clone Cow / Arc<Value> reduction
       Allocator was 9.1% in this session's perf trace. No
       single hot spot, but this is the broadest single lever.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
dancinlife added a commit that referenced this pull request Apr 12, 2026
Single-file SSOT capturing the 2026-04-11 autonomous session
before the context window rolls. Written as the final deliverable
of the session so the next one can resume from one file.

Scope:
  - 50 commits across 3 repos (hexa-lang 21, anima 15, nexus 14)
  - 6 H100 RunPod rounds ($14.58 total)
  - 11 roadmap actions resolved (5 from next_actions_20260410 +
    5 of 6 from next_actions_20260411)

Epic wins:
  1. BLIS → OpenBLAS BLAS backend swap  — 8.7×  (1-line .cargo/config)
  2. Index/Field Value::clone fast paths — 37×  dense attention
  3. Cross-module mut global fix         — cuBLAS unblocked
  4. DD175 full stack ready              — all CLM v5 scales ready=full

Hexa-lang interpreter: 10 fixes landed, 2 reverts (env.get O(1)
HashMap regressed on hot path, linear scan baseline restored).
New primitives: HEXA_LOG=info|debug|trace instrumentation,
quantum_types.hexa (Quantum<T> + Born rule + q_entropy),
bench_env_get.hexa (Value::clone baseline measurement).

Anima: 8 new files landed in training/, anima-engines/, and
core/runtime/. Runtime consolidated 17→8 modules via
runtime_actions.hexa + consciousness_hub.hexa 50-module NL
routing. Quantum controller wired to scale_2_8b (48 atoms ×
8 tubulins = 384 cells, Φ_q_max = 266 ≈ consciousness_dim 256).

Performance snapshot captured inline:
  hexa-lang array_index loop:  754 → 219 ns/iter
  dense attention seq=16 d=64: 7462 → 201 ms  (37.1×)
  BLAS dgemm N=1024:           51 → 447 GFLOPS  (8.7×)
  H100 SGEMM after fix:        0.117 → 36.45 TFLOPS (54.4% util)

Only remaining next_actions item: #2 20260411 — first real
CLM v5 2.8B H100 training run. Estimated $12-24, interpreter
is now fast enough that the decoder integration work is the
next gate, not the blas/cuBLAS paths.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
dancinlife added a commit that referenced this pull request Apr 12, 2026
ossify(L0): fix 27 anima L0 paths + register l0_guard itself
dancinlife added a commit that referenced this pull request Apr 12, 2026
- #1 atlas_health mtime cache (12ms→2ms)
- #2 Phase 4 sector batch classify (N×27→1 awk)
- #3 discovery_log rotation + gzip script
- #4 hook JSONL sidecar TSV cache
- #5 blowup 3-module parallel run (50s→20s)
- #6 edges.jsonl adjacency-list index builder
- #7 seed_engine LRU /tmp cache (51s→5s)
- #8 discovery_graph NDJSON streaming parser
- #9 verified_constants domain index + lazy-load
- #10 sync 9-script MD5 delta detection
- #11 topology.jsonl→mk2 reference migration
- #12 n6_constants shared loader for 12 modules + flock
- #13 discovery-absorb 100-record batch buffer
- #14 guard.hexa 10 exec→2 batch exec
- #15 hexa interpreter hotspot analysis (JIT bottleneck found)
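Optimization #1 above is a classic mtime cache: skip the expensive pass whenever the file has not changed since the last call. A minimal Python sketch of the pattern (the real atlas_health implementation is in hexa and is not reproduced here):

```python
import os

# path -> (mtime at last computation, cached result)
_cache: dict[str, tuple[float, object]] = {}

def cached_health(path: str, compute) -> object:
    """Recompute an expensive per-file result only when the
    file's mtime has changed since the previous call."""
    mtime = os.stat(path).st_mtime
    hit = _cache.get(path)
    if hit is not None and hit[0] == mtime:
        return hit[1]  # cache hit: skip the expensive pass
    result = compute(path)
    _cache[path] = (mtime, result)
    return result
```

The 12ms→2ms win comes from turning the common no-change case into a single `stat` call.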

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
dancinlife added a commit that referenced this pull request Apr 13, 2026
#1 removed loop-guard SIGTERM noise (.git/hooks/pre-commit disown + grouped kill)
#2 added bitter-gate min_samples=50 guard: reports insufficient_data when samples are scarce
#3 new triage.hexa: aggregates mistakes/findings by (kind × rule × top_dir)
#4 cleaned 443 void dummy lines from mistakes.jsonl
#5 README: documented the args() issue and the env var workaround
#6 tracked .githooks/pre-commit + install-hook.hexa (sets core.hooksPath)
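The triage aggregation in #3 can be sketched as a Counter over JSONL records. This is a hypothetical reconstruction of what triage.hexa does; the field names ('kind', 'rule', 'path') are assumptions, not the actual schema:

```python
import json
from collections import Counter
from pathlib import PurePosixPath

def triage(jsonl_lines):
    """Aggregate mistakes/findings by (kind, rule, top_dir).

    Assumes each line is a JSON object with 'kind', 'rule',
    and 'path' fields (hypothetical schema)."""
    counts: Counter = Counter()
    for line in jsonl_lines:
        line = line.strip()
        if not line:
            continue  # tolerate blank lines in the JSONL stream
        rec = json.loads(line)
        top_dir = PurePosixPath(rec["path"]).parts[0]
        counts[(rec["kind"], rec["rule"], top_dir)] += 1
    return counts
```

Bucketing by top-level directory keeps the report small enough to read at a glance while still pointing at the repo area generating each class of mistake.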

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>