
feat(cli): larql edit + apply-patch — rank-1 fact editing (Phase B of RFC-0001)#7

Merged

mikeumus merged 1 commit into main from feat/edit-command-v2 on Apr 18, 2026

Conversation

@mikeumus

Phase B rebased directly onto main after Phase A merge. Supersedes #4 (auto-closed when its base branch was deleted on #3 merge). Cherry-picks e4d5eed from feat/edit-command.

Same content as #4: `larql edit` (rank-1 editor) + `larql apply-patch` + `LastPositionInjectingFfn` + the binary patch file format. See #4 for the full description.

Compile-checked against current main.

… RFC-0001)

Implements Phase B of RFC-0001 (#2): single-fact rank-1 editor with
portable patch file format. Builds on Phase A's LastPositionAblatingFfn
(#3) and adds the symmetric LastPositionInjectingFfn for scale search.

### New library module: `larql-inference/src/edit.rs`
- `EditPatch` struct (serializable via serde)
- `compute_rank1(k, d, scale, layer, provenance) -> EditPatch`
- `write_patch(path, &patch)` / `read_patch(path) -> EditPatch` with a
  simple binary format: LQPATCH magic + JSON meta + little-endian f32
  vectors for d and k_norm. ~55 KB for Gemma 4 4B.
- `apply_patch(&mut ModelWeights, &EditPatch)`: installs the rank-1
  outer product into `down_proj.weight` in place, handling both
  `[hidden, intermediate]` and `[intermediate, hidden]` layouts.
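The in-place rank-1 install can be sketched with plain row-major `Vec<f32>` matrices. This is a hypothetical standalone helper, not the crate's actual `apply_patch` (which operates on `ModelWeights` tensors); it only illustrates the outer-product update and the two-layout handling described above:

```rust
/// Sketch: W += scale * (d ⊗ k_norm), dispatching on which layout the
/// down_proj weight uses. `w` is row-major with shape [rows, cols].
fn apply_rank1(
    w: &mut [f32],
    rows: usize,
    cols: usize,
    d: &[f32],      // length = hidden
    k_norm: &[f32], // length = intermediate
    scale: f32,
) {
    if rows == d.len() && cols == k_norm.len() {
        // [hidden, intermediate] layout: W[i][j] += scale * d[i] * k[j]
        for i in 0..rows {
            for j in 0..cols {
                w[i * cols + j] += scale * d[i] * k_norm[j];
            }
        }
    } else if rows == k_norm.len() && cols == d.len() {
        // [intermediate, hidden] layout: transposed outer product
        for i in 0..rows {
            for j in 0..cols {
                w[i * cols + j] += scale * k_norm[i] * d[j];
            }
        }
    } else {
        panic!("patch dimensions do not match weight shape");
    }
}
```

Because the update is a pure addition, applying the same patch with `-scale` restores the original weights exactly (up to f32 rounding), which is what makes `--reverse` cheap.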

### New FFN wrapper: `larql-inference/src/ffn/injecting.rs`
- `LastPositionInjectingFfn` — adds a fixed delta vector to the inner
  backend's last-row output at one target layer. Symmetric to the
  ablating wrapper from PR #3. Used for auto-scale search.
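The injection itself reduces to adding a (possibly scaled) delta to the final row of the layer's output. A minimal sketch over a flat `[seq_len, hidden]` buffer, with illustrative names rather than the wrapper's real API (whether the wrapper pre-scales the delta is an assumption here):

```rust
/// Sketch: add `scale * delta` to the last row of a row-major
/// [seq_len, hidden] FFN output, leaving all earlier positions untouched.
fn inject_last_row(out: &mut [f32], seq_len: usize, hidden: usize, delta: &[f32], scale: f32) {
    assert_eq!(out.len(), seq_len * hidden);
    assert_eq!(delta.len(), hidden);
    let start = (seq_len - 1) * hidden;
    for (o, d) in out[start..start + hidden].iter_mut().zip(delta) {
        *o += scale * *d;
    }
}
```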

### New CLI commands
- `larql edit <model> --src "..." --tgt "..." --new-token " Tokyo" --output f2t.lqpatch`
  Runs Phase A crown discovery (or accepts `--layer`), captures k at the
  crown layer for both prompts, computes d = W_down @ (k_tgt - k_src),
  linearly searches the scales [0.5, 1, 1.5, 2, 2.5, 3, 4] for the minimum
  one that flips the source prompt's top-1 prediction to `--new-token`,
  and emits the patch.
- `larql apply-patch <model> --patch f2t.lqpatch --prompt "..."`
  Installs one or more patches into the loaded in-memory weights without
  touching the model file on disk, and optionally runs a test prediction.
  Supports `--reverse` to subtract a patch, verifying reversibility.
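The scale search described for `larql edit` is a plain first-hit linear scan. A sketch under the assumption that a forward pass is abstracted behind a closure (`predict_top1` is a stand-in, not a real function in the crate):

```rust
/// Sketch: try candidate scales in ascending order and return the first
/// one whose top-1 prediction equals the target token, or None if no
/// candidate flips the prediction.
fn find_min_scale(
    candidates: &[f32],
    target: &str,
    predict_top1: impl Fn(f32) -> String,
) -> Option<f32> {
    candidates
        .iter()
        .copied()
        .find(|&scale| predict_top1(scale) == target)
}
```

Scanning ascending and stopping at the first flip keeps the perturbation as small as possible, which matters for specificity (the 0.9% weight perturbation figure cited below).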

### Supporting change
- Added `InferenceModel::weights_mut()` accessor so apply-patch can
  mutate the in-memory weight map without reloading.

Methodology validated in Python across Divinci-AI/server
notebooks/CHAPTER_20_HONEY.md (Phase 140c: France→Tokyo with 11/11
specificity at 0.9% weight perturbation) and CHAPTER_18_THE_EDIT.md
(Phase 130 scale search). The Rust port preserves the same math.

Compile-checked with `cargo check --package larql-cli`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@mikeumus mikeumus merged commit 7c597f8 into main Apr 18, 2026
@mikeumus mikeumus deleted the feat/edit-command-v2 branch April 18, 2026 00:01
