
refactor(cognition,#1295): generate_recipe PR-1 — Rust types+prompt+parser+validator slice #1298

Merged
joelteply merged 1 commit into canary from refactor/generate-recipe-rust-1295 on May 16, 2026

Conversation

@joelteply
Contributor

Summary

Carrier-types choice

Unlike #1289's prompt (pure data → pure prompt), RecipeGenerateServerCommand's TS prompt depends on runtime registry state: `TemplateRegistry.list()` for the available-templates block + `RecipeLoader.getInstance().getAllRecipes()` for collision-avoidance hints. Carrying this state across the IPC boundary as explicit `RecipeGenerationRequest` fields (rather than as Rust-side global state) keeps the prompt builder + validator pure, testable, and parity-checkable.
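The carrier-types shape can be sketched in dependency-free Rust (field names follow this PR's description; the real structs additionally derive serde camelCase + ts-rs, which are omitted here so the sketch stands alone):

```rust
// Runtime registry state, captured TS-side and carried across IPC as
// plain request fields rather than Rust-side global state.
#[derive(Debug, Clone)]
struct RecipeTemplateInfo {
    name: String,
    description: String,
}

#[derive(Debug, Clone)]
struct RecipeGenerationRequest {
    description: String,
    available_templates: Vec<RecipeTemplateInfo>, // from TemplateRegistry.list()
    existing_recipe_ids: Vec<String>,             // from RecipeLoader recipe IDs
}

// Because the registry state arrives as data, the prompt builder stays
// a pure function of its input — trivially testable and parity-checkable.
fn build_available_templates_block(req: &RecipeGenerationRequest) -> String {
    req.available_templates
        .iter()
        .map(|t| format!("- {}: {}", t.name, t.description))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let req = RecipeGenerationRequest {
        description: "summarize a document".into(),
        available_templates: vec![RecipeTemplateInfo {
            name: "rag-basic".into(),
            description: "basic RAG template".into(),
        }],
        existing_recipe_ids: vec!["doc-summary".into()],
    };
    println!("{}", build_available_templates_block(&req));
}
```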

What's in this PR (4 modules)

  • types.rs — RecipeTemplateInfo, RecipeGenerateHints, RecipeGenerationRequest, RecipeGenerationResponse, RecipeDefinitionShape (5 ts-rs camelCase exports). 5 round-trip tests.
  • prompt.rs — `build_recipe_system_prompt` + `build_recipe_user_prompt` mirror TS byte-for-byte (schema block, available-templates list, standard-pipeline pattern, rules, hints rendering). 8 tests covering anchors + 0/N templates + every hint combination.
  • parser.rs — `parse_recipe_from_ai_response` extracts the JSON envelope via the same `/\{[\s\S]*\}/` regex as TS, tolerates prose preamble + markdown fences. Typed `ParseError::NoJsonEnvelope` / `MalformedJson` with raw_preview capped at 500 chars (mirrors TS `slice(0, 500)`). 7 edge-case tests.
  • validator.rs — Structural validation (required fields, kebab-case uniqueId, pipeline shape, RAG template messageHistory, strategy enum + arrays, role schema, in-request duplicate check). FS collision check + sentinel-template existence stay TS-side (PR-3 shim). 12 tests.
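The parser's envelope extraction can be sketched without the regex crate: because `/\{[\s\S]*\}/` is greedy, it is equivalent to slicing from the first `{` to the last `}` (a sketch of the described behavior, not the PR's actual implementation):

```rust
// Extract the JSON envelope from an AI response. Prose preamble falls
// before the first '{' and trailing prose after the last '}', so both
// are tolerated; markdown code fences around the JSON fall outside the
// slice for the same reason.
fn extract_json_envelope(response: &str) -> Option<&str> {
    let start = response.find('{')?;
    let end = response.rfind('}')?;
    if end < start {
        return None; // e.g. "}{" — no well-ordered envelope
    }
    Some(&response[start..=end])
}

fn main() {
    let raw = "Here is the recipe:\n{\"uniqueId\": \"demo\"}\nEnjoy!";
    assert_eq!(extract_json_envelope(raw), Some("{\"uniqueId\": \"demo\"}"));
    assert_eq!(extract_json_envelope("no json here"), None);
}
```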

What stays TS-side (PR-3 shim concerns)

  • Filesystem collision check (`existingRecipes.some(r => r.uniqueId === recipe.uniqueId)`) — pure FS state
  • Sentinel-template existence (`TemplateRegistry.has(tmpl)`) — runtime-registry state
  • File save (`fs.writeFileSync(...)`) — JTAG framework concern
  • Recipe cache reload — RecipeLoader concern

Why no fallback

Per #1262 (no-CPU-fallback audit), the TS path silently swallows malformed-JSON failures, returning `{ success: false, error: '...' }`. Rust instead returns a typed `Err` — the PR-2 IPC handler maps it to `validationErrors[]` for the JTAG envelope, preserving diagnostic info.
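The typed-error shape with the capped preview can be sketched as follows (variant names from this PR; the real type lives in parser.rs):

```rust
use std::fmt;

#[derive(Debug, PartialEq)]
enum ParseError {
    NoJsonEnvelope { raw_preview: String },
    MalformedJson { raw_preview: String, message: String },
}

// Mirrors TS `slice(0, 500)`: cap the preview so a huge AI response
// doesn't balloon the error payload. `chars()` keeps the cut on a
// UTF-8 character boundary.
fn preview(raw: &str) -> String {
    raw.chars().take(500).collect()
}

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ParseError::NoJsonEnvelope { raw_preview } => {
                write!(f, "no JSON envelope in response: {raw_preview}")
            }
            ParseError::MalformedJson { raw_preview, message } => {
                write!(f, "malformed JSON ({message}): {raw_preview}")
            }
        }
    }
}

fn main() {
    let big = "x".repeat(2000);
    let err = ParseError::NoJsonEnvelope { raw_preview: preview(&big) };
    if let ParseError::NoJsonEnvelope { raw_preview } = &err {
        assert_eq!(raw_preview.len(), 500); // capped, not 2000
    }
    println!("{err}");
}
```

Returning this as `Err` (rather than a stringly `{ success: false }`) is what lets PR-2 map each failure mode into `validationErrors[]` without losing which variant occurred.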

Next

  • PR-2: `cognition/generate-recipe` IPC command in modules/cognition.rs wiring `AIProviderRegistry::generate_text` + the prompt+parser+validator (Anthropic default, 0.4 temperature, 4000 max_tokens — matches TS).
  • PR-3: `RecipeGenerateServerCommand.ts` becomes thin shim that gathers templates + existing recipe IDs, calls Rust, FS-collision-checks + saves on success path.

Test plan

  • cargo test cognition::generate_recipe — 40/40 pass
  • cargo check — clean
  • ts-rs auto-emits all 5 types to shared/generated/cognition/
  • precommit: TS compile + browser-ping + chat-roundtrip PASSED
  • pre-push: TS + ESLint baseline + Rust compile + Rust tests PASSED
  • Reviewer: confirm carrier-types shape (templates + existing IDs as request fields) aligns with the architecture for the next batch of registry-dependent oxidizers

🤖 Generated with Claude Code

…e in Rust

First slice of RecipeGenerateServerCommand.ts (371 LOC) → Rust per the
oxidization mission (#1248 umbrella). Same shape as #1289 (rate_proposals):
pure-functions slice first, IPC handler in PR-2, TS shim collapse in PR-3.

Per the carrier-types design block on #1295: the runtime registry state
that the TS prompt depends on (TemplateRegistry.list output, existing
recipe IDs from RecipeLoader.getInstance().getAllRecipes()) crosses the
IPC boundary as explicit RecipeGenerationRequest fields. Keeps the
prompt builder + validator pure, testable, and parity-checkable.

What's in this PR (4 modules, 40 tests):

- types.rs (5 ts-rs exports)
  - RecipeTemplateInfo, RecipeGenerateHints, RecipeGenerationRequest,
    RecipeGenerationResponse, RecipeDefinitionShape
  - All camelCase serde + ts-rs auto-export to shared/generated/cognition/
  - 5 round-trip / shape-acceptance tests

- prompt.rs (build_recipe_system_prompt + build_recipe_user_prompt)
  - System prompt mirrors TS buildSystemPrompt byte-for-byte (schema
    block, available-templates list, standard-pipeline pattern, rules)
  - User prompt mirrors TS buildUserPrompt (description + optional hints
    rendered as bulleted "Hints:" block)
  - 8 tests covering anchors, template rendering with 0/N entries, all
    hint types, partial hints, empty-hints skip-block

- parser.rs (parse_recipe_from_ai_response → RecipeDefinitionShape)
  - Same regex anchor as TS: /\{[\s\S]*\}/ extracts JSON envelope
  - Tolerates prose preamble + markdown fences (matches TS behavior)
  - Typed ParseError::NoJsonEnvelope / MalformedJson with raw_preview
    capped at 500 chars (mirrors TS slice(0, 500))
  - 7 tests covering happy-path + prose preamble + fence + no-JSON +
    malformed + unknown-fields-tolerated + missing-optionals + cap

- validator.rs (validate_recipe_structure → Vec<String>)
  - Mirrors TS validateRecipe checks: required fields, kebab-case
    uniqueId, pipeline shape, RAG template messageHistory, strategy
    enum + required arrays, role type + requires
  - In-request duplicate check via existing_recipe_ids carrier
  - Filesystem collision check + sentinel-template existence stay
    TS-side (PR-3 shim) — they're pure FS / runtime-registry concerns
  - 12 tests covering happy path, every required-field gap, kebab-case
    rejection, empty pipeline, malformed steps, invalid enums, missing
    strategy arrays, role schema, in-request duplicate
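The validator's error-accumulation style on the kebab-case and in-request duplicate checks might look like this (function and field names are assumptions based on this description):

```rust
// kebab-case: non-empty, lowercase ASCII alphanumerics and single
// hyphens, no leading/trailing/doubled hyphen.
fn is_kebab_case(s: &str) -> bool {
    !s.is_empty()
        && !s.starts_with('-')
        && !s.ends_with('-')
        && !s.contains("--")
        && s.chars().all(|c| c.is_ascii_lowercase() || c.is_ascii_digit() || c == '-')
}

fn validate_unique_id(unique_id: &str, existing_recipe_ids: &[String]) -> Vec<String> {
    let mut errors = Vec::new();
    if !is_kebab_case(unique_id) {
        errors.push(format!("uniqueId '{unique_id}' must be kebab-case"));
    }
    // In-request duplicate check via the existing_recipe_ids carrier;
    // the filesystem collision check stays TS-side (PR-3 shim).
    if existing_recipe_ids.iter().any(|id| id == unique_id) {
        errors.push(format!("uniqueId '{unique_id}' already exists"));
    }
    errors
}

fn main() {
    assert!(validate_unique_id("doc-summary", &[]).is_empty());
    assert_eq!(validate_unique_id("DocSummary", &[]).len(), 1);
    assert_eq!(
        validate_unique_id("doc-summary", &["doc-summary".to_string()]).len(),
        1
    );
}
```

Accumulating into `Vec<String>` instead of failing fast is what lets every structural problem surface in one `validationErrors[]` round trip.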

## Why no fallback

Per #1262, the TS path's silent error-on-malformed-JSON returns
{ success: false, error: '...' }. Rust returns typed Err — PR-2 IPC
handler maps it to validationErrors[] for the JTAG envelope.

## Next

- PR-2: cognition/generate-recipe IPC command wiring
  AIProviderRegistry::generate_text + the prompt+parser+validator
- PR-3: RecipeGenerateServerCommand.ts becomes thin shim that gathers
  templates + existing recipe IDs, calls Rust, FS collision-checks +
  saves on success

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
joelteply merged commit c7117de into canary on May 16, 2026
4 checks passed
joelteply deleted the refactor/generate-recipe-rust-1295 branch on May 16, 2026 at 03:29
joelteply pushed a commit that referenced this pull request May 16, 2026
…strator

Wires the prompt+parser+validator shipped in PR-1 (#1298) to
AIProviderRegistry::generate_text via the cognition/generate-recipe IPC
command. Stacked on PR-1 (rebase to canary once PR-1 merges).

Same shape as #1289 PR-2 (rate_proposals IPC). Shares the
AIProviderRegistry singleton with shared_analysis + rate_proposals,
so concurrent generator calls go through the same registry read-lock
— no new contention surface.

What's in this PR:

- cognition/generate_recipe/orchestrator.rs — generate_recipe_with_ai()
  - Builds system + user prompts via PR-1
  - Calls global_registry().read().await + generate_text() with
    Anthropic default + 0.4 temperature + 4000 max_tokens (matches
    TS RecipeGenerateServerCommand defaults exactly)
  - default_model_for_provider() mirrors TS switch lines 360-369
  - Parses with PR-1 parser; on parse failure returns Err with the
    typed ParseError as string
  - Applies unique_id_override AFTER parse, BEFORE validation
    (matches TS sequence at lines 80-82 / 85)
  - Runs PR-1 validator with carrier existing_recipe_ids
  - Returns { recipe, validationErrors }

- modules/cognition.rs — new "cognition/generate-recipe" command branch
  parsing { request, provider?, model?, temperature? } and delegating
  to the orchestrator

- 4 new orchestrator tests covering default-model parity, pinned
  generation constants, unique_id_override semantics

44/44 cognition::generate_recipe tests pass (was 40 in PR-1, +4 new).

## Why no fallback

Per #1262, the TS path returned { success: false, error: '...' } on AI
failure, masking provider outages. This Rust path returns typed Err on
inference failure — the JTAG shim in PR-3 maps it to a validationErrors[]
entry, preserving the failure mode for debugging.

## Validation errors NOT propagated as Err

Validation failures are returned in the response (not Err) so the shim
can render them via the JTAG envelope. Mirrors TS behavior exactly:
validationErrors go alongside the recipe; success: false reflects the
validation gate, not a parse failure.
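The parse → override → validate sequence and the errors-in-response design can be sketched together (all names are assumptions based on this commit message; stub bodies stand in for the PR-1 modules):

```rust
#[derive(Debug)]
struct RecipeDefinitionShape {
    unique_id: String,
}

// Stub for the PR-1 parser: hard parse failures become Err.
fn parse_recipe_from_ai_response(raw: &str) -> Result<RecipeDefinitionShape, String> {
    raw.find('{').ok_or("no JSON envelope")?;
    Ok(RecipeDefinitionShape { unique_id: "generated-id".into() })
}

// Stub for the PR-1 validator: only the in-request duplicate check.
fn validate_recipe_structure(
    recipe: &RecipeDefinitionShape,
    existing_recipe_ids: &[String],
) -> Vec<String> {
    if existing_recipe_ids.iter().any(|id| id == &recipe.unique_id) {
        vec![format!("uniqueId '{}' already exists", recipe.unique_id)]
    } else {
        Vec::new()
    }
}

fn generate_recipe_with_ai(
    raw_ai_response: &str,
    unique_id_override: Option<&str>,
    existing_recipe_ids: &[String],
) -> Result<(RecipeDefinitionShape, Vec<String>), String> {
    let mut recipe = parse_recipe_from_ai_response(raw_ai_response)?;
    // Override applied AFTER parse...
    if let Some(id) = unique_id_override {
        recipe.unique_id = id.to_string();
    }
    // ...and BEFORE validation, so the validator sees the final ID.
    let validation_errors = validate_recipe_structure(&recipe, existing_recipe_ids);
    // Validation failures travel in the response, not as Err.
    Ok((recipe, validation_errors))
}

fn main() {
    let (recipe, errs) =
        generate_recipe_with_ai("{}", Some("my-recipe"), &["my-recipe".into()]).unwrap();
    assert_eq!(recipe.unique_id, "my-recipe");
    assert_eq!(errs.len(), 1); // duplicate caught against the overridden ID
}
```

If the override were applied after validation instead, a colliding override would slip past the duplicate check — which is why the ordering is pinned by tests.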

## Next: PR-3

RecipeGenerateServerCommand.ts (371 LOC) becomes thin shim that:
- Gathers TemplateRegistry.list() + RecipeLoader.getInstance()
  .getAllRecipes().map(r => r.uniqueId) into RecipeGenerationRequest
- Calls Commands.execute('cognition/generate-recipe', { request, ... })
- On success path: FS collision check + sentinel-template existence
  check + saveRecipe + RecipeLoader.clearCache + reload

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
joelteply added a commit that referenced this pull request May 16, 2026
…strator (#1301)

Wires the prompt+parser+validator shipped in PR-1 (#1298) to
AIProviderRegistry::generate_text via the cognition/generate-recipe IPC
command. Stacked on PR-1 (rebase to canary once PR-1 merges).

Same shape as #1289 PR-2 (rate_proposals IPC). Shares the
AIProviderRegistry singleton with shared_analysis + rate_proposals,
so concurrent generator calls go through the same registry read-lock
— no new contention surface.

What's in this PR:

- cognition/generate_recipe/orchestrator.rs — generate_recipe_with_ai()
  - Builds system + user prompts via PR-1
  - Calls global_registry().read().await + generate_text() with
    Anthropic default + 0.4 temperature + 4000 max_tokens (matches
    TS RecipeGenerateServerCommand defaults exactly)
  - default_model_for_provider() mirrors TS switch lines 360-369
  - Parses with PR-1 parser; on parse failure returns Err with the
    typed ParseError as string
  - Applies unique_id_override AFTER parse, BEFORE validation
    (matches TS sequence at lines 80-82 / 85)
  - Runs PR-1 validator with carrier existing_recipe_ids
  - Returns { recipe, validationErrors }

- modules/cognition.rs — new "cognition/generate-recipe" command branch
  parsing { request, provider?, model?, temperature? } and delegating
  to the orchestrator

- 4 new orchestrator tests covering default-model parity, pinned
  generation constants, unique_id_override semantics

44/44 cognition::generate_recipe tests pass (was 40 in PR-1, +4 new).

## Why no fallback

Per #1262, the TS path returned { success: false, error: '...' } on AI
failure, masking provider outages. This Rust path returns typed Err on
inference failure — the JTAG shim in PR-3 maps it to a validationErrors[]
entry, preserving the failure mode for debugging.

## Validation errors NOT propagated as Err

Validation failures are returned in the response (not Err) so the shim
can render them via the JTAG envelope. Mirrors TS behavior exactly:
validationErrors go alongside the recipe; success: false reflects the
validation gate, not a parse failure.

## Next: PR-3

RecipeGenerateServerCommand.ts (371 LOC) becomes thin shim that:
- Gathers TemplateRegistry.list() + RecipeLoader.getInstance()
  .getAllRecipes().map(r => r.uniqueId) into RecipeGenerationRequest
- Calls Commands.execute('cognition/generate-recipe', { request, ... })
- On success path: FS collision check + sentinel-template existence
  check + saveRecipe + RecipeLoader.clearCache + reload

Co-authored-by: Test <test@test.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>