refactor(cognition,#1295): generate_recipe PR-1 — Rust types+prompt+parser+validator slice (#1298)
Merged
Conversation
…e in Rust

First slice of RecipeGenerateServerCommand.ts (371 LOC) → Rust per the oxidization mission (#1248 umbrella). Same shape as #1289 (rate_proposals): pure-functions slice first, IPC handler in PR-2, TS shim collapse in PR-3.

Per the carrier-types design block on #1295: the runtime registry state that the TS prompt depends on (TemplateRegistry.list output, existing recipe IDs from RecipeLoader.getInstance().getAllRecipes()) crosses the IPC boundary as explicit RecipeGenerationRequest fields. This keeps the prompt builder + validator pure, testable, and parity-checkable.

What's in this PR (4 modules, 40 tests):

- types.rs (5 ts-rs exports)
  - RecipeTemplateInfo, RecipeGenerateHints, RecipeGenerationRequest, RecipeGenerationResponse, RecipeDefinitionShape
  - All camelCase serde + ts-rs auto-export to shared/generated/cognition/
  - 5 round-trip / shape-acceptance tests
- prompt.rs (build_recipe_system_prompt + build_recipe_user_prompt)
  - System prompt mirrors TS buildSystemPrompt byte-for-byte (schema block, available-templates list, standard-pipeline pattern, rules)
  - User prompt mirrors TS buildUserPrompt (description + optional hints rendered as a bulleted "Hints:" block)
  - 8 tests covering anchors, template rendering with 0/N entries, all hint types, partial hints, empty-hints skip-block
- parser.rs (parse_recipe_from_ai_response → RecipeDefinitionShape)
  - Same regex anchor as TS: /\{[\s\S]*\}/ extracts the JSON envelope
  - Tolerates prose preamble + markdown fences (matches TS behavior)
  - Typed ParseError::NoJsonEnvelope / MalformedJson with raw_preview capped at 500 chars (mirrors TS slice(0, 500))
  - 7 tests covering happy path + prose preamble + fence + no-JSON + malformed + unknown-fields-tolerated + missing-optionals + cap
- validator.rs (validate_recipe_structure → Vec<String>)
  - Mirrors TS validateRecipe checks: required fields, kebab-case uniqueId, pipeline shape, RAG template messageHistory, strategy enum + required arrays, role type + requires
  - In-request duplicate check via the existing_recipe_ids carrier
  - Filesystem collision check + sentinel-template existence stay TS-side (PR-3 shim) — they're pure FS / runtime-registry concerns
  - 12 tests covering happy path, every required-field gap, kebab-case rejection, empty pipeline, malformed steps, invalid enums, missing strategy arrays, role schema, in-request duplicates

## Why no fallback

Per #1262, the TS path's silent error-on-malformed-JSON returns { success: false, error: '...' }. Rust returns a typed Err — the PR-2 IPC handler maps it to validationErrors[] for the JTAG envelope.

## Next

- PR-2: cognition/generate-recipe IPC command wiring AIProviderRegistry::generate_text + the prompt+parser+validator
- PR-3: RecipeGenerateServerCommand.ts becomes a thin shim that gathers templates + existing recipe IDs, calls Rust, FS collision-checks + saves on success

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
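The parser's extraction step can be sketched without a regex crate: the TS anchor /\{[\s\S]*\}/ is greedy, so it spans from the first `{` to the last `}`, which `find`/`rfind` reproduces exactly. A minimal dependency-free sketch (the shipped parser.rs goes on to deserialize the envelope into RecipeDefinitionShape; the `extract_json_envelope` helper name is illustrative, not from the PR):

```rust
#[derive(Debug, PartialEq)]
pub enum ParseError {
    NoJsonEnvelope { raw_preview: String },
    MalformedJson { raw_preview: String },
}

/// Cap the echoed raw text at 500 chars, mirroring the TS slice(0, 500).
fn preview(raw: &str) -> String {
    raw.chars().take(500).collect()
}

/// Extract the JSON envelope from an AI response that may wrap the object
/// in prose preamble or markdown fences. Greedy first-'{' .. last-'}' span,
/// matching the TS /\{[\s\S]*\}/ anchor.
pub fn extract_json_envelope(raw: &str) -> Result<&str, ParseError> {
    let start = raw.find('{').ok_or_else(|| ParseError::NoJsonEnvelope {
        raw_preview: preview(raw),
    })?;
    let end = raw
        .rfind('}')
        .filter(|&end| end > start)
        .ok_or_else(|| ParseError::NoJsonEnvelope {
            raw_preview: preview(raw),
        })?;
    Ok(&raw[start..=end])
}
```

The greedy span is what makes fence tolerance free: a ```json fence before and after the object falls outside the first-`{`/last-`}` window.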
This was referenced May 16, 2026
joelteply pushed a commit that referenced this pull request on May 16, 2026
…strator

Wires the prompt+parser+validator shipped in PR-1 (#1298) to AIProviderRegistry::generate_text via the cognition/generate-recipe IPC command. Stacked on PR-1 (rebase to canary once PR-1 merges). Same shape as #1289 PR-2 (rate_proposals IPC). Shares the AIProviderRegistry singleton with shared_analysis + rate_proposals, so concurrent generator calls go through the same registry read-lock — no new contention surface.

What's in this PR:

- cognition/generate_recipe/orchestrator.rs — generate_recipe_with_ai()
  - Builds system + user prompts via PR-1
  - Calls global_registry().read().await + generate_text() with Anthropic default + 0.4 temperature + 4000 max_tokens (matches TS RecipeGenerateServerCommand defaults exactly)
  - default_model_for_provider() mirrors TS switch lines 360-369
  - Parses with the PR-1 parser; on parse failure returns Err with the typed ParseError as a string
  - Applies unique_id_override AFTER parse, BEFORE validation (matches TS sequence at lines 80-82 / 85)
  - Runs the PR-1 validator with the existing_recipe_ids carrier
  - Returns { recipe, validationErrors }
- modules/cognition.rs — new "cognition/generate-recipe" command branch parsing { request, provider?, model?, temperature? } and delegating to the orchestrator
- 4 new orchestrator tests covering default-model parity, pinned generation constants, and unique_id_override semantics

44/44 cognition::generate_recipe tests pass (was 40 in PR-1, +4 new).

## Why no fallback

Per #1262, the TS path returned { success: false, error: '...' } on AI failure, masking provider outages. This Rust path returns a typed Err on inference failure — the JTAG shim in PR-3 maps it to a validationErrors[] entry, preserving the failure mode for debugging.

## Validation errors NOT propagated as Err

Validation failures are returned in the response (not Err) so the shim can render them via the JTAG envelope. Mirrors TS behavior exactly: validationErrors go alongside the recipe; success: false reflects the validation gate, not a parse failure.

## Next: PR-3

RecipeGenerateServerCommand.ts (371 LOC) becomes a thin shim that:

- Gathers TemplateRegistry.list() + RecipeLoader.getInstance().getAllRecipes().map(r => r.uniqueId) into RecipeGenerationRequest
- Calls Commands.execute('cognition/generate-recipe', { request, ... })
- On the success path: FS collision check + sentinel-template existence check + saveRecipe + RecipeLoader.clearCache + reload

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
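The override-then-validate ordering described in the commit message (unique_id_override applied after parse, before validation, so the validator and its in-request duplicate check see the final uniqueId) can be condensed into a sketch. The stub RecipeDefinitionShape and the single duplicate check are simplified stand-ins for the PR-1 types; `finalize` is a hypothetical name for the relevant tail of generate_recipe_with_ai():

```rust
#[derive(Debug, Clone)]
struct RecipeDefinitionShape {
    unique_id: String, // stand-in: the real shape carries the full recipe
}

/// Simplified validator: only the in-request duplicate check via the
/// existing_recipe_ids carrier is shown here.
fn validate_recipe_structure(
    recipe: &RecipeDefinitionShape,
    existing_recipe_ids: &[String],
) -> Vec<String> {
    let mut errors = Vec::new();
    if existing_recipe_ids.iter().any(|id| id == &recipe.unique_id) {
        errors.push(format!("uniqueId '{}' already exists", recipe.unique_id));
    }
    errors
}

/// Override applied AFTER parse, BEFORE validation, matching the TS sequence:
/// the validator runs against the id the caller pinned, not the AI's pick.
fn finalize(
    mut recipe: RecipeDefinitionShape,
    unique_id_override: Option<String>,
    existing_recipe_ids: &[String],
) -> (RecipeDefinitionShape, Vec<String>) {
    if let Some(id) = unique_id_override {
        recipe.unique_id = id;
    }
    let errors = validate_recipe_structure(&recipe, existing_recipe_ids);
    (recipe, errors)
}
```

Running validation after the override is what makes the duplicate check meaningful: a colliding override surfaces as a validation error rather than slipping through against the AI-generated id.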
joelteply added a commit that referenced this pull request on May 16, 2026
…strator (#1301)

Co-authored-by: Test <test@test.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
Carrier-types choice
Unlike #1289's prompt (pure data → pure prompt), RecipeGenerateServerCommand's TS prompt depends on runtime registry state: `TemplateRegistry.list()` for the available-templates block + `RecipeLoader.getInstance().getAllRecipes()` for collision-avoidance hints. Carrying this state across the IPC boundary as explicit `RecipeGenerationRequest` fields (rather than as Rust-side global state) keeps the prompt builder + validator pure, testable, and parity-checkable.
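The carrier idea above can be sketched as a struct. Field names beyond existingRecipeIds and uniqueIdOverride (which the PR text names) are assumptions; the shipped types.rs derives Serialize/Deserialize + ts-rs `TS` with `#[serde(rename_all = "camelCase")]` and auto-exports to shared/generated/cognition/, all elided here to keep the sketch dependency-free:

```rust
// Snapshot of one TemplateRegistry.list() entry; exact field set is illustrative.
pub struct RecipeTemplateInfo {
    pub id: String,
    pub description: String,
}

// Everything the prompt builder and validator need crosses the IPC boundary
// explicitly, so the Rust side holds no registry state of its own.
pub struct RecipeGenerationRequest {
    pub description: String,                          // the user's natural-language ask
    pub available_templates: Vec<RecipeTemplateInfo>, // snapshot of TemplateRegistry.list()
    pub existing_recipe_ids: Vec<String>,             // snapshot of loaded recipe uniqueIds
    pub unique_id_override: Option<String>,           // caller-pinned id, if any
}
```

Because both registry snapshots are plain request fields, the same inputs can be replayed against the TS and Rust implementations for parity checks.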
What's in this PR (4 modules)
What stays TS-side (PR-3 shim concerns)
Why no fallback
Per #1262 (no-CPU-fallback audit), the TS path's silent error-on-malformed-JSON returns `{ success: false, error: '...' }`. Rust returns typed `Err` — PR-2 IPC handler maps it to `validationErrors[]` for the JTAG envelope, preserving diagnostic info.
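That mapping can be sketched under assumed shapes (the real PR-2 handler works with RecipeDefinitionShape and the JTAG envelope types; `map_parse_result` and the String stand-ins are hypothetical):

```rust
// Stand-in response: recipe is Option<String> here in place of the real
// RecipeDefinitionShape, validation_errors mirrors the validationErrors[] slot.
pub struct GenerateRecipeResponse {
    pub recipe: Option<String>,
    pub validation_errors: Vec<String>,
}

pub fn map_parse_result(parsed: Result<String, String>) -> GenerateRecipeResponse {
    match parsed {
        // Parse succeeded: recipe travels in the response; validator output
        // (not shown) would populate validation_errors alongside it.
        Ok(recipe) => GenerateRecipeResponse {
            recipe: Some(recipe),
            validation_errors: Vec::new(),
        },
        // Parse failed: the typed error lands in validationErrors[] instead of
        // being swallowed as { success: false, error: '...' }.
        Err(e) => GenerateRecipeResponse {
            recipe: None,
            validation_errors: vec![format!("recipe parse failed: {e}")],
        },
    }
}
```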
Next
Test plan
🤖 Generated with Claude Code