feat: add --model and --variant flags to create-prd command #388
Conversation
- Propagates `--model` to the agent via execute flags, matching `run` behavior
- Propagates `--variant` to agent initialize options
- Validates the model before the chat session starts
- Updates docs and CLI help text
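As a rough sketch of how the flag propagation described above might be built (the helper name and signature here are hypothetical; the PR's actual code may differ):

```typescript
// Hypothetical sketch of --model flag propagation; not the PR's real code.
// The value is passed through verbatim, so simple names ("opus"),
// provider/model pairs ("anthropic/claude-3-5-sonnet"), and OpenRouter
// triples all work without any special parsing.
function buildAgentFlags(model?: string): string[] {
  const flags: string[] = [];
  if (model) {
    flags.push('--model', model);
  }
  return flags;
}
```

The chat engine would then append these flags to the agent's execute options at send time, so an unset model adds nothing to the command line.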
📝 Walkthrough: This PR adds a model selection feature.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 5 checks passed
Codecov Report
❌ Patch coverage is
Additional details and impacted files

```
@@           Coverage Diff           @@
##             main     #388   +/-   ##
==========================================
+ Coverage   48.19%   48.27%   +0.07%
==========================================
  Files         117      117
  Lines       38496    38511      +15
==========================================
+ Hits        18554    18590      +36
+ Misses      19942    19921      -21
```
🧹 Nitpick comments (3)
src/commands/create-prd.tsx (1)
155-161: ⚡ Quick win
Consider clarifying error context when model validation fails.

When model validation fails, the specific error is printed at line 158, but returning `null` causes the generic "Chat mode requires an AI agent" message at lines 200-204 to also appear. While both messages are shown, the second message is slightly misleading. Consider either:
- Adding a distinct return code to differentiate "agent missing" from "model invalid"
- Exiting immediately after printing the model error

Example:

```diff
 if (options.model) {
   const modelError = agent.validateModel(options.model);
   if (modelError) {
     console.error(`Invalid --model value: ${modelError}`);
-    return null;
+    console.error('');
+    console.error('Run "ralph-tui plugins agents" to see supported models.');
+    process.exit(1);
   }
 }
```

This is a minor UX issue since the specific error is still visible, but improving the error flow would enhance clarity.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/commands/create-prd.tsx` around lines 155-161: the model validation branch using `options.model` and `agent.validateModel` currently prints the model error and returns null, which then triggers the generic "Chat mode requires an AI agent" flow; instead, after logging the specific model error, exit immediately with a non-zero status (e.g., `process.exit(1)`) so execution stops and the misleading generic message does not appear. Update the block around `options.model` / `agent.validateModel` to terminate the process after printing the error rather than returning null.

src/chat/engine.test.ts (2)
329-351: 💤 Low value
LGTM! Task engine model propagation verified.

The test suite confirms that `createTaskChatEngine` propagates the `--model` flag consistently with `createPrdChatEngine`. The structure mirrors the PRD tests appropriately.

♻️ Optional: More explicit assertion pattern
For line 349, same suggestion as above:
```diff
-const flags = getCapturedOptions()?.flags;
-expect(flags === undefined || !flags.includes('--model')).toBe(true);
+const flags = getCapturedOptions()?.flags;
+if (flags !== undefined) {
+  expect(flags).not.toContain('--model');
+}
```

This separates the undefined check from the array membership check for improved clarity.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/chat/engine.test.ts` around lines 329 - 351, Replace the ambiguous assertion in the second test with a clearer check: change the line that currently does expect(flags === undefined || !flags.includes('--model')).toBe(true) to either two explicit assertions (expect(flags).toBeUndefined() or expect(flags).not.toContain('--model')) or a single clearer check like expect(flags || []).not.toContain('--model'); update the test in the createTaskChatEngine suite (references: createTaskChatEngine, getCapturedOptions, flags) so it explicitly verifies absence of the '--model' flag.
291-327: 💤 Low value
LGTM! Comprehensive model propagation coverage.

The test suite thoroughly verifies that `createPrdChatEngine` correctly forwards the `--model` flag in various formats (simple names, provider/model, OpenRouter triple) and omits it when not provided. The assertions are correct and well-commented.

♻️ Optional: More explicit assertion pattern
For lines 323-325, consider a slightly more explicit pattern:
```diff
-const flags = getCapturedOptions()?.flags;
-// Either undefined or an array without --model is acceptable
-expect(flags === undefined || !flags.includes('--model')).toBe(true);
+const flags = getCapturedOptions()?.flags;
+// Either undefined or an array without --model is acceptable
+if (flags !== undefined) {
+  expect(flags).not.toContain('--model');
+}
```

The current approach is correct and uses short-circuit evaluation properly, but separating the undefined check can improve readability.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/chat/engine.test.ts` around lines 291 - 327, Update the last test in the describe block that uses createPrdChatEngine/getCapturedOptions/sendMessage to make the assertion more explicit: instead of the short-circuit expect(flags === undefined || !flags.includes('--model')).toBe(true), split into two explicit assertions such as first asserting flags is undefined (expect(flags).toBeUndefined()) or, if defined, asserting it does not include '--model' (expect(flags).not.toContain('--model')); keep using the same getCapturedOptions().flags variable and sendMessage call so test behavior is unchanged.
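As a standalone illustration of the two assertion styles discussed in the comments above (plain TypeScript, no test framework; the function names are invented for this sketch):

```typescript
// Illustration only: compares the review's two assertion styles.
// The short-circuit form folds "no flags captured" and "flags lack
// --model" into one boolean; the explicit form keeps the two cases
// apart, which is what the reviewer suggests for readability.
function noModelShortCircuit(flags?: string[]): boolean {
  return flags === undefined || !flags.includes('--model');
}

function noModelExplicit(flags?: string[]): boolean {
  if (flags === undefined) return true; // case 1: nothing was captured
  return !flags.includes('--model');    // case 2: flags omit --model
}

const samples: (string[] | undefined)[] = [
  undefined,
  [],
  ['--verbose'],
  ['--model', 'opus'],
];

// The two forms agree on every input; only readability differs.
for (const s of samples) {
  if (noModelShortCircuit(s) !== noModelExplicit(s)) {
    throw new Error('assertion styles disagree');
  }
}
```

Since both forms are logically equivalent, the choice is purely about making test failures easier to interpret.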
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 1d64cde5-75be-4f2e-874a-aedae5217071
📒 Files selected for processing (9)
- src/chat/engine.test.ts
- src/chat/engine.ts
- src/cli.tsx
- src/commands/create-prd-utils.ts
- src/commands/create-prd.test.tsx
- src/commands/create-prd.tsx
- src/tui/components/PrdChatApp.tsx
- tests/commands/create-prd.test.ts
- website/content/docs/cli/create-prd.mdx
Summary
- Adds `--model` and `--variant` CLI flags to the `create-prd`/`prime` command, mirroring the existing flags on `ralph-tui run`
- `--model` is forwarded to the agent at execute time via `agentOptions.flags`, supporting simple names (`opus`), provider/model format (`anthropic/claude-3-5-sonnet`), and OpenRouter triple format (`moonshotai/kimi-k2.6:siliconflow/fp8`)
- `--variant` is injected into agent plugin options during `initialize()` so agents like Gemini can apply reasoning-effort levels (`minimal`, `high`, `max`)
- `createPrdChatEngine` and `createTaskChatEngine` both accept a `model` option and build the `--model` flag internally via a shared `buildAgentFlags` helper
- `PrdChatApp` accepts and threads through the `model` prop; dependency array updated accordingly
- Documentation (`create-prd.mdx`) updated with new flags and usage examples

Testing
- New tests in `src/chat/engine.test.ts` covering `createPrdChatEngine` and `createTaskChatEngine` model propagation (with model, without model, provider/model format, OpenRouter format)
- Tests in `src/commands/create-prd.test.tsx` and `tests/commands/create-prd.test.ts` covering `parseCreatePrdArgs` for `--model` and `--variant` individually and in combination with `--agent`
- Verified that `--model` and `--variant` appear in the help string
- Manually tested `ralph-tui create-prd --agent claude --model opus`

Summary by CodeRabbit
New Features
- `--model` flag to override the AI agent model for PRD creation.
- `--variant` flag to select model variant/reasoning effort.

Documentation
- Updated documentation for the `--model` and `--variant` options.

Tests