feat: implement conditional model tag display logic #241
AnthonyRonning merged 1 commit into master
Conversation
- Make Gemma show "starter" tag for starter users, "pro" tag for others
- Remove tags from Llama models (no badges)
- Keep other models showing "pro" tag
- Add getModelBadges() function for dynamic badge determination
- Use existing billingStatus from model selector context

Fixes #240

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Anthony <AnthonyRonning@users.noreply.github.com>
Walkthrough

Implements dynamic badge derivation in ModelSelector.tsx by introducing getModelBadges(modelId). Gemma's badge now depends on billing status (Starter vs. Pro), Llama shows no badges, and other models use configured badges or default to Pro. getDisplayName now renders badges from this function; unknown models still show Coming Soon.
Sequence Diagram(s)

sequenceDiagram
autonumber
actor U as User
participant MS as ModelSelector
participant BS as Billing State
participant CFG as MODEL_CONFIG
U->>MS: Open model menu / render list
MS->>BS: Read current plan (starter/non-starter)
MS->>MS: getModelBadges(modelId)
alt modelId is Gemma
MS->>MS: if plan includes "starter" => ["Starter"] else ["Pro"]
else modelId is Llama
MS->>MS: []
else known model
MS->>CFG: Read config.badges
MS->>MS: Use config.badges or ["Pro"]
else unknown model
MS->>MS: ["Coming Soon"]
end
MS-->>U: Render names with derived badges
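The same branching as a minimal TypeScript sketch; the shape of `MODEL_CONFIG` and the form of the `billingStatus` string are assumptions drawn from the walkthrough, not the merged code:

```typescript
// Sketch only: MODEL_CONFIG's shape, the billingStatus string, and the
// "starter" substring check are assumptions, not the exact code in this PR.
type ModelConfig = { badges?: string[] };

declare const MODEL_CONFIG: Record<string, ModelConfig | undefined>;

function getModelBadges(modelId: string, billingStatus?: string): string[] {
  const id = modelId.toLowerCase();
  const config = MODEL_CONFIG[modelId];

  // Gemma: badge follows the current plan.
  if (id.includes("gemma")) {
    return billingStatus?.toLowerCase().includes("starter") ? ["Starter"] : ["Pro"];
  }

  // Llama models: no badges.
  if (id.includes("llama")) {
    return [];
  }

  // Other models: configured badges, defaulting to Pro.
  // (Unknown models are still labeled "Coming Soon" in getDisplayName.)
  return config?.badges ?? ["Pro"];
}
```

Deriving badges at render time like this keeps them in sync with the billing status already held in the selector's context, without an extra API request.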
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Greptile Summary
This PR implements dynamic model tag display logic for the ModelSelector component to address issue #240 regarding deprecated starter plan badges. The changes replace hardcoded model badges with a context-aware system that displays appropriate tags based on the user's billing status.
The implementation removes the hardcoded 'Starter' badge from the Gemma 3 27B model configuration and introduces a new getModelBadges() function that dynamically determines which badges to display. For Gemma models, it shows "Starter" for users on starter plans and "Pro" for all others. Llama models display no badges at all, while other models either use their existing configured badges or default to "Pro".
The solution leverages the existing billingStatus context from the ModelSelector component, avoiding additional API calls and maintaining performance. The badge logic is integrated into the existing getDisplayName() function, which handles the rendering of model names and their associated tags in the UI dropdown.
This change fits well with the codebase's existing pattern of using billing status context throughout the application (similar to how it's used in pricing components) and maintains the component's existing structure while making the badge system more user-appropriate and less confusing.
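A rough sketch of that integration follows; the config fields, badge markup, and class name are assumptions for illustration rather than the component's actual JSX:

```tsx
import React from "react";

// Illustration only: getDisplayName is named in the review, but the markup
// and config fields below are assumptions about ModelSelector.tsx.
type ModelConfig = { displayName?: string; badges?: string[] };

declare const MODEL_CONFIG: Record<string, ModelConfig | undefined>;
declare function getModelBadges(modelId: string, billingStatus?: string): string[];

function getDisplayName(modelId: string, billingStatus?: string): React.ReactNode {
  const config = MODEL_CONFIG[modelId];
  const name = config?.displayName ?? modelId;

  // Known models get badges from getModelBadges(); unknown models keep
  // the "Coming Soon" label.
  const badges = config ? getModelBadges(modelId, billingStatus) : ["Coming Soon"];

  return (
    <>
      {name}
      {badges.map((badge) => (
        <span key={badge} className="badge">
          {badge}
        </span>
      ))}
    </>
  );
}
```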
Confidence score: 4/5
- This PR is safe to merge with minimal risk as it only affects UI display logic without changing core functionality
- Score reflects well-structured conditional logic and proper use of existing context, though lacks explicit error handling for edge cases
- Pay close attention to the ModelSelector.tsx file to ensure the billing status context is always available when the component renders
1 file reviewed, 2 comments
// Llama models: no badges
if (modelId.includes("llama") || modelId.includes("Llama")) {
  return [];
style: Case sensitivity check for Llama models could miss edge cases. Consider using a single case-insensitive approach: modelId.toLowerCase().includes('llama')
Suggested change:

  // Llama models: no badges
- if (modelId.includes("llama") || modelId.includes("Llama")) {
+ if (modelId.toLowerCase().includes("llama")) {
    return [];
}

// Other models: use their existing badges or default to ["Pro"]
return config?.badges || ["Pro"];
logic: Models without explicit badges now default to ['Pro'] which may not be accurate for all models. Consider checking model access requirements before defaulting.
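One way to make the fallback explicit, sketched with a hypothetical `requiredPlan` field that does not exist in the current MODEL_CONFIG:

```typescript
// Hypothetical refinement, not part of this PR: record each model's access
// requirement explicitly instead of assuming "Pro" when badges are missing.
type ModelConfig = {
  badges?: string[];
  requiredPlan?: "starter" | "pro"; // hypothetical field
};

function defaultBadges(config: ModelConfig | undefined): string[] {
  if (config?.badges) return config.badges;
  if (config?.requiredPlan === "starter") return ["Starter"];
  if (config?.requiredPlan === "pro") return ["Pro"];
  // No explicit requirement: show no badge rather than guessing.
  return [];
}
```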
Implements conditional model tag display logic as requested in issue #240.
Changes
Closes #240
Generated with Claude Code