
feat: implement conditional model tag display logic #241

Merged

AnthonyRonning merged 1 commit into master from claude/issue-240-20250917-1642 on Sep 17, 2025

Conversation

AnthonyRonning (Contributor) commented Sep 17, 2025

Implements conditional model tag display logic as requested in issue #240.

Changes

  • Gemma shows "starter" tag for starter users, "pro" for others
  • Llama models show no tags
  • Other models show "pro" tag
  • Uses existing billing status from model selector context
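For reference, a minimal sketch of this logic in TypeScript. The names getModelBadges, MODEL_CONFIG, and billingStatus follow the PR summary, but their shapes (including plan_name) are assumptions rather than the actual ModelSelector.tsx code, billingStatus is passed as a parameter only to keep the sketch self-contained (the component reads it from the model selector context), and the model IDs in the usage lines are illustrative:

// Sketch only, not the code from this PR.
type ModelConfig = { badges?: string[] };
type BillingStatus = { plan_name?: string } | null;

const MODEL_CONFIG: Record<string, ModelConfig> = {
  // ...existing model entries elided
};

const getModelBadges = (modelId: string, billingStatus: BillingStatus): string[] => {
  const config = MODEL_CONFIG[modelId];
  const id = modelId.toLowerCase();

  // Gemma: "Starter" on starter plans, "Pro" for everyone else
  if (id.includes("gemma")) {
    const plan = billingStatus?.plan_name?.toLowerCase() ?? "";
    return plan.includes("starter") ? ["Starter"] : ["Pro"];
  }

  // Llama: no badges at all
  if (id.includes("llama")) {
    return [];
  }

  // Other models: keep any configured badges, otherwise default to "Pro"
  return config?.badges ?? ["Pro"];
};

// Illustrative usage
console.log(getModelBadges("gemma-3-27b", { plan_name: "starter" })); // ["Starter"]
console.log(getModelBadges("gemma-3-27b", { plan_name: "pro" }));     // ["Pro"]
console.log(getModelBadges("llama-3.3-70b", { plan_name: "pro" }));   // []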

Closes #240

Generated with Claude Code

Summary by CodeRabbit

  • New Features
    • Model badges are now dynamic based on your billing plan.
    • Gemma models show “Starter” on starter plans and “Pro” otherwise.
    • Llama models display no badges.
    • Other models retain their existing badges unless overridden by the new logic.
    • Unknown models continue to show a “Coming Soon” badge.
    • Existing display, lock, and vision indicators are unchanged.

- Make Gemma show "starter" tag for starter users, "pro" tag for others
- Remove tags from Llama models (no badges)
- Keep other models showing "pro" tag
- Add getModelBadges() function for dynamic badge determination
- Use existing billingStatus from model selector context

Fixes #240

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Anthony <AnthonyRonning@users.noreply.github.com>
coderabbitai bot commented Sep 17, 2025

Walkthrough

Implements dynamic badge derivation in ModelSelector.tsx by introducing getModelBadges(modelId). Gemma’s badge now depends on billing status (Starter vs. Pro), Llama shows no badges, and other models use configured badges or default to Pro. getDisplayName now renders badges from this function; unknown models still show Coming Soon.

Changes

Cohort / File(s): Model selector logic — frontend/src/components/ModelSelector.tsx

Summary: Added getModelBadges(modelId) to compute badges from billing status. Updated getDisplayName to use the derived badges. Removed the static Gemma badge from MODEL_CONFIG. Logic: Gemma → Starter if the plan includes "starter", else Pro; Llama → no badges; other models → existing config badges or default Pro; unknown → Coming Soon.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor U as User
  participant MS as ModelSelector
  participant BS as Billing State
  participant CFG as MODEL_CONFIG

  U->>MS: Open model menu / render list
  MS->>BS: Read current plan (starter/non-starter)
  MS->>MS: getModelBadges(modelId)
  alt modelId is Gemma
    MS->>MS: if plan includes "starter" => ["Starter"] else ["Pro"]
  else modelId is Llama
    MS->>MS: []
  else known model
    MS->>CFG: Read config.badges
    MS->>MS: Use config.badges or ["Pro"]
  else unknown model
    MS->>MS: ["Coming Soon"]
  end
  MS-->>U: Render names with derived badges

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes


Poem

A rabbit taps the badge display,
Flips “Starter” only when bills say,
For Gemma’s glow, the rules are clear—
Pro for most, none for Llama here.
With whiskered nods and tidy code,
The selector hops in dynamic mode. 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
  • Linked Issues Check — ❓ Inconclusive: The implementation covers the core requirements from issue #240 (Gemma's badge is derived from billing status, Llama models get no badge, and the model selector's billing status appears to be used), but the handling of "all other models" is ambiguous: the code reportedly uses existing config.badges or defaults to ["Pro"], which may not guarantee every non-Gemma, non-Llama model shows "Pro" as required. Resolution: confirm that getModelBadges explicitly returns ["Pro"] for all non-Gemma, non-Llama models, or update it to do so, and add a brief test or comment documenting the expectation so compliance with #240 can be verified.
✅ Passed checks (3 passed)

  • Description Check — ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The PR title "feat: implement conditional model tag display logic" is concise, clear, and accurately summarizes the primary change (conditional badge/tag display behavior) without noise or irrelevant detail.
  • Out of Scope Changes Check — ✅ Passed: Changes are limited to the ModelSelector badge logic and the Gemma MODEL_CONFIG entry, directly relate to the linked issue's objectives, and touch no other files or unrelated functionality, so there are no out-of-scope changes.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bc9fa42 and 301f7e6.

📒 Files selected for processing (1)
  • frontend/src/components/ModelSelector.tsx (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx,js,jsx}: Use 2-space indentation, double quotes, and a 100-character line limit for formatting
Use camelCase for variable and function names
Use try/catch with specific error types for error handling

Files:

  • frontend/src/components/ModelSelector.tsx
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use strict TypeScript typing and avoid any when possible

Files:

  • frontend/src/components/ModelSelector.tsx
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: build-ios
  • GitHub Check: build-macos (universal-apple-darwin)
  • GitHub Check: build-linux
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (2)
frontend/src/components/ModelSelector.tsx (2)

251-253: LGTM: badges now derived dynamically at render-time.

This cleanly centralizes badge logic without extra API calls.


224-243: Make Llama detection case-insensitive; simplify check.

Matches "Meta-Llama" and "llama-3.3-70b" found in the repo.

-  // Llama models: no badges
-  if (modelId.includes("llama") || modelId.includes("Llama")) {
+  // Llama models: no badges (case-insensitive)
+  if (modelId.toLowerCase().includes("llama")) {
     return [];
   }

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).

Please share your feedback with us on this Discord post.


Comment @coderabbitai help to get the list of available commands and usage tips.

greptile-apps bot (Contributor) left a comment

Greptile Summary

This PR implements dynamic model tag display logic for the ModelSelector component to address issue #240 regarding deprecated starter plan badges. The changes replace hardcoded model badges with a context-aware system that displays appropriate tags based on the user's billing status.

The implementation removes the hardcoded 'Starter' badge from the Gemma 3 27B model configuration and introduces a new getModelBadges() function that dynamically determines which badges to display. For Gemma models, it shows "Starter" for users on starter plans and "Pro" for all others. Llama models display no badges at all, while other models either use their existing configured badges or default to "Pro".

The solution leverages the existing billingStatus context from the ModelSelector component, avoiding additional API calls and maintaining performance. The badge logic is integrated into the existing getDisplayName() function, which handles the rendering of model names and their associated tags in the UI dropdown.
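As a rough illustration of that integration (a sketch only: the markup, class name, and displayName field below are assumptions, and getModelBadges is declared here with the single-argument signature the PR describes rather than implemented):

import React from "react";

// Sketch of how getDisplayName might attach the derived badges to a model's name.
// The real component's markup and styling will differ.
declare const MODEL_CONFIG: Record<string, { displayName?: string; badges?: string[] } | undefined>;
declare function getModelBadges(modelId: string): string[];

const getDisplayName = (modelId: string): React.ReactNode => {
  const config = MODEL_CONFIG[modelId];

  // Unknown models keep their "Coming Soon" badge.
  const badges = config ? getModelBadges(modelId) : ["Coming Soon"];

  return (
    <>
      {config?.displayName ?? modelId}
      {badges.map((badge) => (
        <span key={badge} className="model-badge">
          {badge}
        </span>
      ))}
    </>
  );
};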

This change fits well with the codebase's existing pattern of using billing status context throughout the application (similar to how it's used in pricing components) and maintains the component's existing structure while making the badge system more user-appropriate and less confusing.

Confidence score: 4/5

  • This PR is safe to merge with minimal risk as it only affects UI display logic without changing core functionality
  • Score reflects well-structured conditional logic and proper use of existing context, though lacks explicit error handling for edge cases
  • Pay close attention to the ModelSelector.tsx file to ensure the billing status context is always available when the component renders

1 file reviewed, 2 comments


Comment on lines +235 to +237
  // Llama models: no badges
  if (modelId.includes("llama") || modelId.includes("Llama")) {
    return [];

style: Case sensitivity check for Llama models could miss edge cases. Consider using a single case-insensitive approach: modelId.toLowerCase().includes('llama')

Suggested change
-  // Llama models: no badges
-  if (modelId.includes("llama") || modelId.includes("Llama")) {
-    return [];
+  // Llama models: no badges
+  if (modelId.toLowerCase().includes("llama")) {
+    return [];

  }

  // Other models: use their existing badges or default to ["Pro"]
  return config?.badges || ["Pro"];

logic: Models without explicit badges now default to ['Pro'] which may not be accurate for all models. Consider checking model access requirements before defaulting.
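For illustration, one possible shape of that suggestion, assuming a hypothetical requiresPro flag on the model config (no such field exists in the current code):

  // Hypothetical alternative: only fall back to a "Pro" badge for models explicitly
  // marked as Pro-gated, instead of defaulting every unbadged model to "Pro".
  return config?.badges ?? (config?.requiresPro ? ["Pro"] : []);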

AnthonyRonning merged commit b03f381 into master on Sep 17, 2025
9 checks passed
AnthonyRonning deleted the claude/issue-240-20250917-1642 branch on September 17, 2025 at 17:12


Development

Successfully merging this pull request may close these issues.

Only show starter tag for Gemma
