fix(status): show runtime model context limit instead of stale session value #23299

Open
SidQin-cyber wants to merge 1 commit into openclaw:main from SidQin-cyber:fix/status-context-tokens-runtime-model
Conversation

SidQin-cyber (Contributor) commented Feb 22, 2026

Summary

  • Problem: When a session uses a runtime model different from the default (e.g. default is MiniMax M2.5 at 200k, runtime is google/gemini-3-pro-preview at 1M), the status display shows the default model's context limit instead of the runtime model's.
  • Why: In `buildSessionRows`, `entry?.contextTokens` (set at session creation time for the default model) was checked before `lookupContextTokens(model)` (which resolves the actual runtime model's context window). When the model changes at runtime, the stored session value is stale.
  • What changed: Swapped the priority so `lookupContextTokens(resolvedModel)` is checked first, falling back to `entry?.contextTokens` only if the lookup doesn't know the model.
  • What did NOT change: The `defaults` section in the overview still shows the configured default model and its context. Per-session model resolution (`resolveSessionModelRef`) is unchanged.

Change Type (select all)

  • Bug fix
  • Feature
  • Refactor
  • Docs
  • Security hardening
  • Chore/infra

Scope (select all touched areas)

  • Gateway / orchestration
  • Skills / tool execution
  • Auth / tokens
  • Memory / storage
  • Integrations
  • API / contracts
  • UI / DX
  • CI/CD / infra

Linked Issue/PR

  • Closes openclaw#23122

User-visible / Behavior Changes

  • `openclaw status` Sessions table now shows the correct context limit for the runtime model, not the stale value from session creation

Security Impact (required)

  • New permissions/capabilities? `No`
  • Secrets/tokens handling changed? `No`
  • New/changed network calls? `No`
  • Command/tool execution surface changed? `No`
  • Data access scope changed? `No`

Repro + Verification

Steps

  1. Configure a default model (e.g. MiniMax M2.5, 200k context)
  2. Start a session and override the model to one with a different context window (e.g. google/gemini-3-pro-preview, 1M context)
  3. Run `openclaw status`

Expected

  • Session row shows 1M context limit (matching runtime model)

Actual (before fix)

  • Session row shows 200k context limit (from default model, stored at session creation)

Evidence

Before (lines 134-135 in `status.summary.ts`):
```typescript
const contextTokens =
  entry?.contextTokens ?? lookupContextTokens(model) ?? configContextTokens ?? null;
```

After:
```typescript
const contextTokens =
  lookupContextTokens(model) ?? entry?.contextTokens ?? configContextTokens ?? null;
```

`lookupContextTokens(model)` uses the resolved runtime model, giving the correct context window. The stale `entry?.contextTokens` only applies as fallback for unknown/custom models.
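The effect of the swap can be demonstrated in isolation. The sketch below is hypothetical: the model names, context values, and the inline `lookupContextTokens` table stand in for the real registry and are not taken from the codebase.

```typescript
// Hypothetical context-window table standing in for the real model registry.
const KNOWN_CONTEXTS: Record<string, number> = {
  "minimax-m2.5": 200_000,
  "google/gemini-3-pro-preview": 1_000_000,
};

function lookupContextTokens(model: string): number | null {
  return KNOWN_CONTEXTS[model] ?? null;
}

interface SessionEntry {
  contextTokens?: number; // stored at session creation (may be stale)
}

// Before the fix: the stored session value wins, even when stale.
function contextBefore(
  entry: SessionEntry | undefined,
  model: string,
  configContextTokens: number | null,
): number | null {
  return entry?.contextTokens ?? lookupContextTokens(model) ?? configContextTokens ?? null;
}

// After the fix: the runtime model lookup wins; the stored value is only a fallback.
function contextAfter(
  entry: SessionEntry | undefined,
  model: string,
  configContextTokens: number | null,
): number | null {
  return lookupContextTokens(model) ?? entry?.contextTokens ?? configContextTokens ?? null;
}

// Session created under the 200k default, then overridden to the 1M model:
const entry = { contextTokens: 200_000 };
const runtimeModel = "google/gemini-3-pro-preview";
console.log(contextBefore(entry, runtimeModel, null)); // 200000 (stale)
console.log(contextAfter(entry, runtimeModel, null));  // 1000000 (runtime model)
```

Both functions share the same fallback chain; only the first two operands of `??` are swapped, which is why the change is safe for models the lookup does not know.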

Human Verification (required)

  • Verified scenarios: Confirmed `resolveSessionModelRef` correctly resolves runtime model; `lookupContextTokens` returns correct value for known models
  • Edge cases checked: Unknown/custom models still fall back to stored value or config default
  • What I did not verify: Live gateway testing with model override

Compatibility / Migration

  • Backward compatible? `Yes`
  • Config/env changes? `No`
  • Migration needed? `No`

Failure Recovery (if this breaks)

If `lookupContextTokens` returns an unexpected value, the display would show a wrong context limit, but this is display-only with no functional impact. The fallback chain is preserved.
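For the unknown-model path specifically, the preserved chain behaves as sketched below (illustrative names and values, not taken from the PR):

```typescript
// Hypothetical stand-in for the real lookup: only known models resolve.
function lookupContextTokens(model: string): number | null {
  const known: Record<string, number> = {
    "google/gemini-3-pro-preview": 1_000_000,
  };
  return known[model] ?? null;
}

interface SessionEntry {
  contextTokens?: number; // value stored at session creation
}

function resolveContext(
  entry: SessionEntry | undefined,
  model: string,
  configContextTokens: number | null,
): number | null {
  // Fixed priority: runtime lookup first, then stored value, then config.
  return lookupContextTokens(model) ?? entry?.contextTokens ?? configContextTokens ?? null;
}

// Unknown/custom model with a stored value: the stored value applies.
const withStored = resolveContext({ contextTokens: 200_000 }, "my-org/custom-model", 128_000);
// Unknown model, nothing stored: the config default applies.
const configOnly = resolveContext({}, "my-org/custom-model", 128_000);
// Nothing at all: resolves to null rather than throwing.
const nothing = resolveContext(undefined, "my-org/custom-model", null);
console.log(withStored, configOnly, nothing); // 200000 128000 null
```

Because `??` only skips `null`/`undefined`, a model the lookup does not know falls through exactly as before the fix.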

Risks and Mitigations

Low risk — single line priority change in fallback chain, with existing fallbacks preserved.

Greptile Summary

Fixed stale context limit display in openclaw status by prioritizing runtime model lookup over session-stored value.

Key Changes:

  • Swapped fallback priority in `status.summary.ts:135` from `entry?.contextTokens ?? lookupContextTokens(model)` to `lookupContextTokens(model) ?? entry?.contextTokens`
  • Now shows the correct context limit (1M) when the runtime model differs from the default (200k)
  • `resolveSessionModelRef` correctly resolves the runtime model from `entry.model`, so `lookupContextTokens(model)` gets the right value

Issues Found:

  • `src/commands/sessions.ts:265` still uses the old priority order and should be updated for consistency

Confidence Score: 4/5

  • This PR is safe to merge with one minor inconsistency to fix
  • The fix correctly addresses the stale context limit display issue by prioritizing the runtime model lookup. The logic is sound: `resolveSessionModelRef` gets the runtime model, then `lookupContextTokens` returns its context window. However, an identical pattern in `sessions.ts:265` was not updated, creating an inconsistency where the two commands may show different context limits for the same session.
  • Check `src/commands/sessions.ts:265`: it needs the same priority swap for consistency

Last reviewed commit: 6c49fda

…n value

Closes openclaw#23122

When a session's model is changed at runtime (e.g. via model override),
entry.contextTokens still holds the value from session creation time.
Prioritize lookupContextTokens(resolvedModel) so the status display
reflects the actual runtime model's context window.

Co-authored-by: Cursor <cursoragent@cursor.com>
openclaw-barnacle bot added the `commands` (Command implementations) and `size: XS` labels on Feb 22, 2026

greptile-apps bot left a comment

1 file reviewed, 1 comment


greptile-apps bot commented Feb 22, 2026

Additional Comments (1)

`src/commands/sessions.ts`, line 265:

Inconsistent with the fix in `status.summary.ts:135`. This should also prioritize `lookupContextTokens(model)` first to show the runtime model's context limit.

```typescript
    const contextTokens = lookupContextTokens(model) ?? row.contextTokens ?? configContextTokens;
```


Development

Successfully merging this pull request may close these issues.

Context limit in status display shows wrong value when runtime model differs from default

2 participants