
Conversation

@xl-openai (Collaborator) commented Feb 10, 2026

Added multi-limit support end-to-end by carrying limit_name in rate-limit snapshots and handling multiple buckets instead of only codex.

  • Extended /usage client parsing to consume additional_rate_limits.
  • Updated the TUI /status view and in-memory state to store and render per-limit snapshots.
  • Extended the app-server rate-limit read response: kept rate_limits and added rate_limits_by_name.
  • Adjusted usage-limit error messaging for non-default codex limit buckets.
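The shape described above can be sketched roughly as follows. This is a minimal illustration with hypothetical field names (the actual snapshot type in the PR likely carries window/reset data as well); only `limit_name` and the "codex is the canonical bucket" convention come from the discussion below.

```rust
// Sketch (hypothetical field set): a rate-limit snapshot that carries the
// bucket name, so callers can key and render multiple buckets, not just "codex".
#[derive(Clone, Debug, PartialEq)]
pub struct RateLimitSnapshot {
    /// None for the legacy/default bucket; Some("codex_other"), etc. otherwise.
    pub limit_name: Option<String>,
    /// Percent of the primary window consumed.
    pub primary_used_percent: f64,
}

/// Pick the canonical "codex" bucket that legacy clients read.
pub fn canonical_snapshot(snapshots: &[RateLimitSnapshot]) -> Option<RateLimitSnapshot> {
    snapshots
        .iter()
        .find(|s| s.limit_name.as_deref() == Some("codex"))
        .cloned()
}

fn main() {
    let snapshots = vec![
        RateLimitSnapshot { limit_name: Some("codex".to_string()), primary_used_percent: 40.0 },
        RateLimitSnapshot { limit_name: Some("codex_other".to_string()), primary_used_percent: 75.0 },
    ];
    assert_eq!(
        canonical_snapshot(&snapshots).and_then(|s| s.limit_name),
        Some("codex".to_string())
    );
}
```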

@xl-openai xl-openai force-pushed the xl/skill-limit branch 3 times, most recently from f0abfb0 to 5a1de97 Compare February 10, 2026 06:57
@xl-openai xl-openai marked this pull request as ready for review February 10, 2026 06:57
@chatgpt-codex-connector bot (Contributor) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 5a1de97fb3

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines 1584 to 1586
.find(|snapshot| snapshot.limit_name.as_deref() == Some("codex"))
.cloned()
.unwrap_or_else(|| snapshots[0].clone());
P1: Fall back when codex snapshot has no limit data

This selection always prefers the "codex" entry, but get_rate_limits_many now always includes a synthetic codex snapshot even when /usage returns rate_limit: null and only non-codex buckets have real window data. In that case rate_limits in GetAccountRateLimitsResponse is returned empty while rate_limits_by_name has populated limits, which regresses legacy clients that still read only rate_limits. Prefer codex only when it actually contains window/credit data, otherwise fall back to a populated snapshot.
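The reviewer's suggested selection could look roughly like the sketch below. The type and field names here are hypothetical stand-ins (`has_window_data` abbreviates "window/credit fields are populated"); only the "prefer codex, but only when populated" logic comes from the comment above.

```rust
// Sketch of the suggested fallback: prefer the "codex" snapshot only when it
// actually carries window data; otherwise fall back to the first snapshot that
// does, so the legacy rate_limits field is never empty while
// rate_limits_by_name is populated.
#[derive(Clone, Debug)]
pub struct Snapshot {
    pub limit_name: Option<String>,
    pub has_window_data: bool, // stands in for "window/credit fields are populated"
}

pub fn pick_legacy_snapshot(snapshots: &[Snapshot]) -> Option<Snapshot> {
    snapshots
        .iter()
        .find(|s| s.limit_name.as_deref() == Some("codex") && s.has_window_data)
        .or_else(|| snapshots.iter().find(|s| s.has_window_data))
        .cloned()
}

fn main() {
    // A synthetic, empty codex snapshot plus a populated non-codex bucket:
    let snapshots = vec![
        Snapshot { limit_name: Some("codex".to_string()), has_window_data: false },
        Snapshot { limit_name: Some("codex_other".to_string()), has_window_data: true },
    ];
    let picked = pick_legacy_snapshot(&snapshots).expect("populated snapshot");
    assert_eq!(picked.limit_name.as_deref(), Some("codex_other"));
}
```

Note the author ultimately declined this change (reply below the suggestion), keeping `rate_limits` strictly as the codex bucket.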


@xl-openai (Author) replied:

rate_limits is intentionally the canonical codex bucket for legacy clients; falling back to additional buckets would change its semantics and misrepresent what legacy clients are actually able to interpret.

HeaderValue::from_static("1704074400"),
);

let snapshot = parse_rate_limit_for_limit(&headers, Some("codex_other")).expect("snapshot");

P2: Use matching limit name in secondary-header parser test

This test sets x-codex-secondary-* headers but calls parse_rate_limit_for_limit(..., Some("codex_other")), so the parser looks for x-codex-other-* keys and cannot populate primary, making the subsequent expect("primary") fail. The limit name and header prefix need to match (for example codex_secondary with x-codex-secondary-*).
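The mismatch comes down to how a limit name maps to a header prefix. A minimal sketch, assuming the convention implied by the comment (underscores in the limit name become hyphens in the header family); `header_prefix_for_limit` is a hypothetical helper, not a function from the PR:

```rust
// Hypothetical mapping from a limit name to the header prefix the parser
// scans for, assuming underscores become hyphens (so "codex_other" looks for
// x-codex-other-* headers, never x-codex-secondary-*).
fn header_prefix_for_limit(limit_name: &str) -> String {
    format!("x-{}", limit_name.replace('_', "-"))
}

fn main() {
    // The test sets x-codex-secondary-* headers, so it must pass
    // "codex_secondary", not "codex_other":
    assert_eq!(header_prefix_for_limit("codex_other"), "x-codex-other");
    assert_eq!(header_prefix_for_limit("codex_secondary"), "x-codex-secondary");
}
```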


@xl-openai xl-openai force-pushed the xl/skill-limit branch 6 times, most recently from 9f74d29 to dd06980 Compare February 11, 2026 01:52
@xl-openai (Collaborator, Author) commented:

@codex review

@chatgpt-codex-connector bot (Contributor) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: dd069802d6


Comment on lines +1571 to +1572
self.rate_limit_snapshots_by_limit_id
.insert(limit_id, display);

P2: Prune stale limit buckets when applying updates

This path only upserts a single bucket into rate_limit_snapshots_by_limit_id, and buckets are cleared only when snapshot is None. With the new multi-limit polling flow, snapshots are emitted one-by-one from /usage, so if a bucket disappears from a later response (for example, codex_other is no longer returned), there is no tombstone and the old entry remains in memory, causing /status to keep rendering stale limits indefinitely.
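One way to address this (names are hypothetical stand-ins for the PR's `rate_limit_snapshots_by_limit_id` map) is to rebuild the map from each full response rather than upserting buckets one at a time, so a bucket absent from a later response is pruned instead of lingering:

```rust
use std::collections::HashMap;

// Sketch: replace the whole per-limit map from the latest /usage response.
// Clearing first acts as the missing "tombstone" for buckets (e.g.
// codex_other) that a later response no longer returns.
fn apply_snapshots(
    by_limit_id: &mut HashMap<String, String>,
    latest: &[(String, String)], // (limit_id, rendered display) from one response
) {
    by_limit_id.clear();
    for (limit_id, display) in latest {
        by_limit_id.insert(limit_id.clone(), display.clone());
    }
}

fn main() {
    let mut map = HashMap::new();
    apply_snapshots(&mut map, &[
        ("codex".to_string(), "40% used".to_string()),
        ("codex_other".to_string(), "75% used".to_string()),
    ]);
    assert_eq!(map.len(), 2);
    // A later response that drops codex_other prunes the stale entry.
    apply_snapshots(&mut map, &[("codex".to_string(), "41% used".to_string())]);
    assert!(!map.contains_key("codex_other"));
}
```

This only works if each poll yields the complete bucket set; with the PR's one-snapshot-at-a-time emission, clearing on every update would be wrong, which is presumably why the author deferred the fix.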


@xl-openai (Author) replied:

I think it is OK for now.

Comment on lines +248 to +249
let suffix = "-primary-used-percent";
let prefix = header_name.strip_suffix(suffix)?;

P2: Detect non-default limit ids beyond primary headers

parse_rate_limit_updates discovers additional buckets via header_name_to_limit_id, but this helper only matches headers ending in -primary-used-percent. If a non-default bucket is reported with only secondary-window headers, that bucket is never discovered, so no snapshot is emitted and consumers miss the update entirely. Limit-id discovery should include secondary header families (or use a broader prefix-based detection) to avoid dropping valid buckets.
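The broader detection the reviewer suggests could look like the sketch below. The header-name-to-limit-id convention (strip `x-`, hyphens to underscores) is an assumption inferred from the examples in this thread, not code from the PR:

```rust
// Sketch: discover a limit id from either the primary or the secondary header
// family, instead of matching only "-primary-used-percent", so a bucket
// reported with only secondary-window headers is still emitted.
fn header_name_to_limit_id(header_name: &str) -> Option<String> {
    const SUFFIXES: [&str; 2] = ["-primary-used-percent", "-secondary-used-percent"];
    for suffix in SUFFIXES {
        if let Some(prefix) = header_name.strip_suffix(suffix) {
            // e.g. "x-codex-other" -> "codex_other"
            return Some(prefix.strip_prefix("x-")?.replace('-', "_"));
        }
    }
    None
}

fn main() {
    assert_eq!(
        header_name_to_limit_id("x-codex-other-secondary-used-percent"),
        Some("codex_other".to_string())
    );
    assert_eq!(
        header_name_to_limit_id("x-codex-primary-used-percent"),
        Some("codex".to_string())
    );
    assert_eq!(header_name_to_limit_id("content-type"), None);
}
```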


@xl-openai (Author) replied:

This should not happen.

Comment on lines 2209 to 2210
limit_id: None,
limit_name: None,
Collaborator:

Are the core rate limits name/id-less?

Collaborator:

OK, I see: you store them in rate_limits.

@xl-openai (Author) replied:

I can update the test with a real value of limit_id/name.

Collaborator:

Do we still update the v1 schemas?

@xl-openai (Author) replied:

It must have been auto-generated by some command I ran. Let me revert.

@xl-openai (Author) replied:

Actually, it was just write-app-server-schema; it will still generate v1 (otherwise CI will fail).

impl std::fmt::Display for UsageLimitReachedError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        // Pro users might hit a non-standard codex metered bucket.
        if matches!(self.plan_type, Some(PlanType::Known(KnownPlan::Pro)))
Collaborator:

Does it have to be gated to Pro, or can it be more general?

@xl-openai (Author) replied:

Good call!
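Dropping the plan gate might look like the sketch below: key the message off the limit name carried by the error rather than off `PlanType`, so any plan that hits a non-"codex" bucket gets the bucket-specific wording. The struct here is a pared-down hypothetical, not the PR's actual `UsageLimitReachedError`:

```rust
// Sketch: generalize the usage-limit message by branching on the limit name
// instead of gating on KnownPlan::Pro.
#[derive(Debug)]
struct UsageLimitReachedError {
    limit_name: Option<String>,
}

impl std::fmt::Display for UsageLimitReachedError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self.limit_name.as_deref() {
            // Any non-default bucket gets a bucket-specific message,
            // regardless of plan.
            Some(name) if name != "codex" => {
                write!(f, "You've hit the usage limit for the '{name}' bucket.")
            }
            _ => write!(f, "You've hit your usage limit."),
        }
    }
}

fn main() {
    let err = UsageLimitReachedError { limit_name: Some("codex_other".to_string()) };
    assert_eq!(
        err.to_string(),
        "You've hit the usage limit for the 'codex_other' bucket."
    );
}
```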

@xl-openai xl-openai merged commit fdd0cd1 into main Feb 11, 2026
36 of 38 checks passed
@xl-openai xl-openai deleted the xl/skill-limit branch February 11, 2026 04:09
@github-actions github-actions bot locked and limited conversation to collaborators Feb 11, 2026