GPT-5.4 Fast mode often feels no faster than Standard, but still consumes credits at 2x #18692

@GGBondBlueWhale

Description

What version of the Codex App are you using (From “About Codex” dialog)?

Version 26.415.40636 (1799)

What subscription do you have?

Pro

What platform is your computer?

macOS

What issue are you seeing?

I’m using Codex with GPT-5.4 and have Fast mode turned on, but in practice it often feels slow and is sometimes not meaningfully faster than Standard at all.

The current UI says Fast mode is about 1.5x faster while consuming credits at 2x, but in my recent real-world use the speed difference is often negligible. That makes the pricing feel unfair during periods of apparent server congestion or degraded Fast-mode capacity.

Screenshot attached showing Fast mode selected.

What steps can reproduce the bug?

  1. Open Codex App.
  2. Select GPT-5.4.
  3. Enable Fast mode from the speed menu.
  4. Use Codex normally across multiple prompts/tasks.
  5. Compare the perceived latency with Standard mode.

In my case, Fast mode often falls well short of the advertised 1.5x improvement while still charging at the higher 2x usage rate.
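For what it's worth, the comparison in step 5 doesn't have to stay "perceived": a small timing harness can put numbers on it. This is a hypothetical sketch, not part of the Codex App; `run_prompt` is a stand-in for however you drive each mode, stubbed here with `time.sleep`:

```python
import statistics
import time


def time_trials(run_prompt, prompts):
    """Wall-clock each prompt once; return a list of latencies in seconds."""
    latencies = []
    for prompt in prompts:
        start = time.perf_counter()
        run_prompt(prompt)
        latencies.append(time.perf_counter() - start)
    return latencies


def speedup(standard_latencies, fast_latencies):
    """Median-based speedup of Fast over Standard (>1.0 means Fast is faster)."""
    return statistics.median(standard_latencies) / statistics.median(fast_latencies)


if __name__ == "__main__":
    # Stub runners; replace the lambdas with real calls in each mode.
    std = time_trials(lambda p: time.sleep(0.03), ["task"] * 5)
    fast = time_trials(lambda p: time.sleep(0.02), ["task"] * 5)
    print(f"Fast speedup: {speedup(std, fast):.2f}x (advertised: ~1.5x)")
```

Using the median rather than the mean keeps a single congested outlier request from dominating the comparison.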

What is the expected behavior?

If Fast mode is charging 2x credits, it should consistently provide a clearly noticeable speed benefit over Standard.

If the backend is congested and Fast mode cannot provide something close to its normal performance, then I think one of these should happen instead:

  • temporarily do not charge the full 2x rate, or
  • show a clear indicator that Fast capacity is currently degraded, or
  • automatically fall back to a lower billing multiplier when the speed benefit is not actually being delivered.

Additional information

I also want to ask whether other Pro users are seeing the same thing recently.

This seems related to Fast-mode billing / visibility discussions, but my main complaint here is specifically about the combination of:

  • very little real speed improvement,
  • GPT-5.4 feeling slow even with Fast enabled, and
  • still being charged at the higher 2x Fast-mode rate.

From a user perspective, paying 2x while not actually receiving the expected speedup feels pretty bad.

I also believe OpenAI has a responsibility to refund all Codex usage consumed by Pro and Plus users during periods when Fast mode performance is abnormally degraded.

    Labels

    bug (Something isn't working), rate-limits (Issues related to rate limits, quotas, and token usage reporting)
