Summary
I am a paying Codex user and rely on it for real daily development work.
Over the last two days, Codex / GPT-5.5 has become noticeably worse: more confused, less reliable, and less useful for tasks that it handled better before.
This is not a minor annoyance. My Codex subscription renewal is already due, and this degradation is directly affecting my decision to renew. I am seriously considering whether to keep paying for Codex or switch my coding workflow to Claude Code instead.
What I am seeing
- Codex feels significantly less intelligent than usual.
- It loses the thread more often.
- It needs more corrections for the same type of work.
- It gives lower-quality answers than it did a few days ago.
- It feels unreliable for real daily work.
- The degradation is strong enough that I am questioning whether I should renew my plan.
Why this matters
I use Codex as part of my real work, not just for testing or casual usage.
My USD 200/month ChatGPT Pro / Codex subscription renewal is now due, and I need to decide whether it is worth paying for another month.
Right now, Codex feels noticeably degraded, especially with GPT-5.5. Before renewing, I need to know whether this is:
- a known temporary service-side degradation,
- a GPT-5.5 model regression,
- a Codex app / Windows issue,
- an account-specific issue,
- or something wrong with my local setup.
Expected behavior
Codex should remain reliable enough for paid daily development work.
If there is a temporary degradation or known regression, users should be informed clearly, especially when it affects paid renewal decisions.
Actual behavior
During the last two days, Codex / GPT-5.5 has felt much worse than usual and less dependable for the same type of development tasks I normally perform.
Environment
- OS: Microsoft Windows 11 Pro
- OS version: 10.0.26200
- OS build: 26200
- System type: x64-based PC
- Device: HP Pavilion Gaming Laptop 15-dk1xxx
- Codex app version: 26.506.3741.0
- Codex CLI version: 0.130.0-alpha.5
- Codex executable path: C:\Users\Pc\AppData\Local\OpenAI\Codex\bin\codex.exe
- PowerShell version: 7.6.1
- Model: GPT-5.5 / Codex
- Subscription: ChatGPT Pro, USD 200/month, using Codex, renewal currently due
Request
Please clarify whether this is a known issue affecting Codex / GPT-5.5 users, or whether I should investigate something specific to my local Windows installation or account.
This directly affects whether I renew my Codex subscription or move my coding workflow to Claude Code.