Codex is less truthful lately - it says it will do something and it does not do it #6439

@PaulSolt

Description

What version of Codex is running?

codex-cli 0.56.0

What subscription do you have?

ChatGPT Pro

Which model were you using?

gpt-5-codex high

What platform is your computer?

Darwin 24.6.0 arm64 arm

What issue are you seeing?

Codex has been less truthful lately. It will say it's doing something, then just stop and not do it, and I have to ask again. Or it will say something is done when it clearly isn't.

It feels like it isn't as capable all the time, as if it reverts to less capable models.

What steps can reproduce the bug?

Uploaded thread ID: 019a68af-2949-7223-a154-73673f11b2f0

What is the expected behavior?

It should follow through. It should not say it's reading something and then not read it. I've had instances where it told me it read a file but never made a tool call. When I questioned its understanding afterward, it scored itself 60% truthful, which confirmed it had not read what I asked it to.

Additional information

I don't like working with less intelligent or less truthful models. I want them to do the work they say they will, not hand-wave. That's what Claude does, and it is so frustrating.

Metadata

    Labels

    bug (Something isn't working), model-behavior (Issues related to behaviors exhibited by the model)
