
fix(ce-work-beta): defer model and reasoning effort to Codex config#704

Merged
tmchow merged 2 commits into main from tmchow/ce-work-beta-gpt-check on Apr 27, 2026

Conversation

Collaborator

@tmchow tmchow commented Apr 27, 2026

Summary

  • work_delegate_model and work_delegate_effort are now optional with no baked-in default. When unset, the skill omits -m and -c 'model_reasoning_effort=...' from codex exec so Codex resolves from ~/.codex/config.toml (and ultimately the CLI's own default).
  • Users can still pin a model or effort via .compound-engineering/config.local.yaml if they want to deviate from their global Codex config — that path is unchanged.
  • Example configs and the ce-setup template now describe the keys as "omit to use ~/.codex/config.toml default" rather than advertising gpt-5.4/high as our defaults.

The previous behavior created two ongoing problems: chasing new model names inside skill content as Codex evolves, and silently overriding whatever default the user had already configured for Codex globally.
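The conditional-flag behavior can be sketched in POSIX shell. This is an illustrative sketch only: `build_delegate_flags` and its arguments are hypothetical names, not the skill's actual implementation.

```shell
# Hypothetical helper: an empty string stands for "unset" in skill state.
build_delegate_flags() {
  model="$1"   # work_delegate_model value, or empty when unset
  effort="$2"  # work_delegate_effort value, or empty when unset
  flags=""
  if [ -n "$model" ]; then
    flags="$flags -m \"$model\""
  fi
  if [ -n "$effort" ]; then
    flags="$flags -c 'model_reasoning_effort=\"$effort\"'"
  fi
  printf '%s' "$flags"
}

# Both keys unset: no flags, so Codex resolves from ~/.codex/config.toml.
printf 'codex exec%s "<prompt>"\n' "$(build_delegate_flags "" "")"
# Both keys pinned in .compound-engineering/config.local.yaml:
printf 'codex exec%s "<prompt>"\n' "$(build_delegate_flags "gpt-5.4" "medium")"
```

The key design point is that an unset key contributes nothing to the command line, rather than a default value.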

Files changed

  • plugins/compound-engineering/skills/ce-work-beta/SKILL.md — config keys reframed as optional; resolved state may be unset.
  • plugins/compound-engineering/skills/ce-work-beta/references/codex-delegation-workflow.md — `-m` and `-c` lines lifted out of the `codex exec` template into a "Conditional flags" block that inserts each line only when the corresponding skill-state value is set.
  • .compound-engineering/config.local.example.yaml, plugins/compound-engineering/skills/ce-setup/references/config-template.yaml — comments updated.

Stable/beta sync: not propagating to ce-work — codex delegation only exists in beta.

Test plan

  • With no work_delegate_model/work_delegate_effort in config: confirm codex exec invocation has neither -m nor -c 'model_reasoning_effort=...' (Codex picks up ~/.codex/config.toml).
  • With work_delegate_model: gpt-5.4 set: confirm -m "gpt-5.4" is included.
  • With work_delegate_effort: medium set: confirm -c 'model_reasoning_effort="medium"' is included.
  • bun run release:validate passes (already verified locally).
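The first three checks can be approximated with simple string assertions, assuming the rendered `codex exec` line has been captured into a variable (the capture step itself is not part of this PR, and `$cmd` below is a stand-in):

```shell
# Hypothetical checks mirroring the test plan above.
assert_contains() { case "$2" in *"$1"*) ;; *) echo "FAIL: missing $1"; exit 1 ;; esac; }
assert_absent()   { case "$2" in *"$1"*) echo "FAIL: unexpected $1"; exit 1 ;; esac; }

# No delegate keys in config: neither flag should appear.
cmd='codex exec "<prompt>"'
assert_absent ' -m ' "$cmd"
assert_absent 'model_reasoning_effort' "$cmd"

# work_delegate_model: gpt-5.4
cmd='codex exec -m "gpt-5.4" "<prompt>"'
assert_contains '-m "gpt-5.4"' "$cmd"

# work_delegate_effort: medium
cmd="codex exec -c 'model_reasoning_effort=\"medium\"' \"<prompt>\""
assert_contains "model_reasoning_effort=\"medium\"" "$cmd"
echo "all checks passed"
```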

🤖 Generated with Claude Code

Drop the baked-in `gpt-5.4` model default and `high` reasoning-effort
default from `work_delegate_model` and `work_delegate_effort`. The skill
no longer passes `-m` or `-c 'model_reasoning_effort=...'` to `codex exec`
when the user has not set these in `.compound-engineering/config.local.yaml` --
Codex then resolves from `~/.codex/config.toml` (and ultimately the
CLI's own default).

Avoids two ongoing problems: chasing new model names in skill content
as Codex evolves, and silently overriding whatever default the user
configured globally for Codex.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 649fc6a475

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread: plugins/compound-engineering/skills/ce-work-beta/SKILL.md (Outdated)
The previous edit dropped hard defaults for `work_delegate_model` and
`work_delegate_effort` but left the global "fall through to the hard
default" rule unmodified. A typo such as `work_delegate_effort: hgh`
had no deterministic fallback and could lead the orchestrator to pass
an invalid effort flag through to `codex exec`.

State explicitly that unrecognized or unparseable values for these two
keys resolve to unset, and that the corresponding flag must be omitted
from the `codex exec` invocation rather than substituted with the
invalid value.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
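A deterministic fallback along the lines this review suggests could look like the sketch below. The function name and the accepted value set are assumptions for illustration, not taken from the skill or from Codex documentation.

```shell
# Assumed sketch: normalize work_delegate_effort to a recognized value, or
# to empty ("unset"). The accepted set here is illustrative only.
normalize_effort() {
  case "$1" in
    minimal|low|medium|high) printf '%s' "$1" ;;
    *) printf '' ;;  # e.g. the typo "hgh" resolves to unset; flag is omitted
  esac
}

normalize_effort "medium"  # prints "medium"
echo
normalize_effort "hgh"     # prints nothing: omit the -c flag entirely
```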
@tmchow tmchow merged commit 4b5f28d into main Apr 27, 2026
2 checks passed
@github-actions github-actions Bot mentioned this pull request Apr 27, 2026
michaelvolz pushed a commit to michaelvolz/compound-engineering-plugin-windows-version that referenced this pull request Apr 28, 2026
…veryInc#704)

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>