checkpoint: into wallentx/termux-target from release/0.126.0 @ d763bfd051ab #46
Merged
wallentx merged 26 commits into wallentx/termux-target on Apr 25, 2026
Conversation
## Why

`MultiAgentV2` follow-up messages are delivered to agents as assistant-authored `InterAgentCommunication` envelopes. When `followup_task` used `interrupt: true`, the interrupted-turn guidance was still persisted as a contextual user message, so model-visible history made a system-generated interruption boundary look user-authored. This keeps interruption guidance consistent with the rest of the v2 inter-agent message stream while preserving the legacy marker shape for non-v2 sessions.

## What changed

- Make `interrupted_turn_history_marker` feature-aware.
- Record the interrupted-turn marker as an assistant `OutputText` message when `Feature::MultiAgentV2` is enabled.
- Keep the existing user contextual fragment for non-v2 sessions.
- Apply the same feature-aware marker to interrupted fork snapshots.
- Add coverage for the live `followup_task` interrupt path and the helper-level v2 marker shape.

## Testing

- `cargo test -p codex-core multi_agent_v2_followup_task_interrupts_busy_child_without_losing_message -- --nocapture`
- `cargo test -p codex-core multi_agent_v2_interrupted_marker_uses_assistant_output_message -- --nocapture`
- `cargo test -p codex-core interrupted_fork_snapshot -- --nocapture`
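The feature-aware marker choice described above can be sketched roughly as follows. The types here are hypothetical stand-ins, not the real `codex-core` items: the point is only that the same guidance text is recorded under a different authorship shape depending on the feature flag.

```rust
// Sketch (hypothetical types): the interrupted-turn marker keeps the same
// guidance text but changes shape with the MultiAgentV2 feature flag, so a
// system-generated interruption boundary is never presented as user-authored
// in v2 sessions.

#[derive(Debug, PartialEq)]
enum HistoryItem {
    AssistantOutputText(String), // v2 marker shape
    UserContextual(String),      // legacy marker shape
}

fn interrupted_turn_history_marker(multi_agent_v2: bool, guidance: &str) -> HistoryItem {
    if multi_agent_v2 {
        HistoryItem::AssistantOutputText(guidance.to_string())
    } else {
        HistoryItem::UserContextual(guidance.to_string())
    }
}

fn main() {
    let g = "The previous turn was interrupted.";
    assert_eq!(
        interrupted_turn_history_marker(true, g),
        HistoryItem::AssistantOutputText(g.to_string())
    );
    assert_eq!(
        interrupted_turn_history_marker(false, g),
        HistoryItem::UserContextual(g.to_string())
    );
    println!("ok");
}
```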
…9360)

## Summary

- Thread `agent_max_threads` into `ToolsConfig` and `SpawnAgentToolOptions`.
- Render the configured `max_concurrent_threads_per_session` value in the MultiAgentV2 `spawn_agent` description.
- Cover the description text in `codex-tools` unit tests and `codex-core` tool spec tests.

## Validation

- `just fmt`
- `cargo test -p codex-tools`
- `cargo test -p codex-core spawn_agent_description`
- `git diff --check`

## Notes

- `cargo test -p codex-core` was also attempted, but unrelated environment-sensitive tests failed in the active local environment. Examples: approvals reviewer defaults observed `AutoReview` instead of `User`, request-permissions event tests did not emit events, and proxy-env tests saw `http://127.0.0.1:50604` from the active proxy environment.

Co-authored-by: Codex <noreply@openai.com>
## Why

Agent interruptions currently always persist a model-visible interrupted-turn marker before emitting `TurnAborted`. That marker is useful by default because it gives the next model turn context about a deliberately interrupted task, but some deployments need to suppress that history injection entirely while still keeping the client-visible interruption event.

## What changed

- Add `[agents] interrupt_message = false` to disable the model-visible interrupted-turn marker.
- Resolve the setting into `Config::agent_interrupt_message_enabled`, defaulting to `true` so existing behavior is unchanged.
- Apply the setting to both live interrupted turns and interrupted fork snapshots.
- Keep emitting `TurnAborted` even when the history marker is disabled.
- Regenerate `core/config.schema.json` for the new `agents.interrupt_message` field.

## Testing

- `cargo test -p codex-core load_config_resolves_agent_interrupt_message -- --nocapture`
- `cargo test -p codex-core disabled_interrupted_fork_snapshot_appends_only_interrupt_event -- --nocapture`
- `cargo test -p codex-core multi_agent_v2_interrupted_marker_uses_developer_input_message -- --nocapture`
- `cargo test -p codex-core multi_agent_v2_followup_task_can_disable_interrupted_marker -- --nocapture`
- `cargo test -p codex-core multi_agent_v2_followup_task_interrupts_busy_child_without_losing_message -- --nocapture`
- `cargo check -p codex-core`
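The default-to-`true` resolution described above can be sketched like this. The struct and field names are placeholders for illustration, not the real config types; the behavior shown is just "absent table or absent key means enabled":

```rust
// Sketch (hypothetical types) of resolving an optional
// [agents].interrupt_message config key into an effective boolean that
// defaults to true, so omitting the key preserves existing behavior.

struct AgentsToml {
    interrupt_message: Option<bool>, // None when the key is absent
}

fn resolve_interrupt_message(agents: Option<&AgentsToml>) -> bool {
    agents.and_then(|a| a.interrupt_message).unwrap_or(true)
}

fn main() {
    // No [agents] table at all: marker stays enabled.
    assert!(resolve_interrupt_message(None));
    // Table present but key omitted: still enabled.
    assert!(resolve_interrupt_message(Some(&AgentsToml { interrupt_message: None })));
    // Explicit opt-out disables the marker.
    assert!(!resolve_interrupt_message(Some(&AgentsToml { interrupt_message: Some(false) })));
    println!("ok");
}
```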
Fix a bug where the `turn/interrupt` RPC hangs when interrupting a turn that has already completed.

Before this change, `turn/interrupt` requests were queued in app-server and only answered when a later `TurnAborted` event arrived. If the target turn was already complete, core treated `Op::Interrupt` as a no-op, so no abort event was emitted and the RPC could hang indefinitely.

This change fixes that in two places:

* Reject `turn/interrupt` immediately with `INVALID_REQUEST` when the requested turn is no longer the active turn.
* Resolve any already-accepted pending interrupt requests when the turn reaches `TurnComplete`, covering the case where a turn finishes naturally after the interrupt request is accepted but before it aborts.

I tested this by adding a failing test in 707487c. You may view the results here: https://github.com/openai/codex/actions/runs/24585182419/

<img width="1512" height="310" alt="CleanShot 2026-04-17 at 16 33 30@2x" src="https://github.com/user-attachments/assets/f4a88228-b2a4-41f4-9aaa-ec82814096af" />
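The two fixes above can be sketched together as a small request tracker. All names here are hypothetical (the actual app-server wiring differs); the sketch only shows why neither path can leave a request unanswered:

```rust
use std::collections::HashMap;

// Sketch (hypothetical types): reject turn/interrupt when the target turn is
// not the active turn, and drain accepted-but-pending interrupt requests when
// the turn reaches TurnComplete, so no RPC waits forever for a TurnAborted
// that will never come.

#[derive(Debug, PartialEq)]
enum RpcOutcome {
    Accepted,       // will be answered on TurnAborted or TurnComplete
    InvalidRequest, // turn already finished (or never existed)
}

struct InterruptTracker {
    active_turn: Option<String>,
    pending: HashMap<String, Vec<u64>>, // turn id -> request ids awaiting a reply
}

impl InterruptTracker {
    fn on_interrupt(&mut self, turn: &str, request_id: u64) -> RpcOutcome {
        if self.active_turn.as_deref() != Some(turn) {
            return RpcOutcome::InvalidRequest; // fail fast instead of queueing
        }
        self.pending.entry(turn.to_string()).or_default().push(request_id);
        RpcOutcome::Accepted
    }

    /// Returns the request ids that must be answered now that the turn is done.
    fn on_turn_complete(&mut self, turn: &str) -> Vec<u64> {
        if self.active_turn.as_deref() == Some(turn) {
            self.active_turn = None;
        }
        self.pending.remove(turn).unwrap_or_default()
    }
}

fn main() {
    let mut t = InterruptTracker { active_turn: Some("t1".into()), pending: HashMap::new() };
    assert_eq!(t.on_interrupt("t0", 1), RpcOutcome::InvalidRequest); // stale turn
    assert_eq!(t.on_interrupt("t1", 2), RpcOutcome::Accepted);
    assert_eq!(t.on_turn_complete("t1"), vec![2u64]); // pending request resolved
    assert_eq!(t.on_interrupt("t1", 3), RpcOutcome::InvalidRequest); // no longer active
    println!("ok");
}
```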
Respects the workspace setting for plugins in Codex. When plugins are disabled by that setting, the Plugins menu disappears and plugins do not load, including in the composer.

No plugins loaded:

<img width="809" height="226" alt="Screenshot 2026-04-23 at 3 20 45 PM" src="https://github.com/user-attachments/assets/3a4dba8e-69c3-4046-a77e-f13ab77f84b4" />

No plugins in menu:

<img width="293" height="204" alt="Screenshot 2026-04-23 at 3 20 35 PM" src="https://github.com/user-attachments/assets/5cb9bf52-ad72-488f-b90c-5eb457da09a3" />
## Why

The elevated Windows command runner currently trusts the first process that connects to its parent-created named pipes. Tightening the pipe ACL already narrows who can reach that boundary, but verifying the connected client PID gives the parent one more fail-closed check: it only accepts the exact runner process it just spawned.

## What changed

- Validate `GetNamedPipeClientProcessId` after `ConnectNamedPipe` and reject clients whose PID does not match the spawned runner.
- De-duplicate code to route the one-shot elevated capture flow in `windows-sandbox-rs/src/elevated_impl.rs` through `spawn_runner_transport()` so both elevated codepaths use the same pipe bootstrap and PID validation.

Using the transport unification here also reduces duplication in the elevated Windows IPC bootstrap, so future hardening to the runner handshake only needs to land in one place.

## Validation

- `cargo test -p codex-windows-sandbox`
- Manual testing: one-shot elevated path via `target/debug/codex.exe exec` running a randomized shell command and confirming captured output.
- Manual testing: elevated session path via `target/debug/codex.exe -c 'windows.sandbox="elevated"' sandbox windows -- python -u -c ...` with stdin/stdout round-trips (`READY`, then `GOT:...` for two input lines).

---------

Co-authored-by: viyatb-oai <viyatb@openai.com>
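The fail-closed PID check can be sketched platform-independently. This deliberately omits the real Windows API calls (`ConnectNamedPipe`, `GetNamedPipeClientProcessId`); the function name and error strings are illustrative only:

```rust
// Sketch of the fail-closed validation described above: after the pipe
// connects, the client PID reported by the transport must exactly match the
// runner PID the parent just spawned. Any mismatch, or any failure to obtain
// the PID at all, rejects the connection.

fn validate_pipe_client(
    spawned_runner_pid: u32,
    reported_client_pid: Option<u32>,
) -> Result<(), &'static str> {
    match reported_client_pid {
        Some(pid) if pid == spawned_runner_pid => Ok(()),
        Some(_) => Err("pipe client PID does not match spawned runner"),
        None => Err("could not determine pipe client PID"), // fail closed
    }
}

fn main() {
    assert!(validate_pipe_client(1234, Some(1234)).is_ok());
    assert!(validate_pipe_client(1234, Some(999)).is_err()); // wrong process
    assert!(validate_pipe_client(1234, None).is_err());      // unknown: reject
    println!("ok");
}
```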
## Summary

Updates the bundled OpenAI Docs system skill for GPT-5.5.

## Changes

- Updates the bundled latest-model fallback.
- Replaces bundled upgrade guidance with GPT-5.5 migration guidance.
- Replaces bundled prompting guidance with GPT-5.5 prompting guidance.

## Test plan

- Ran `node scripts/resolve-latest-model-info.js`
- Verified bundled files match the OpenAI Docs skill fallback content
## Summary

This PR hardens package-manager usage across the repo to reduce dependency supply-chain risk. It also removes the stale `codex-cli` Docker path, which was already broken on `main`, instead of keeping a bitrotted container workflow alive.

## What changed

- Updated pnpm package manager pins and workspace install settings.
- Removed stale `codex-cli` Docker assets instead of trying to keep a broken local container path alive.
- Added uv settings and lockfiles for the Python SDK packages.
- Updated Python SDK setup docs to use `uv sync`.

## Why

This is primarily a security hardening change. It reduces package-install and supply-chain risk by ensuring dependency installs go through pinned package managers, committed lockfiles, release-age settings, and reviewed build-script controls.

For `codex-cli`, the right follow-up was to remove the local Docker path rather than keep patching it:

- `codex-cli/Dockerfile` installed `codex.tgz` with `npm install -g`, which bypassed the repo lockfile and age-gated pnpm settings.
- The local `codex-cli/scripts/build_container.sh` helper was already broken on `main`: it called `pnpm run build`, but `codex-cli/package.json` does not define a `build` script.
- The container path itself had bitrotted enough that keeping it would require extra packaging-specific behavior that was not otherwise needed by the repo.

## Gaps addressed

- Global npm installs bypassed the repo lockfile in Docker and CLI reinstall paths, including `codex-cli/Dockerfile` and `codex-cli/bin/codex.js`.
- CI and Docker pnpm installs used `--frozen-lockfile`, but the repo was missing stricter pnpm workspace settings for dependency build scripts.
- Python SDK projects had `pyproject.toml` metadata but no committed `uv.lock` coverage or uv age/index settings in `sdk/python` and `sdk/python-runtime`.
- The secure devcontainer install path used npm/global install behavior without a local locked package-manager boundary.
- The local `codex-cli` Docker helper was already broken on `main`, so this PR removes that stale Docker path instead of preserving a broken surface.
- pnpm was already pinned, but not to the current repo-wide pnpm version target.

## Verification

- `pnpm install --frozen-lockfile`
- `.devcontainer/codex-install`: `pnpm install --prod --frozen-lockfile`
- `.devcontainer/codex-install`: `./node_modules/.bin/codex --version`
- `sdk/python`: `uv lock --check`, `uv sync --locked --all-extras --dry-run`, `uv build`
- `sdk/python-runtime`: `uv lock --check`, `uv sync --locked --dry-run`, `uv build --wheel`
- `pnpm -r --filter ./sdk/typescript run build`
- `pnpm -r --filter ./sdk/typescript run lint`
- `pnpm -r --filter ./sdk/typescript run test`
- `node --check codex-cli/bin/codex.js`
- `docker build -f .devcontainer/Dockerfile.secure -t codex-secure-test .`
- `cargo build -p codex-cli`
- Repo-wide package-manager audit.
## Why

`openai.gpt-5.4-cmb` is served through the Amazon Bedrock provider, whose request validator currently accepts `function` and `mcp` tool specs but rejects Responses `custom` tools. The CMB catalog entry reuses the bundled `gpt-5.4` metadata, which marks `apply_patch_tool_type` as `freeform`. That causes Codex to include an `apply_patch` tool with `type: "custom"`, so even heavily disabled sessions can fail before the model runs with:

```text
Invalid tools: unknown variant `custom`, expected `function` or `mcp`
```

This is provider-specific: the model should still expose `apply_patch`, but for Bedrock it needs to use the JSON/function tool shape instead of the freeform/custom shape.

## What Changed

- Override the `openai.gpt-5.4-cmb` static catalog entry to set `apply_patch_tool_type` to `function` after inheriting the rest of the `gpt-5.4` model metadata.
- Update the catalog test expectation so the CMB entry continues to track `gpt-5.4` metadata except for this Bedrock-specific tool shape override.

## Verification

- `cargo test -p codex-model-provider`
- `just fix -p codex-model-provider`
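The inherit-then-override pattern described above can be sketched as follows. The struct, field values, and function names are hypothetical placeholders, not the real `codex-model-provider` catalog types:

```rust
// Sketch (hypothetical types): the CMB catalog entry clones the gpt-5.4
// metadata wholesale, then replaces only the apply_patch tool shape so
// Bedrock receives a `function` tool instead of a Responses `custom` tool.

#[derive(Clone, Debug, PartialEq)]
enum ApplyPatchToolType {
    Freeform, // rendered as a Responses `custom` tool
    Function, // rendered as a JSON/function tool
}

#[derive(Clone, Debug)]
struct ModelMetadata {
    context_window: u64, // stands in for "all the other inherited fields"
    apply_patch_tool_type: ApplyPatchToolType,
}

fn cmb_entry(gpt_5_4: &ModelMetadata) -> ModelMetadata {
    let mut entry = gpt_5_4.clone(); // inherit everything else
    entry.apply_patch_tool_type = ApplyPatchToolType::Function; // Bedrock-specific override
    entry
}

fn main() {
    // Placeholder value, not the model's real context window.
    let base = ModelMetadata {
        context_window: 128_000,
        apply_patch_tool_type: ApplyPatchToolType::Freeform,
    };
    let cmb = cmb_entry(&base);
    assert_eq!(cmb.apply_patch_tool_type, ApplyPatchToolType::Function);
    assert_eq!(cmb.context_window, base.context_window); // everything else tracks gpt-5.4
    println!("ok");
}
```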
## Why

`thread/fork` responses intentionally include copied history so the caller can render the fork immediately, but `thread/started` is a lifecycle notification. The v2 `Thread` contract says notifications should return `turns: []`, and the fork path was reusing the response thread directly, causing copied turns to be emitted through `thread/started` as well.

## What Changed

- Route app-server `thread/started` notification construction through a helper that clears `thread.turns` before sending.
- Keep `thread/fork` responses unchanged so callers still receive copied history.
- Add persistent and ephemeral fork coverage that asserts `thread/started` emits an empty `turns` array while the response retains fork history.

## Testing

- `just fmt`
- `cargo test -p codex-app-server`
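The helper described above is, in essence, a copy-and-clear projection. The `Thread` type here is a minimal stand-in for illustration, not the real app-server protocol type:

```rust
// Sketch (hypothetical type): the thread/started notification is built from a
// copy of the response thread with `turns` cleared, while the thread/fork
// response keeps the copied history untouched.

#[derive(Clone)]
struct Thread {
    id: String,
    turns: Vec<String>, // stand-in for the real turn items
}

fn thread_started_notification(thread: &Thread) -> Thread {
    let mut t = thread.clone();
    t.turns.clear(); // lifecycle notifications carry `turns: []`
    t
}

fn main() {
    let forked = Thread {
        id: "t-fork".to_string(),
        turns: vec!["copied turn".to_string()],
    };
    let started = thread_started_notification(&forked);
    assert!(started.turns.is_empty());        // notification: empty turns
    assert_eq!(forked.turns.len(), 1);        // response retains fork history
    assert_eq!(started.id, forked.id);        // same thread identity
    println!("ok");
}
```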
## Summary

- Switch Unix socket app-server connections to perform the standard WebSocket HTTP Upgrade handshake.
- Update the Unix socket test to exercise a real upgrade over the Unix stream.
- Refresh the app-server README to describe the new Unix socket behavior.

## Testing

- `cargo test -p codex-app-server transport::unix_socket_tests`
- `just fmt`
- `git diff --check`
…nai#19170)

Selection menus in the TUI currently let disabled rows interfere with numbering and default focus. This makes mixed menus harder to read and can land selection on rows that are not actionable.

This change updates the shared selection-menu behavior in `list_selection_view` so disabled rows are not selected when these views open, and prevents them from being numbered like selectable rows.

- Disabled rows no longer receive numeric labels.
- Digit shortcuts map to enabled rows only.
- Default selection moves to the first enabled row in mixed menus.
- Updated the affected snapshot.
- Added snapshot coverage for a plugin detail error popup.
- Added a focused unit test for shared selection-view behavior.

--------- 

Co-authored-by: Codex <noreply@openai.com>
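The three behaviors above can be sketched as pure functions over a row list. The `Row` type and function names are hypothetical, not the real `list_selection_view` API:

```rust
// Sketch (hypothetical types) of the shared selection-view rules: only
// enabled rows receive numeric labels, digit shortcuts index into the enabled
// rows, and the default selection is the first enabled row.

struct Row {
    label: &'static str,
    enabled: bool,
}

/// Numeric label per row: disabled rows get None, enabled rows count up from 1.
fn numeric_labels(rows: &[Row]) -> Vec<Option<usize>> {
    let mut next = 1;
    rows.iter()
        .map(|r| {
            if r.enabled {
                let n = next;
                next += 1;
                Some(n)
            } else {
                None
            }
        })
        .collect()
}

/// Default focus: the first enabled row, if any.
fn default_selection(rows: &[Row]) -> Option<usize> {
    rows.iter().position(|r| r.enabled)
}

/// Digit shortcut `d` maps to the d-th enabled row's index in the full list.
fn digit_shortcut_target(rows: &[Row], digit: usize) -> Option<usize> {
    rows.iter()
        .enumerate()
        .filter(|(_, r)| r.enabled)
        .nth(digit.checked_sub(1)?)
        .map(|(i, _)| i)
}

fn main() {
    let rows = [
        Row { label: "Section header", enabled: false },
        Row { label: "Option A", enabled: true },
        Row { label: "Option B", enabled: true },
    ];
    assert_eq!(numeric_labels(&rows), vec![None, Some(1usize), Some(2)]);
    assert_eq!(default_selection(&rows), Some(1)); // skips the disabled row
    assert_eq!(digit_shortcut_target(&rows, 1), Some(1));
    assert_eq!(digit_shortcut_target(&rows, 3), None); // no third enabled row
    println!("ok: {}", rows[1].label);
}
```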
## Why

The profile conversion path still required a `cwd` even when it was only translating a legacy `SandboxPolicy` into a `PermissionProfile`. That made profile producers invent an ambient `cwd`, which is exactly the anchoring we are trying to remove from permission-profile data.

A legacy workspace-write policy can be represented symbolically instead: `:cwd = write` plus read-only `:project_roots` metadata subpaths. This PR creates that cwd-free base so the rest of the stack can stop threading cwd through profile construction. Callers that actually need a concrete runtime filesystem policy for a specific cwd still have an explicitly named cwd-bound conversion.

## What Changed

- `PermissionProfile::from_legacy_sandbox_policy` now takes only `&SandboxPolicy`.
- `FileSystemSandboxPolicy::from_legacy_sandbox_policy` is now the symbolic, cwd-free projection for profiles.
- The old concrete projection is retained as `FileSystemSandboxPolicy::from_legacy_sandbox_policy_for_cwd` for runtime/boundary code that must materialize legacy cwd behavior.
- Workspace-write profiles preserve `CurrentWorkingDirectory` and `ProjectRoots` special entries instead of materializing cwd into absolute paths.

## Verification

- `cargo check -p codex-protocol -p codex-core -p codex-app-server-protocol -p codex-app-server -p codex-exec -p codex-exec-server -p codex-tui -p codex-sandboxing -p codex-linux-sandbox -p codex-analytics --tests`
- `just fix -p codex-protocol -p codex-core -p codex-app-server-protocol -p codex-app-server -p codex-exec -p codex-exec-server -p codex-tui -p codex-sandboxing -p codex-linux-sandbox -p codex-analytics`

---

[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/19414).
* openai#19395
* openai#19394
* openai#19393
* openai#19392
* openai#19391
* __->__ openai#19414
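The symbolic, cwd-free projection from the PR above can be sketched with placeholder types. The enum and field names here are illustrative, not the real `codex-protocol` definitions; the point is that the workspace-write shape carries symbolic entries instead of absolute paths:

```rust
use std::path::PathBuf;

// Sketch (hypothetical types): a legacy workspace-write policy projects to
// `:cwd = write` plus read-only `:project_roots`, deferring path resolution
// to explicitly cwd-bound boundary code instead of baking in absolute paths.

#[derive(Debug, PartialEq)]
enum PathEntry {
    CurrentWorkingDirectory, // resolved later by cwd-bound conversion
    ProjectRoots,            // resolved later from project metadata
    Absolute(PathBuf),       // only produced by the cwd-bound projection
}

#[derive(Debug, PartialEq)]
struct Rule {
    entry: PathEntry,
    write: bool,
}

/// Symbolic, cwd-free projection of a legacy workspace-write policy.
fn from_legacy_workspace_write() -> Vec<Rule> {
    vec![
        Rule { entry: PathEntry::CurrentWorkingDirectory, write: true },
        Rule { entry: PathEntry::ProjectRoots, write: false },
    ]
}

fn main() {
    let rules = from_legacy_workspace_write();
    assert_eq!(rules[0], Rule { entry: PathEntry::CurrentWorkingDirectory, write: true });
    assert!(!rules[1].write); // project roots stay read-only
    // No Absolute(...) entries: nothing here depends on an ambient cwd.
    assert!(!rules.iter().any(|r| matches!(r.entry, PathEntry::Absolute(_))));
    println!("ok");
}
```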
- Route cold `thread/resume` and `thread/fork` source loading through `ThreadStore` reads instead of direct rollout path operations.
- Keep serving lookups that explicitly specify a rollout path through the local thread store methods, but return an invalid-request error for remote `ThreadStore` configurations.
- Add additional unit tests for code-path coverage.
## Why

We already prefer shipping the MUSL Linux builds, and the in-repo release consumers resolve Linux release assets through the MUSL targets. Keeping the GNU release jobs around adds release time and extra assets without serving the paths we actually publish and consume.

This is also easier to reason about as a standalone change: future work can point back to this PR as the intentional decision to stop publishing `x86_64-unknown-linux-gnu` and `aarch64-unknown-linux-gnu` release artifacts.

## What changed

- Removed the `x86_64-unknown-linux-gnu` and `aarch64-unknown-linux-gnu` entries from the `build` matrix in `.github/workflows/rust-release.yml`.
- Added a short comment in that matrix documenting that Linux release artifacts intentionally ship MUSL-linked binaries.

## Verification

- Reviewed `.github/workflows/rust-release.yml` to confirm that the release workflow now only builds Linux release artifacts for `x86_64-unknown-linux-musl` and `aarch64-unknown-linux-musl`.
## Summary

- Mirrors openai/skills#374 in the Codex bundled OpenAI Docs skill.
- Adds `gpt-image-2` as the best image generation/edit model.
- Updates `gpt-image-1.5` to the less expensive image generation/edit quality tier.

## Test plan

- `git diff --check`
## Why

The VS Code extension and desktop app do not need the full TUI binary, and `codex-app-server` is materially smaller than standalone `codex`. We still want to publish it as an official release artifact, but building it by tacking another `--bin` onto the existing release `cargo build` invocations would lengthen those jobs. This change keeps `codex-app-server` in its own release bundle so it can build in parallel with the existing `codex` and helper bundles.

## What changed

- Made `.github/workflows/rust-release.yml` bundle-aware so each macOS and Linux MUSL target now builds either the existing `primary` bundle (`codex` and `codex-responses-api-proxy`) or a standalone `app-server` bundle (`codex-app-server`).
- Preserved the historical artifact names for the primary macOS/Linux bundles so `scripts/stage_npm_packages.py` and `codex-cli/scripts/install_native_deps.py` continue to find release assets under the paths they already expect, while giving the new app-server artifacts distinct names.
- Added a matching `app-server` bundle to `.github/workflows/rust-release-windows.yml`, and updated the final Windows packaging job to download, sign, stage, and archive `codex-app-server.exe` alongside the existing release binaries.
- Generalized the shared signing actions in `.github/actions/linux-code-sign/action.yml`, `.github/actions/macos-code-sign/action.yml`, and `.github/actions/windows-code-sign/action.yml` so each workflow row declares its binaries once and reuses that list for build, signing, and staging.
- Added `codex-app-server` to `.github/dotslash-config.json` so releases also publish a generated DotSlash manifest for the standalone app-server binary.
- Kept the macOS DMG focused on the existing `primary` bundle; `codex-app-server` ships as the regular standalone archives and DotSlash manifest.

## Verification

- Parsed the modified workflow and action YAML files locally with `python3` + `yaml.safe_load(...)`.
- Parsed `.github/dotslash-config.json` locally with `python3` + `json.loads(...)`.
- Reviewed the resulting release matrices, artifact names, and packaging paths to confirm that `codex-app-server` is built separately on macOS, Linux MUSL, and Windows, while the existing npm staging and Windows `codex` zip bundling contracts remain intact.
Termux rust-v0.126.0-alpha.1
…ckpoint/wallentx_termux-target_from_release_0.126.0_d763bfd051ab

# Conflicts:
#	codex-rs/Cargo.toml
Termux release checkpoint
Source: release/0.126.0 @ d763bfd051ab7888ad54b28763df572ced6c5f92
Target: wallentx/termux-target

This PR carries release-train conflict fixes and follow-up changes back into the reusable Termux patch branch.
Merge conflicts
GitHub Actions could not create the checkpoint merge commit automatically, so this PR was created from the source branch state for manual conflict resolution.
Conflicted paths from the failed merge attempt:
Release-only workflow files and metadata under `.github` were restored to the destination branch versions before opening this PR.