
fix: fix runtime detection priority, resolve binaries from APM runtimes dir, add llm support #608

Open
edenfunf wants to merge 3 commits into microsoft:main from edenfunf:fix/runtime-detection-and-llm-support

Conversation

@edenfunf
Contributor

@edenfunf edenfunf commented Apr 7, 2026

Summary

Fixes apm run start selecting the wrong runtime or failing to execute one on Windows, and adds llm as a supported runtime for users who cannot use codex with GitHub Models.

Closes #605.

Background — codex v0.116+ and GitHub Models

codex removed wire_api = "chat" (Chat Completions) in v0.116 and now only supports wire_api = "responses" (OpenAI Responses API). GitHub Models exposes only the Chat Completions endpoint — the /responses path returns 404. This is a fundamental upstream incompatibility; there is no config workaround.

apm runtime setup codex always installs the latest release, which is now in the incompatible range. The setup scripts are updated to document this so users are not silently directed to a broken binary.

Changes

src/apm_cli/core/script_runner.py

Runtime detection (_detect_installed_runtime)

  • Check ~/.apm/runtimes/ before PATH. A system-level copilot shim (e.g. from gh extension install github/gh-copilot) was winning over an APM-managed binary because shutil.which() searched PATH first.
  • Exclude APM-managed codex from auto-detection. Any codex installed via apm runtime setup will be v0.116+ and will not work with GitHub Models. PATH codex is still tried as a last resort (covers older or differently-configured binaries).
  • Prefer llm over other PATH runtimes. llm uses Chat Completions and is fully compatible with GitHub Models via the GITHUB_TOKEN env var.

Executable resolution (_execute_runtime_command)

  • On Windows, resolve the binary path from ~/.apm/runtimes/ before falling back to shutil.which(). Without this, the correct runtime name was detected but the executable could not be found if ~/.apm/runtimes was not in the session PATH.

llm support (_generate_runtime_command)

  • Add llm -m github/gpt-4o <prompt_file> as the generated command for the llm runtime.
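
Assuming _generate_runtime_command maps runtime names to shell commands, the new branch is essentially (sketch; other runtimes omitted):

```python
def generate_runtime_command(runtime: str, prompt_file: str) -> str:
    if runtime == "llm":
        # llm CLI uses Chat Completions and authenticates against
        # GitHub Models via the GITHUB_TOKEN environment variable.
        return f"llm -m github/gpt-4o {prompt_file}"
    raise ValueError(f"Unsupported runtime: {runtime}")
```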

Additional cleanup

  • Remove anchored ^ regex in _get_runtime_name and _transform_command; runtime keywords can appear mid-string when an absolute path is prepended on Windows.
  • Remove the blanket symlink rejection in _discover_prompt_file and _resolve_prompt_file; it was too broad and blocked legitimate use cases (e.g. symlinked prompt packages).
  • Improve the "prompt not found" error message with actionable steps.

scripts/runtime/setup-codex.ps1 / setup-codex.sh

  • Document the codex/GitHub Models incompatibility in a comment so operators know why GitHub Models is not listed as a working provider for current codex builds.

Test Plan

  • uv run pytest tests/unit/test_script_runner.py -x -v — all tests pass
  • apm runtime setup codex followed by apm run start — selects llm (or copilot) instead of the broken codex binary
  • With only llm available in PATH and no APM-managed runtimes: apm run start executes llm -m github/gpt-4o
  • With a system PATH copilot stub and a working APM-managed copilot binary: apm run start — APM-managed binary wins
  • Absolute path to runtime binary works on Windows when ~/.apm/runtimes is not in PATH

Contributor

Copilot AI left a comment


Pull request overview

This PR updates APM’s workflow script runner to improve runtime selection/execution (especially on Windows), adds llm as a supported runtime, and adjusts the Codex setup scripts to address GitHub Models compatibility concerns raised in #605.

Changes:

  • Update runtime detection and Windows executable resolution to prefer APM-managed runtimes and add llm support.
  • Relax prompt discovery/resolution rules (including allowing symlinks) and improve “prompt not found” messaging.
  • Change Codex setup default versioning and document (intended) GitHub Models compatibility constraints.
Show a summary per file
File Description
src/apm_cli/core/script_runner.py Runtime detection/command generation tweaks, Windows exe resolution, prompt discovery changes, and improved error messaging.
scripts/runtime/setup-codex.sh Changes Codex default version selection and adds compatibility notes.
scripts/runtime/setup-codex.ps1 Same as above for Windows PowerShell installer.

Copilot's findings

Comments suppressed due to low confidence (3)

src/apm_cli/core/script_runner.py:867

  • The runtime detection docstring and logic contain non-ASCII em dashes, which violates the repo's printable-ASCII requirement. Replace these with ASCII-only punctuation (e.g., "--") in docstrings/comments too, since they ship in source files.
        Checks APM-managed runtimes (~/.apm/runtimes/) first, then PATH.
        This ensures explicitly APM-installed binaries take priority over
        system-level stubs (e.g. GitHub CLI copilot extensions).

        Priority:
          1. APM runtimes dir: copilot  (codex excluded — v0.116+ is
             incompatible with GitHub Models' Chat Completions API)
          2. PATH: llm > copilot > codex  (llm uses Chat Completions, works
             with GitHub Models even when codex dropped that API)

src/apm_cli/core/script_runner.py:898

  • Non-ASCII em dash in this comment violates the repo's printable-ASCII-only rule for source files. Replace it with ASCII punctuation (e.g., "--").
        # 2. Fall back to PATH — prefer llm (uses Chat Completions, works with
        #    GitHub Models even when codex has dropped that API format)
        if shutil.which("llm"):

src/apm_cli/core/script_runner.py:931

  • Non-ASCII em dash in this comment violates the repo's printable-ASCII-only rule for source files. Replace it with ASCII punctuation (e.g., "--").
        elif runtime == "llm":
            # llm CLI — uses Chat Completions, compatible with GitHub Models
            return f"llm -m github/gpt-4o {prompt_file}"
  • Files reviewed: 3/3 changed files
  • Comments generated: 12

Collaborator

@sergio-sisternes-epam sergio-sisternes-epam left a comment


Thanks for the thorough work here, @edenfunf! We had a parallel fix for #605 in #651, but your PR covers more ground so we're closing ours in favor of this one.

A few things to address:

Must-fix

  1. Missing .cmd/.bat in Windows resolution — The runtimes dir check only looks for {name} and {name}.exe, but setup-llm.ps1 installs llm.cmd. Without including .cmd (and .bat), apm run will fail to find APM-managed runtimes on Windows when ~/.apm/runtimes isn't in PATH.

  2. Codex pin is stale — The scripts reference rust-v0.117.0, but we've already merged rust-v0.118.0 with wire_api="responses" in #663. Please rebase onto main to pick that up.

  3. Symlink removal needs containment — Removing the blanket symlink rejection is fine, but please add ensure_path_within() checks (from apm_cli.utils.path_security) after resolving symlink targets. Without that, a malicious dependency could symlink a prompt file to sensitive paths.
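
The suggested containment check boils down to the following. The real helper is ensure_path_within in apm_cli.utils.path_security; this is a hand-rolled equivalent for illustration only:

```python
from pathlib import Path

def ensure_path_within(path: Path, root: Path) -> Path:
    """Resolve symlinks, then reject targets escaping the project
    root. Sketch of the containment check, not the repo helper."""
    resolved = path.resolve()
    root = root.resolve()
    try:
        resolved.relative_to(root)
    except ValueError:
        raise PermissionError(f"{path} resolves outside {root}")
    return resolved
```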

Suggestion

  1. Scope is too wide — This PR mixes a bug fix (#605), a new feature (llm runtime), and a security policy change (symlink handling) in one shot. That makes review harder and increases merge risk. For future contributions, we'd recommend one concern per PR — it's easier to review, faster to merge, and safer to revert if needed. Happy to review this as-is, but please keep this in mind going forward.

fix: fix runtime detection priority, resolve binaries from APM runtimes dir, add llm support

Three related issues caused apm run start to fail or pick the wrong runtime:

1. _detect_installed_runtime checked shutil.which() for all candidates,
   so a broken copilot stub in the system PATH (e.g. from gh extension)
   could be selected over a working APM-managed binary. Changed priority
   to check ~/.apm/runtimes/ first; only fall back to PATH when nothing
   is found there.

2. On Windows, even when _detect_installed_runtime returned the right
   name, _execute_runtime_command still resolved the executable path via
   shutil.which(). If ~/.apm/runtimes is not in the session PATH the
   binary is not found. Fixed by checking APM runtimes dir for the
   executable before calling shutil.which().

3. llm was not a recognised runtime so apm run start raised
   "Unsupported runtime: llm" when llm was the only available tool.
   Added llm -m github/gpt-4o as the generated command.

   codex v0.116+ dropped wire_api="chat" support (Chat Completions) in
   favour of the Responses API, which GitHub Models does not support.
   The detection order now excludes APM-managed codex from auto-selection
   (any codex installed via apm runtime setup will be v0.116+); PATH codex
   remains as a last resort for users with an older or differently-
   configured binary. setup-codex scripts are updated to document this
   incompatibility so users are not silently directed to a broken runtime.

Additional cleanup in script_runner.py:
- Remove anchored regex (^) in _get_runtime_name and _transform_command;
  runtime keywords can appear mid-command when paths are prepended
- Remove symlink rejection in _discover_prompt_file and _resolve_prompt_file;
  the blanket block was too broad and broke legitimate use cases
- Improve the "prompt not found" error message with actionable next steps
- Fix 1: include .cmd and .bat in Windows APM runtimes dir lookup so
  llm.cmd (installed by setup-llm.ps1) is found when ~/.apm/runtimes
  is not in PATH
- Fix 2: rebase already picked up rust-v0.118.0 pin from origin/main
  (microsoft#663); resolve the setup-codex conflicts in favour of main
- Fix 3: add ensure_path_within() checks in _discover_prompt_file and
  _resolve_prompt_file (PromptCompiler) to catch symlinks that resolve
  outside the project directory; also filter unsafe dependency matches
  from rglob results rather than silently including them
- Fix runtime mis-detection: reorder _transform_runtime_command to
  check copilot before codex, preventing "copilot --model codex ..."
  from being routed to the codex path; replace substring match in
  _detect_runtime with Path.stem comparison so hyphenated tool names
  (run-codex-tool) are not mistakenly identified as a known runtime
@edenfunf edenfunf force-pushed the fix/runtime-detection-and-llm-support branch from f92015a to e49bce0 Compare April 10, 2026 17:26
@edenfunf
Copy link
Copy Markdown
Contributor Author

Thanks for the thorough review, @sergio-sisternes-epam!

I've addressed all three must-fix items:

.cmd/.bat in Windows resolution — added .cmd and .bat to the APM runtimes dir candidate list in _execute_runtime_command, so llm.cmd installed by setup-llm.ps1 is found correctly when ~/.apm/runtimes isn't in PATH.

Stale codex pin — rebased onto main to pick up the rust-v0.118.0 pin from #663. Resolved the conflict in favour of main's version and comment.

Symlink containment — added ensure_path_within() checks in both _discover_prompt_file and _resolve_prompt_file (PromptCompiler). Local paths are validated before returning; dependency rglob results are filtered — unsafe paths are silently skipped rather than included.
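
The dependency-scan filtering described above amounts to something like this (a sketch; the glob pattern and function name are illustrative, not the actual code):

```python
from pathlib import Path

def safe_rglob_prompts(dep_root: Path, pattern: str = "*.prompt.md"):
    """Yield prompt files under dep_root, skipping matches that are
    symlinks resolving outside the dependency root, so one bad
    dependency cannot redirect the scan to sensitive paths."""
    root = dep_root.resolve()
    for match in dep_root.rglob(pattern):
        resolved = match.resolve()
        try:
            resolved.relative_to(root)
        except ValueError:
            continue  # silently skip paths escaping the dependency dir
        yield match
```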

Also replaced the non-ASCII em dashes (—) in comments and docstrings with -- to comply with the printable-ASCII requirement.

On the scope feedback — you're right, this PR mixes a bug fix, a new feature, and a security policy change. I should have split them. I'll keep that in mind for future contributions and aim for one concern per PR going forward. Sorry about that!



Development

Successfully merging this pull request may close these issues.

bug: apm run start picks wrong runtime — system PATH stub takes priority over APM-managed binary, codex v0.116+ incompatible with GitHub Models

3 participants