
Conversation

@ThomasK33

Fixes missing MUX_MODEL_STRING / MUX_THINKING_LEVEL in explicitly backgrounded bash tool calls (run_in_background: true) by passing muxEnv through to BackgroundProcessManager.spawn(); secrets still take precedence.

Adds a regression test that starts a background bash process and asserts it can read both env vars.


📋 Implementation Plan

🤖 fix: ensure MUX model + thinking env are set for GPT‑5.2 (esp. background bash)

What’s broken

When running commands via the bash tool in background mode (run_in_background: true), the environment inside the spawned process does not include:

  • MUX_MODEL_STRING
  • MUX_THINKING_LEVEL

This shows up most often when using openai:gpt-5.2 because that model tends to trigger longer-running tool usage (builds/tests/dev servers) where backgrounding is common.

Root cause (code-level)

We do compute the MUX env (including model + thinking) correctly:

  • src/node/runtime/initHook.ts:getMuxEnv() supports { modelString, thinkingLevel } and will set (see the sketch below):
    • env.MUX_MODEL_STRING = options.modelString
    • env.MUX_THINKING_LEVEL = options.thinkingLevel
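
For illustration, a minimal sketch of that behavior. The real getMuxEnv() in initHook.ts takes additional arguments (project path, runtime type, workspace name, as seen in Approach B below) and sets other MUX_* variables, so the signature here is reduced to the options that matter for this bug:

```typescript
// Minimal sketch only; not the actual initHook.ts implementation.
type ThinkingLevel = "off" | "low" | "medium" | "high"; // assumed union

interface MuxEnvOptions {
  modelString?: string;
  thinkingLevel?: ThinkingLevel;
}

function getMuxEnvSketch(options: MuxEnvOptions): Record<string, string> {
  const env: Record<string, string> = {};
  if (options.modelString) env.MUX_MODEL_STRING = options.modelString;
  if (options.thinkingLevel) env.MUX_THINKING_LEVEL = options.thinkingLevel;
  return env;
}

// getMuxEnvSketch({ modelString: "openai:gpt-5.2", thinkingLevel: "medium" })
// -> { MUX_MODEL_STRING: "openai:gpt-5.2", MUX_THINKING_LEVEL: "medium" }
```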

We do pass it for AI tool calls:

  • src/node/services/aiService.ts builds tool config with:
    • muxEnv: getMuxEnv(..., { modelString, thinkingLevel: thinkingLevel ?? "off" })

But the bash tool’s explicit background path drops muxEnv:

  • src/node/services/tools/bash.ts (inside createBashTool, if (run_in_background) { ... })
    • currently calls backgroundProcessManager.spawn(..., { env: config.secrets, ... })
    • does not include config.muxEnv, so MUX_* variables (including model/thinking) never reach the background process.

Foreground execution does include it (env: { ...config.muxEnv, ...config.secrets }), which is why this can look “model-specific” depending on tool usage patterns.
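
For concreteness, the merge the background path should adopt is the same one the foreground path already uses. A standalone sketch of the semantics (the config object itself is not reproduced here):

```typescript
// Standalone illustration of the intended env merge for the background spawn path.
// muxEnv is spread first and secrets last, so secrets win on key collisions,
// matching the foreground path's { ...config.muxEnv, ...config.secrets }.
function mergeBackgroundEnv(
  muxEnv: Record<string, string> | undefined,
  secrets: Record<string, string> | undefined
): Record<string, string> {
  return { ...(muxEnv ?? {}), ...(secrets ?? {}) };
}

// Example: a colliding key is taken from secrets, everything else passes through.
const merged = mergeBackgroundEnv(
  { MUX_MODEL_STRING: "openai:gpt-5.2", MUX_THINKING_LEVEL: "medium" },
  { MUX_MODEL_STRING: "from-secrets" }
);
console.log(merged);
// { MUX_MODEL_STRING: "from-secrets", MUX_THINKING_LEVEL: "medium" }
```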

Proposed fixes

Approach A (recommended): Fix background bash env injection (small + correct)

Net LoC (product code): ~10–20

  1. Update src/node/services/tools/bash.ts background path to pass merged env:
    • env: { ...(config.muxEnv ?? {}), ...(config.secrets ?? {}) }
    • preserve current precedence: secrets override muxEnv.
  2. Add/extend unit tests in src/node/services/tools/bash.test.ts (an illustrative sketch of the assertion pattern follows this list):
    • new test under describe("bash tool - background execution") asserting:
      • set config.muxEnv = { MUX_MODEL_STRING: "openai:gpt-5.2", MUX_THINKING_LEVEL: "medium" }
      • run background script: echo "MODEL:$MUX_MODEL_STRING THINKING:$MUX_THINKING_LEVEL"
      • read output via BackgroundProcessManager.getOutput(processId, ..., timeout=2)
      • assert output contains both values.
  3. (Optional) Add a second test that proves config.secrets overrides config.muxEnv for the same key, matching existing foreground behavior.
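
A sketch of the assertion pattern for the test in step 2. It spawns a shell directly instead of going through createBashTool/BackgroundProcessManager (whose exact signatures are not reproduced here), so treat it as the shape of the check rather than a drop-in test:

```typescript
// Illustrative only: the real test should exercise createBashTool with
// run_in_background: true and read output via BackgroundProcessManager.getOutput.
import { describe, expect, test } from "bun:test";
import { spawn } from "node:child_process";

function runDetached(command: string, extraEnv: Record<string, string>): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn("bash", ["-c", command], {
      env: { ...process.env, ...extraEnv },
      detached: true,
    });
    let output = "";
    child.stdout.on("data", (chunk) => (output += chunk.toString()));
    child.on("error", reject);
    child.on("close", () => resolve(output));
  });
}

describe("bash tool - background execution (illustrative)", () => {
  test("background process sees MUX_MODEL_STRING and MUX_THINKING_LEVEL", async () => {
    const muxEnv = { MUX_MODEL_STRING: "openai:gpt-5.2", MUX_THINKING_LEVEL: "medium" };
    const output = await runDetached(
      'echo "MODEL:$MUX_MODEL_STRING THINKING:$MUX_THINKING_LEVEL"',
      muxEnv
    );
    expect(output).toContain("MODEL:openai:gpt-5.2");
    expect(output).toContain("THINKING:medium");
  });
});
```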

Why this is enough:

  • It directly addresses the only place where we intentionally bypass runtime.exec(... env: ...) and manually spawn a background process.

Approach B (stretch): Also thread model/thinking into ORPC workspace.executeBash

Net LoC (product code): ~80–150

If the bug report is actually about UI-driven bash (ORPC) rather than AI tool calls, we currently don’t provide any muxEnv there.

Implementation idea:

  1. Extend src/common/orpc/schemas/api.ts workspace.executeBash.input to accept optional fields (sketched after this list):
    • modelString?: string
    • thinkingLevel?: ThinkingLevel
  2. Update src/node/services/workspaceService.ts:executeBash() to pass:
    • muxEnv: getMuxEnv(metadata.projectPath, getRuntimeType(metadata.runtimeConfig), metadata.name, { modelString, thinkingLevel })
  3. Update all call sites in the browser that use api.workspace.executeBash(...) to pass the current model/thinking if readily available.
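
A rough sketch of the step 1 schema change, assuming the input schemas in api.ts are defined with zod; the surrounding fields and the exact ThinkingLevel values are placeholders:

```typescript
// Illustrative only: assumes zod-based ORPC input schemas. The real
// workspace.executeBash input in src/common/orpc/schemas/api.ts has more fields.
import { z } from "zod";

// Placeholder mirroring the repo's ThinkingLevel type; actual values may differ.
const ThinkingLevelSchema = z.enum(["off", "low", "medium", "high"]);

const executeBashInput = z.object({
  workspaceId: z.string(), // stand-in for the existing required fields
  command: z.string(),
  // New optional fields so callers can thread model/thinking through to getMuxEnv:
  modelString: z.string().optional(),
  thinkingLevel: ThinkingLevelSchema.optional(),
});

type ExecuteBashInput = z.infer<typeof executeBashInput>;
```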

I’m not recommending this by default because it touches a broader surface area and isn’t necessary to fix the bash tool behavior for agents.

Validation plan

  • Run unit tests that cover the bug:
    • bun test src/node/services/tools/bash.test.ts -t "background"
  • Then run repo checks:
    • make typecheck
    • make test

Acceptance criteria

  • In a background bash tool call, echo $MUX_MODEL_STRING and echo $MUX_THINKING_LEVEL return non-empty values matching the active workspace session configuration (e.g., openai:gpt-5.2, medium).
  • Existing foreground bash behavior remains unchanged.

Notes / risks

  • Low risk: change is localized to the background spawn path.
  • Background processes already support env injection via buildWrapperScript (export KEY='value'), so this change just supplies the missing inputs (see the simplified sketch below).
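
For reference, a simplified stand-in for that wrapper-script env injection; the real buildWrapperScript is not reproduced here, and quoting is reduced to a minimal single-quote escape:

```typescript
// Simplified sketch of buildWrapperScript-style env injection: emit
// `export KEY='value'` lines ahead of the user command.
function buildWrapperScriptSketch(command: string, env: Record<string, string>): string {
  const exports = Object.entries(env)
    .map(([key, value]) => `export ${key}='${value.replace(/'/g, "'\\''")}'`)
    .join("\n");
  return `${exports}\n${command}\n`;
}

// With the merged env from Approach A, the wrapper now carries the MUX_* variables:
console.log(
  buildWrapperScriptSketch('echo "$MUX_MODEL_STRING"', {
    MUX_MODEL_STRING: "openai:gpt-5.2",
    MUX_THINKING_LEVEL: "medium",
  })
);
// export MUX_MODEL_STRING='openai:gpt-5.2'
// export MUX_THINKING_LEVEL='medium'
// echo "$MUX_MODEL_STRING"
```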

Generated with mux • Model: unknown • Thinking: unknown

Ensure run_in_background bash processes receive MUX_MODEL_STRING and MUX_THINKING_LEVEL by merging muxEnv with secrets (matching foreground behavior). Adds a regression test covering gpt-5.2.

Signed-off-by: Thomas Kosiewski <tk@coder.com>

---
_Generated with `mux` • Model: `unknown` • Thinking: `unknown`_

Change-Id: I041d1884f7b0caa4f804fd10b81e5d81522fc1e2
@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
Repo admins can enable using credits for code reviews in their settings.

@ThomasK33 ThomasK33 enabled auto-merge December 14, 2025 08:23
@ThomasK33 ThomasK33 added this pull request to the merge queue Dec 14, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Dec 14, 2025
@ThomasK33 ThomasK33 merged commit fe3c94d into main Dec 14, 2025
20 checks passed
@ThomasK33 ThomasK33 deleted the fix-model-config-not-set-as-env branch December 14, 2025 08:37