This issue was opened automatically by the Test Playbooks workflow after the test lmstudio-load-qwen3-coder-linux failed on the main branch.
Failure scope
- Playbook: vscode-qwen3-coder
- Test id: lmstudio-load-qwen3-coder-linux
- Device: halo
- Operating system: linux
- Runner labels: self-hosted, Linux, halo
- Runner name: xsj-aimlab-halo-03
- Commit: 244d5d7862d08e6b6178f61e1d93e625b1e0e75e
- Workflow run: https://github.com/amd/playbooks/actions/runs/24850546969
Hardware / OS to use to reproduce
Run the failing test on a machine that matches the runner labels above (OS = linux, device = halo). The repo's self-hosted runners already advertise these labels; if you reproduce locally, use the same OS family and the same AMD device class.
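If you are sanity-checking a local machine first, a quick check along these lines can confirm the OS family and that an AMD GPU is visible. This is only a generic sketch: it does not verify the specific "halo" device class, and rocm-smi assumes a ROCm install that may not be present on every box.
uname -s                                          # should report Linux for this matrix entry
lspci -nn | grep -iE 'vga|display|3d' | grep -i amd   # an AMD GPU/display device should be listed
command -v rocm-smi >/dev/null 2>&1 && rocm-smi || echo "rocm-smi not installed"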
How to dispatch the same test from CI
Re-run only the failing playbook on the same matrix entry by triggering the workflow with the playbook id:
gh workflow run test-playbooks.yml --repo amd/playbooks -f playbook_id=vscode-qwen3-coder
The workflow's matrix narrows down to this (device, platform) combination automatically based on the playbook's tested_platforms.
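If it helps, the dispatched run can be followed from the same CLI; the run id below is a placeholder taken from the gh run list output, not a value from this failure.
gh run list --repo amd/playbooks --workflow test-playbooks.yml --limit 5
gh run watch <run-id> --repo amd/playbooks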
How to run just this test locally
python .github/scripts/run_playbook_tests.py --playbook vscode-qwen3-coder --platform linux --device halo
The runner extracts test blocks from playbooks/*/vscode-qwen3-coder/README.md (the failing block starts around line 174).
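Before running locally, it may be worth confirming that the lms CLI is on PATH and that the model the test loads is already downloaded, otherwise the load step fails for an unrelated reason. The lms subcommands below reflect the LM Studio CLI at the time of writing and describe an assumed local install; they are not part of the playbook itself.
command -v lms                          # the lms CLI must be on PATH
lms ls                                  # downloaded models; qwen3-coder-30b-a3b-instruct should appear
lms get qwen3-coder-30b-a3b-instruct    # download it first if it is missing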
Failing test (verbatim from the README)
lms unload --all || true
lms ps
ID="qwen3coder-32k-${GITHUB_RUN_ID}"
echo "$ID" > /tmp/lmstudio_model_id.txt
lms load qwen3-coder-30b-a3b-instruct --context-length 32768 --gpu max --identifier "$ID"
lms ps # Verify model is really loaded
lms chat "$ID" -p "Reply with exactly: OK"
Result
stderr (last lines)
No models to unload.
No models are currently loaded.
To load a model, run:
lms load <model path>
[LMStudioClient][LLM][ClientPort][WsClientTransport:AuthenticatedWsClientTransport] WebSocket error: Error: WebSocket connection closed
at <anonymous> (/$bunfs/root/lms:103437:33)
Error: WebSocket connection closed
at <anonymous> (/$bunfs/root/lms:103437:33)
Waking up LM Studio service...
No models are currently loaded.
To load a model, run:
lms load <model path>
Error: Model "qwen3coder-32k-24850546969" not found, load with:
lms load qwen3coder-32k-24850546969
stdout (last lines)
Loading qwen3-coder-30b-a3b-instruct ⠙ (spinner frames repeated for the rest of the captured output; ANSI cursor-control sequences stripped; no completion or success message appears before the output ends)
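The stderr above shows the lms CLI losing its WebSocket connection and then printing "Waking up LM Studio service...", which suggests the background service was not up when lms load ran, so the load never registered and the chat step could not find the identifier. A hedged diagnostic sketch, not part of the playbook: check and start the service explicitly before retrying the load step. The lms server subcommands reflect the LM Studio CLI at the time of writing, and the retry identifier is a hypothetical manual name, not the CI-generated one.
lms server status          # is the local LM Studio server running?
lms server start           # start it if the status above reports it is stopped
sleep 10                   # give the service time to finish waking up
lms ps                     # should now answer without WebSocket errors
lms load qwen3-coder-30b-a3b-instruct --context-length 32768 --gpu max --identifier "qwen3coder-32k-manual-retry"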
This issue is opened and deduplicated by .github/scripts/create_failure_issues.py. Close it once the failure is fixed; subsequent failures with the same scope will open a fresh issue.