fix: add GPU EP to Kokoro/Orpheus/Silero ORT sessions (#964 series PR #2) #991
Merged
Conversation
joelteply added a commit that referenced this pull request on May 2, 2026:
…#1000) Per Joel's "100% free OOTB on MacBook Air on up, canary e2e working from curl, Carl's case" — the existing smoke probe only validates the page renders, not that a chat actually gets an AI reply. That's the true Carl-impact gate: if Carl types "hello" + gets nothing, the install isn't shippable, regardless of whether /health returned 200.

This extends the smoke script with a 4th phase:

4. End-to-end chat:
   - Locate jtag binary (3 search paths)
   - Send a unique probe message to #general
   - Detect #994's "no listener" warning → exit 6 (distinct failure)
   - Poll chat/export for an AI reply (default 90s timeout)
   - On reply: report latency in PASS banner
   - On timeout: list root-cause diagnostic commands per #964/#980 series

Exit codes (extends 0-3 from existing):

- 4 — chat/send command failed (system not ready for chat at all)
- 5 — no AI reply within timeout (the main Carl-blocker shape — silent AI)
- 6 — chat/send accepted but reported NO PERSONAS (#994 warning) — distinct from 5: "no AI" vs "AI didn't respond"

CARL_CHAT_TIMEOUT_SEC env override (default 90s) for slow first-runs where DMR is cold-loading the persona model.

The diagnostic message on exit 5 lists the post-#980 fix points so a future regression has an obvious starting checklist:

- #997's 'local' default routing (cloud fallback dropped)
- DMR running (Docker Desktop 4.62+ check from install.sh)
- GPU EP cfg (#985/#991 fixed broken cfg gates)
- Persona model pulled into DMR
- NEW-A SIGABRT (tracked upstream as ggml-org/llama.cpp#22593)

Now CI's carl-install-smoke gate proves the OOTB chain works end-to-end, not just up to the page render.

Co-authored-by: Test <test@test.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
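The probe itself ships as a shell script; purely to make the exit-code contract above concrete, here is a hedged sketch in Rust. The `jtag` subcommands, the "no listener" string, and the reply-detection heuristic are assumptions for illustration, not the script's actual logic.

```rust
use std::process::Command;
use std::time::{Duration, Instant};

// Hypothetical sketch of the phase-4 contract. The real probe is shell;
// subcommands, output strings, and the reply heuristic are illustrative.
fn phase4_chat_probe(jtag: &str, timeout: Duration) -> i32 {
    let probe = format!("carl-smoke-{}", std::process::id());

    // Exit 4: chat/send itself failed -> system not ready for chat at all.
    let send = match Command::new(jtag)
        .args(["chat/send", "#general", probe.as_str()])
        .output()
    {
        Ok(out) if out.status.success() => out,
        _ => return 4,
    };
    // Exit 6: send accepted, but #994's "no listener" warning fired ->
    // no persona is subscribed ("no AI", as opposed to "AI didn't respond").
    if String::from_utf8_lossy(&send.stderr).contains("no listener") {
        return 6;
    }

    // Poll chat/export for an AI reply until the timeout elapses.
    let start = Instant::now();
    while start.elapsed() < timeout {
        if let Ok(out) = Command::new(jtag).args(["chat/export", "#general"]).output() {
            let log = String::from_utf8_lossy(&out.stdout);
            // Illustrative heuristic: any non-empty line after our probe
            // message counts as the AI reply.
            if let Some(tail) = log.split(probe.as_str()).nth(1) {
                if tail.lines().any(|line| !line.trim().is_empty()) {
                    println!("PASS: AI reply after {:.1}s", start.elapsed().as_secs_f32());
                    return 0;
                }
            }
        }
        std::thread::sleep(Duration::from_secs(2));
    }
    5 // Exit 5: silent AI -- no reply within the timeout.
}
```

A wrapper would read CARL_CHAT_TIMEOUT_SEC (default 90) before calling this and map the return value straight to the process exit code.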
## What
Continues the GPU-fallback-removal series started in #985. PR #1 (#985) fixed the 3 sites with broken `feature = "coreml"` cfg gates (embedding, piper, moonshine). This PR (#2) covers the 4 sites that configured NO Execution Provider at all — they relied on ORT's implicit CPU EP, which is the same silent-fallback shape per Joel's architectural rule (2026-05-01: "lack of GPU integration is forbidden, GPU acceleration in all cases").
Sites updated (all use the centralized helper from #985; a hedged sketch of that helper's likely shape follows the list):

- live/audio/tts/kokoro.rs (Kokoro TTS)
- live/audio/tts/orpheus.rs (Orpheus SNAC decoder)
- live/audio/vad/silero.rs (Silero VAD)
- live/audio/vad/silero_raw.rs (Silero VAD raw)
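For orientation, a minimal sketch of what that helper plausibly looks like, assuming the `ort` 2.x crate; the actual implementation lives in ort_providers.rs, and the feature names here are assumptions:

```rust
use ort::execution_providers::{
    CUDAExecutionProvider, CoreMLExecutionProvider, ExecutionProviderDispatch,
};

/// Hypothetical reconstruction of the #985 helper -- NOT the repo's actual
/// code. It returns only the GPU EPs for the current build; ONNX Runtime
/// appends its CPU EP as the implicit last-resort provider on its own.
pub fn build_ort_gpu_execution_providers() -> Vec<ExecutionProviderDispatch> {
    let mut eps = Vec::new();
    #[cfg(feature = "metal")]
    eps.push(CoreMLExecutionProvider::default().build());
    #[cfg(feature = "cuda")]
    eps.push(CUDAExecutionProvider::default().build());
    eps
}
```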
Each call site is identical in shape: insert one `build_ort_gpu_execution_providers()` call between `Session::builder()` and `with_optimization_level()`. No other behaviour change.
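Concretely, each updated call site ends up with a builder chain roughly like this (a sketch assuming the `ort` 2.x builder API; the model path and optimization level are illustrative, following the Silero site's shape rather than its exact code):

```rust
use ort::session::{builder::GraphOptimizationLevel, Session};

// Sketch of one updated call site. The only change this PR makes is the
// with_execution_providers() line; everything around it is untouched.
let session = Session::builder()?
    .with_execution_providers(build_ort_gpu_execution_providers())?
    .with_optimization_level(GraphOptimizationLevel::Level3)?
    .commit_from_file("models/silero_vad.onnx")?;
```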
## Note on Silero VAD perf
Silero is small (<2 MB) and per-frame; on its own a CPU EP would arguably be faster than CoreML/CUDA due to host↔GPU transfer overhead. But ORT's runtime decides per-op assignment once it sees the model graph + the GPU device profile, so any genuine perf trade-off is ORT's call. Per the architectural rule, we provide the GPU EP — ORT optimises from there.
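For anyone who wants to see ORT's actual per-node decision for Silero rather than guessing, one hedged option (assuming the `ort` 2.x build exposes ONNX Runtime's profiling switch as `with_profiling`, which recent versions appear to) is:

```rust
// Hypothetical verification, not part of this PR: with profiling enabled,
// the emitted JSON trace records, per node, which EP actually executed it
// (e.g. "CoreMLExecutionProvider" vs "CPUExecutionProvider").
let session = Session::builder()?
    .with_execution_providers(build_ort_gpu_execution_providers())?
    .with_profiling("silero_ep_trace")?
    .with_optimization_level(GraphOptimizationLevel::Level3)?
    .commit_from_file("models/silero_vad.onnx")?;
```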
## Test

- `cargo check -p continuum-core --features metal`: PASSES (verified locally on M5; the new EP attachment compiles and integrates with the existing helper from #985)
## Note on PR cycle
The branch is named `mac-pr/...` to disambiguate it from another AI working in the same workspace; this PR was rescued via a SHA-to-ref push after a parallel-git race contaminated three earlier branch names. Per-AI git worktrees are being set up as the permanent fix.
## Out of scope (queued for PR #3 + later in series)

- gpu/memory_manager.rs:799 `detect_cpu_fallback()` — silent "no GPU, use 25% RAM" fallback. Replace with hard-fail.
- persona/allocator.rs:165 — explicit "cpu" GPU-type branch.
- ROCm / DirectML / OpenVINO EP coverage in ort_providers.rs.
🤖 Generated with Claude Code

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>