v0.1 detects SwiftLM presence only. Wire up the actual subprocess launch, OpenAI-compatible HTTP pass-through, and 100B+ MoE model loading. Spec: `features/inference-engines.md`, Engine 2.
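A minimal sketch of the wiring described above: spawn the engine as a child process, then forward OpenAI-compatible requests to it. The binary name `swiftlm-server`, its flags, and the port are assumptions for illustration, not the real SwiftLM CLI; check the spec for the actual interface.

```python
import json
import subprocess
import urllib.request

# Hypothetical engine binary, flags, and port -- placeholders, not SwiftLM's real CLI.
ENGINE_BIN = "swiftlm-server"
ENGINE_PORT = 8081


def build_launch_command(model_path: str) -> list[str]:
    """Build the argv for launching the engine subprocess (flag names are assumed)."""
    return [ENGINE_BIN, "--model", model_path, "--port", str(ENGINE_PORT)]


def launch_engine(model_path: str) -> subprocess.Popen:
    """Spawn the engine as a child process; the caller owns its lifecycle."""
    return subprocess.Popen(build_launch_command(model_path))


def forward_chat_completion(payload: dict) -> dict:
    """Pass an OpenAI-compatible /v1/chat/completions request through to the engine."""
    req = urllib.request.Request(
        f"http://127.0.0.1:{ENGINE_PORT}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Keeping the pass-through a dumb byte-level proxy (rather than re-modeling the API) is what makes existing OpenAI clients work unchanged against the local engine.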