Background
The current `tycoon ai` stack is hard-wired to LM Studio's local server at `localhost:1234`. LM Studio is a heavy desktop app: fine for development on a high-end machine, but not a realistic dependency for general-purpose users.
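For concreteness, here is a minimal sketch of what that coupling looks like, assuming the stack talks to LM Studio's OpenAI-compatible `/v1/chat/completions` endpoint. The function name, model field, and prompt handling are illustrative, not tycoon's actual code:

```python
# Sketch of the current coupling (illustrative, not tycoon's real code).
# LM Studio serves an OpenAI-compatible API on port 1234 by default.
import json
import urllib.request

LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"  # hard-wired endpoint

def complete(prompt: str) -> str:
    payload = json.dumps({
        "model": "local-model",  # LM Studio answers with whatever model is loaded
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }).encode()
    req = urllib.request.Request(
        LM_STUDIO_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Every call path that reaches this endpoint assumes LM Studio is installed, running, and has a model loaded.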
Research findings
OpenAI Codex CLI — cloud-only, conflicts with local-first design. Not worth pursuing.
OpenAI gpt-oss-20b — Apache 2.0 open-weight model that runs in about 16 GB of RAM, available on Ollama. Viable locally, but adds Ollama as a dependency.
Ollama — lighter than LM Studio, broader hardware support, CLI-first with no GUI. Still a separate install, though; a sketch of what the swap would look like follows this list.
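If these findings hold, the backend choice may reduce to a base-URL setting: LM Studio and Ollama both expose an OpenAI-compatible HTTP API on their default ports (1234 and 11434). A minimal sketch, assuming the `openai` Python client package is available; `gpt-oss:20b` is Ollama's tag for the 20B model, and the API key is a placeholder since local servers ignore it:

```python
# Sketch: if both backends speak the OpenAI-compatible API, switching
# between them is a base-URL change, not an architectural one.
from openai import OpenAI

BACKENDS = {
    "lmstudio": "http://localhost:1234/v1",   # LM Studio default port
    "ollama":   "http://localhost:11434/v1",  # Ollama default port
}

def make_client(backend: str) -> OpenAI:
    # Local servers don't check the key, but the client requires one.
    return OpenAI(base_url=BACKENDS[backend], api_key="local")

client = make_client("ollama")
reply = client.chat.completions.create(
    model="gpt-oss:20b",
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```

Under this shape, LM Studio becomes one default among interchangeable backends rather than a hard dependency.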
Goal
Find the single shortest path to running a capable model for the specific tasks tycoon needs.
These are focused, single-turn tasks with small inputs. They do not need a general-purpose chat model or a long context window.
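To make that task shape concrete, a single request could look like the following. The prompt, model tag, and limits are assumptions for illustration, not settings from tycoon:

```python
# Illustrative single-turn request: no chat history, tight output budget,
# deterministic sampling. Prompt, model tag, and numbers are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="local")
change_text = "fix: handle empty config file in tycoon ai init"  # stand-in input

reply = client.chat.completions.create(
    model="gpt-oss:20b",
    messages=[{
        "role": "user",
        "content": "Summarize this change in one line:\n" + change_text,
    }],
    max_tokens=64,    # single-purpose output, no room to ramble
    temperature=0.0,  # deterministic, tool-style behavior
)
print(reply.choices[0].message.content)
```

Requests of this shape keep both the prompt and the completion small, which is what makes a modest local model sufficient.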
Design questions to answer
Out of scope