A small, local-first terminal emulator with optional AI assistance — transparent, safe, and easy to maintain.
Part of Open-OS (https://open-os.com/): open, smart tools that make technology more interoperable.
open-os cli is a terminal emulator with integrated AI assistance. It runs your shell normally and adds AI features that are explicit and approval-gated.
- Full terminal emulator (bash/zsh/fish via PTY)
- Two AI interaction modes: inline (Ctrl+Space in the terminal) and panel (overlay UI)
- Streaming responses from local LLMs via Ollama
- Commands are never executed without explicit user approval
- First-run setup wizard for model selection
- Configuration persisted at `~/.config/open-os-cli/config.json`
Download from the GitHub Releases page.
**AppImage (any distro)**

| File | Size |
|---|---|
| open-os-cli-0.3.0.AppImage | ~105 MB |

```bash
chmod +x open-os-cli-0.3.0.AppImage
./open-os-cli-0.3.0.AppImage
```

No installation needed. Works on any Linux distro with FUSE support. To integrate with your system launcher, use AppImageLauncher, or move the AppImage to `~/Applications/` and create a `.desktop` entry (a sketch follows).
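For reference, a minimal `.desktop` entry might look like the following. The file name, `Icon` value, and `Exec` path are illustrative; point `Exec` at wherever you actually keep the AppImage, and save the file to `~/.local/share/applications/`:

```ini
[Desktop Entry]
Type=Application
Name=open-os cli
Comment=Terminal emulator with optional AI assistance
Exec=/home/you/Applications/open-os-cli-0.3.0.AppImage
Icon=open-os-cli
Terminal=false
Categories=System;TerminalEmulator;
```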
**Arch Linux (pacman)**

| File | Size |
|---|---|
| open-os-cli-0.3.0.pacman | ~73 MB |

```bash
sudo pacman -U open-os-cli-0.3.0.pacman
```

After installing, launch with:

```bash
open-os-cli
```

To uninstall:

```bash
sudo pacman -R open-os-cli
```

**Debian/Ubuntu (.deb)**

| File | Size |
|---|---|
| open-os-cli_0.3.0_amd64.deb | ~73 MB |

```bash
sudo dpkg -i open-os-cli_0.3.0_amd64.deb
```

After installing, launch with:

```bash
open-os-cli
```

To uninstall:

```bash
sudo dpkg -r open-os-cli
```

**Requirements**

- Ollama running locally for AI features (`ollama serve`). The terminal works without it — AI features are optional.
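If you don't have a model downloaded yet, pull one before first use (`llama3` here is just an example; any model in your Ollama library works):

```bash
ollama serve        # start the Ollama server, if it isn't running as a service
ollama pull llama3  # download a model for open-os cli to use
```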
| Layer | Technology | Role |
|---|---|---|
| Window | Electron | Desktop app shell, IPC between processes |
| Terminal | xterm.js + node-pty | Terminal rendering (browser) + real PTY (Node.js) |
| AI | Ollama HTTP API | Local LLM, streaming /api/chat |
| Language | TypeScript | Everything |
| Build | esbuild + electron-builder | Bundling + packaging |
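To illustrate the AI layer's protocol, here is a rough sketch of streaming from Ollama's `/api/chat` with the built-in `fetch`. This is not the app's actual `queryOllama()` implementation, just the general shape: Ollama replies with one JSON object per line (NDJSON) until `done` is true.

```typescript
// Sketch only: streams assistant text from a local Ollama server.
async function streamChat(
  model: string,
  prompt: string,
  onChunk: (text: string) => void,
): Promise<void> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      stream: true, // Ollama emits one JSON object per line
    }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffered = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    let nl: number;
    while ((nl = buffered.indexOf("\n")) >= 0) {
      const line = buffered.slice(0, nl).trim();
      buffered = buffered.slice(nl + 1);
      if (!line) continue;
      const chunk = JSON.parse(line);
      if (chunk.message?.content) onChunk(chunk.message.content); // partial text
      if (chunk.done) return; // final chunk: stream complete
    }
  }
}
```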
**Building from source**

- Node.js 20+
- Build tools for native modules (node-pty):
  - Arch Linux: `sudo pacman -S base-devel`
  - Ubuntu/Debian: `sudo apt install build-essential`
- Ollama running locally (`ollama serve`)
```bash
git clone <repo-url> open-os-cli
cd open-os-cli
npm install   # installs deps + rebuilds node-pty for Electron
npm start     # builds TypeScript + launches the app
```

First run:

- The terminal opens with a welcome message
- Press Ctrl+Space — the setup wizard appears if no model is configured
- Select an Ollama model from the list
- Start using AI assistance
Press Ctrl+Space anywhere in the terminal to enter AI mode:
- A colored `open-os >` prompt appears
- Type your question and press Enter
- The AI response streams directly in the terminal
- If the AI suggests commands, approval options appear:
  - [I]nsert — places the command in the terminal prompt
  - [A]ccept & Run — executes the command
  - [C]ancel — discards and returns to normal mode
Press Escape at any time to cancel.
Click the hint bar at the bottom of the window for an overlay panel:
- Type your question in the input field
- The AI response streams in the panel
- Action buttons appear for suggested commands:
  - Insert — writes the command to the terminal
  - Accept & Run — executes it
  - Cancel — closes the panel
Both modes automatically capture the last 30 lines of terminal output and include them with your query, giving the AI context about what you're working on. The system prompt also includes your OS, distro, and shell to get platform-specific suggestions.
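A sketch of how that context capture can work against the xterm.js buffer API (illustrative; the real `renderer.ts` may differ). The package is `@xterm/xterm` in recent versions, plain `xterm` in older ones. On the main-process side, `buildSystemPrompt()` can fold in facts like `os.platform()` and `process.env.SHELL`.

```typescript
import type { Terminal } from "@xterm/xterm";

// Collect the last N lines of terminal output to send with the AI query.
function captureRecentOutput(term: Terminal, lineCount = 30): string {
  const buf = term.buffer.active; // active buffer, including scrollback
  const start = Math.max(0, buf.length - lineCount);
  const lines: string[] = [];
  for (let y = start; y < buf.length; y++) {
    lines.push(buf.getLine(y)?.translateToString(true) ?? "");
  }
  return lines.join("\n"); // attached to the user's question as context
}
```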
Terminal I/O:

```
User types in xterm.js
      │
      ▼
[renderer.ts] ───IPC──► [main.ts] ───node-pty──► bash/zsh
      │                     │
      │                     ▼
      │                shell output
      │                     │
IPC (pty:data) ◄────────────┘
      │
      ▼
xterm.js displays output
```
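A sketch of the PTY bridge in the main process. The `pty:data` channel name comes from the diagram; `pty:input` and the function shape are assumptions, not the actual `createPty()`:

```typescript
import * as pty from "node-pty";
import { ipcMain, BrowserWindow } from "electron";

function createPty(win: BrowserWindow) {
  const shell = process.env.SHELL ?? "/bin/bash";
  const proc = pty.spawn(shell, [], {
    name: "xterm-256color",
    cols: 80,
    rows: 24,
    cwd: process.env.HOME,
    env: process.env as { [key: string]: string },
  });
  // PTY → renderer: forward raw shell output for xterm.js to draw
  proc.onData((data) => win.webContents.send("pty:data", data));
  // renderer → PTY: forward keystrokes typed into xterm.js
  ipcMain.on("pty:input", (_event, data: string) => proc.write(data));
  return proc;
}
```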
The AI query path:

```
Ctrl+Space → inline mode / Click → panel mode

User types question → Enter
      │
      ▼
[renderer.ts] ───IPC──► [main.ts] ───HTTP──► Ollama :11434
      │                     │
IPC (ai:chunk) ◄────────────┘
      │ (streaming)
      ▼
Response displayed (inline or panel)

Approval options appear
            │
  ┌─────────┼─────────┐
  ▼         ▼         ▼
[Insert]  [Run]    [Cancel]
pty.write pty.write  close
(no \r)   (+ \r)
```
The AI layer never talks directly to the PTY. It only produces suggestions. Execution always goes through the approval gate in the renderer.
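In code, the gate is small: inserting writes the raw command, running appends `\r` so the shell executes it. A sketch with assumed names (`window.api.ptyWrite` stands in for the preload bridge; the `\r` distinction matches the diagram above):

```typescript
// Renderer-side approval handler (illustrative, not the actual renderer.ts).
const api = (window as any).api as { ptyWrite: (data: string) => void };

function onApproval(choice: "insert" | "run" | "cancel", command: string) {
  switch (choice) {
    case "insert":
      api.ptyWrite(command);        // place on the prompt, don't execute
      break;
    case "run":
      api.ptyWrite(command + "\r"); // trailing \r makes the shell run it
      break;
    case "cancel":
      break;                        // discard the suggestion
  }
}
```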
```
open-os-cli/
├── package.json          # deps, version, electron-builder config
├── tsconfig.json         # TypeScript config
├── build.mjs             # esbuild — bundles to dist/
├── build/
│   ├── icon.png          # App icon (1024x1024 source)
│   └── icons/            # Generated sizes (16–512px) for hicolor theme
├── .gitignore
├── src/
│   ├── main.ts           # Electron main: window + PTY + Ollama + config
│   ├── preload.ts        # contextBridge: typed IPC API for renderer
│   └── frontend/
│       ├── index.html    # Layout: terminal + panel + hint bar
│       ├── renderer.ts   # xterm.js, inline AI, panel, response routing
│       └── styles.css    # Electric blue theme, animations
└── README.md
```
| Concern | File |
|---|---|
| Window creation, menus, hotkeys | main.ts — createWindow() |
| PTY spawn and pipe | main.ts — createPty() |
| Ollama HTTP streaming | main.ts — queryOllama() |
| Model listing | main.ts — listOllamaModels() |
| Config persistence | main.ts — loadConfig() / saveConfig() |
| System info for prompt | main.ts — buildSystemPrompt() |
| IPC bridge | preload.ts — contextBridge |
| Terminal rendering | renderer.ts — xterm.js setup |
| Inline AI mode | renderer.ts — state machine (idle/input/streaming/approval) |
| Panel AI mode | renderer.ts — overlay panel with setup wizard |
| Response routing | renderer.ts — chunks routed by aiQuerySource |
| Welcome message | renderer.ts — showWelcome() |
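The preload bridge is the only surface the renderer sees. A sketch of what `preload.ts` might expose; `pty:data` and `ai:chunk` come from the diagrams, while `pty:input`, `ai:query`, and the method names are assumptions:

```typescript
import { contextBridge, ipcRenderer } from "electron";

// Expose a narrow, typed API to the renderer instead of raw ipcRenderer.
contextBridge.exposeInMainWorld("api", {
  ptyWrite: (data: string) => ipcRenderer.send("pty:input", data),
  onPtyData: (cb: (data: string) => void) =>
    ipcRenderer.on("pty:data", (_e, data: string) => cb(data)),
  askAi: (prompt: string) => ipcRenderer.send("ai:query", prompt),
  onAiChunk: (cb: (chunk: string) => void) =>
    ipcRenderer.on("ai:chunk", (_e, chunk: string) => cb(chunk)),
});
```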
Settings are stored at `~/.config/open-os-cli/config.json`:

```json
{
  "model": "llama3:latest"
}
```

The model can be changed at any time by clicking the model label in the panel header. The Ollama connection defaults to `localhost:11434`.
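A plausible shape for `loadConfig()` / `saveConfig()` (illustrative; only the path and the `model` field are documented above):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

const CONFIG_PATH = path.join(os.homedir(), ".config", "open-os-cli", "config.json");

interface Config {
  model?: string;
}

function loadConfig(): Config {
  try {
    return JSON.parse(fs.readFileSync(CONFIG_PATH, "utf8"));
  } catch {
    return {}; // missing or malformed file → first-run wizard takes over
  }
}

function saveConfig(cfg: Config): void {
  fs.mkdirSync(path.dirname(CONFIG_PATH), { recursive: true });
  fs.writeFileSync(CONFIG_PATH, JSON.stringify(cfg, null, 2));
}
```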
- No silent execution — AI never runs commands without explicit user approval.
- Transparency — AI output is visually distinct from terminal output.
- Local-first — Uses Ollama for fully local inference. No accounts, no telemetry.
- Small scope — Focused terminal + AI assistance. No plugins, no agents, no automation.
- Keep PRs small and focused
- Prefer simple solutions
- Avoid adding dependencies unless they reduce maintenance
MIT or Apache-2.0, consistent with the Open-OS ecosystem.

