An AI agent that monitors your machine in real time, reasons about what's slowing it down using Gemma, and actually fixes it — with a full rollback safety net.
- Acts, doesn't just advise — Gemma reasons over live telemetry and executes fixes: suspending rogue processes, adjusting priorities, flushing caches. Not a report. An agent.
- Sense → Think → Act → Verify → Rollback loop — every action is measured. If metrics don't improve within 60 seconds, GHOST rolls back automatically (a sketch follows this list).
- Predictive alerts — after 3+ days of data, GHOST learns your machine's patterns and warns you before a slowdown happens.
- Machine persona — builds a behavioral fingerprint over 7 days. Knows your peak hours, worst offenders, battery prognosis.
- Weekly health letter — Gemma writes a plain-English summary of your machine's week, every Monday.
- 100% local + private — your process list, telemetry, and usage patterns never leave your machine.
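As promised above, here is a minimal Go sketch of one Sense → Think → Act → Verify → Rollback cycle. The `Snapshot` and `Action` types and the improvement check are illustrative assumptions, not GHOST's actual API:

```go
// Hypothetical sketch of one Sense → Think → Act → Verify → Rollback cycle.
package agent

import "time"

type Snapshot struct{ CPUPercent, RAMPercent float64 } // illustrative fields

type Action interface {
	Apply() error
	Undo() error
}

// RunCycle measures telemetry, lets Gemma propose a fix, applies it,
// and rolls back if the metrics have not improved after 60 seconds.
func RunCycle(sense func() Snapshot, think func(Snapshot) Action) error {
	before := sense()       // SENSE: capture live telemetry
	action := think(before) // THINK: Gemma reasons over the snapshot
	if action == nil {
		return nil // nothing to fix
	}
	if err := action.Apply(); err != nil { // ACT
		return err
	}
	time.Sleep(60 * time.Second) // VERIFY window
	after := sense()
	if after.CPUPercent >= before.CPUPercent && after.RAMPercent >= before.RAMPercent {
		return action.Undo() // ROLLBACK: no measurable improvement
	}
	return nil
}
```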
Pick a Gemma model based on your available VRAM (GPU memory) or system RAM:
| VRAM | Recommended Model |
|---|---|
| 4–6 GB | gemma4:e2b |
| 8–12 GB | gemma4:e4b |
| 16–20 GB | gemma4:26b |
| 24 GB+ | gemma4:31b |
Key point: Gemma only runs every 60–90 seconds for analysis — not continuously. In Lite mode, GHOST typically recovers more RAM than Gemma occupies. Net memory gain on most machines.
- Backend: Go (`gopsutil`, `modernc/sqlite`) — single binary, ~8 MB
- AI: Gemma 4 via Ollama (local, offline, private)
- Frontend: Electron + React + TypeScript + Recharts
- IPC: stdin/stdout newline-delimited JSON (no ports, no HTTP overhead)
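As a sketch of that IPC shape, the backend can emit one JSON object per line on stdout and read commands line by line from stdin. The field names here are assumptions, not GHOST's actual schema:

```go
// Sketch of newline-delimited JSON over stdin/stdout. One object per line,
// so no ports and no HTTP framing are needed.
package ipc

import (
	"bufio"
	"encoding/json"
	"os"
)

// Event is an illustrative outbound message; the real schema may differ.
type Event struct {
	Type string  `json:"type"` // e.g. "SENSE", "THINK", "ACT"
	Msg  string  `json:"msg"`
	CPU  float64 `json:"cpu,omitempty"`
}

// Emit writes one event per line to stdout for the Electron side to parse.
func Emit(ev Event) error {
	return json.NewEncoder(os.Stdout).Encode(ev) // Encode appends "\n"
}

// Listen reads one JSON command per line from stdin until EOF.
func Listen(handle func(map[string]any)) error {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var cmd map[string]any
		if err := json.Unmarshal(sc.Bytes(), &cmd); err != nil {
			continue // skip malformed lines
		}
		handle(cmd)
	}
	return sc.Err()
}
```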
Choose the model tier that matches your hardware:
```bash
# Low VRAM / lightweight
ollama pull gemma4:e2b

# Balanced default
ollama pull gemma4:e4b

# High-end workstation
ollama pull gemma4:26b

# Full flagship model
ollama pull gemma4:31b
```

If a specific Gemma tier (for example `gemma4:26b`) is not installed locally, you can either pull it with:

```bash
ollama pull <model>
```

or run the backend with any locally available model by setting the `GEMMA_MODEL` environment variable:

```powershell
# Windows PowerShell
$env:GEMMA_MODEL = 'gemma4:e4b'
```

```bash
# macOS/Linux
export GEMMA_MODEL=gemma4:e4b
```

Then start the backend normally.
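On the backend side, honoring that override can be as simple as the sketch below. The default tier mirrors the balanced `gemma4:e4b` from the table above; the function name is illustrative:

```go
// Sketch of environment-variable model selection.
package gemma

import "os"

const defaultModel = "gemma4:e4b" // balanced default tier

// ModelName returns the Gemma tier to load, honoring GEMMA_MODEL when set.
func ModelName() string {
	if m := os.Getenv("GEMMA_MODEL"); m != "" {
		return m
	}
	return defaultModel
}
```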
```bash
cd ghost-server
go mod tidy
go build -o ghost-server ./cmd/ghost

# Binary appears as:
# ghost-server/ghost-server
```

```bash
cd ghost-client
npm install

# Copy the compiled backend binary next to the frontend
cp ../ghost-server/ghost-server ./ghost-server
npm run electron:dev
```

```bash
cd ghost-client
npm run build

# Distributable appears in:
# ghost-client/dist/
```

```bash
# For Windows PowerShell:
./start.bat

# For macOS/Linux Bash:
./start.sh
```

```
Go Backend (single binary)
├── sensor/ — CPU, RAM, temp, process, battery telemetry (every 5s)
├── agent/ — analyze loop (60s), prediction loop (5min), persona loop (6h)
├── gemma/ — Ollama local API client
├── executor/ — safe actions + undo stack + 60s verification
└── storage/ — SQLite: snapshots, actions, persona, weekly letters
```

```
Electron Frontend
├── main.ts    — spawns Go binary + IPC bridge
├── preload.ts — secure context bridge
└── renderer/
    ├── Dashboard    — live metrics, charts, process table
    ├── Terminal     — streaming SENSE / THINK / ACT logs
    ├── FixHistory   — before/after action deltas
    ├── PersonaPage  — machine behavioral fingerprint
    └── WeeklyLetter — Gemma-written health reports
```
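A sketch of how the three `agent/` loops above might be scheduled on independent tickers; the function names are placeholders, not GHOST's actual API:

```go
// Sketch of the agent loop cadences: analyze every 60s, predict every 5min,
// refresh the persona every 6h.
package agent

import "time"

func Run(analyze, predict, persona func()) {
	analyzeT := time.NewTicker(60 * time.Second) // analyze loop
	predictT := time.NewTicker(5 * time.Minute)  // prediction loop
	personaT := time.NewTicker(6 * time.Hour)    // persona loop
	defer analyzeT.Stop()
	defer predictT.Stop()
	defer personaT.Stop()

	for {
		select {
		case <-analyzeT.C:
			analyze()
		case <-predictT.C:
			predict()
		case <-personaT.C:
			persona()
		}
	}
}
```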
| Risk level | Behavior |
|---|---|
| Safe | Auto-execute (suspend background process, lower priority, flush DNS cache) |
| Medium | Approval required via toast notification |
| High | Explained only — never auto-runs |
Every action stores an undo state.
If metrics don't improve within 60 seconds → automatic rollback.
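A minimal sketch of that policy, assuming illustrative `Action` and `Executor` types: high-risk actions are refused, medium-risk actions require prior approval, and every applied action pushes its undo onto a stack for the 60-second verification to pop on failure:

```go
// Sketch of risk-tier gating plus an undo stack.
package executor

import "errors"

type Risk int

const (
	Safe   Risk = iota // auto-execute
	Medium             // requires user approval (toast)
	High               // explained only, never auto-runs
)

type Action struct {
	Risk  Risk
	Apply func() error
	Undo  func() error
}

type Executor struct{ undoStack []func() error }

// Execute enforces the risk policy before applying an action.
func (e *Executor) Execute(a Action, approved bool) error {
	switch a.Risk {
	case High:
		return errors.New("high-risk: explained only, not executed")
	case Medium:
		if !approved {
			return errors.New("medium-risk: awaiting user approval")
		}
	}
	if err := a.Apply(); err != nil {
		return err
	}
	e.undoStack = append(e.undoStack, a.Undo) // store rollback state
	return nil
}

// Rollback pops and runs the most recent undo, e.g. after a failed verification.
func (e *Executor) Rollback() error {
	if len(e.undoStack) == 0 {
		return errors.New("nothing to roll back")
	}
	undo := e.undoStack[len(e.undoStack)-1]
	e.undoStack = e.undoStack[:len(e.undoStack)-1]
	return undo()
}
```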
- 31B dense reasoning — root-cause analysis requires correlating CPU, RAM, thermal, and process telemetry across time. That's multi-step reasoning, not keyword matching.
- Long-context support — feed long telemetry windows into a single prompt, with no chunking or RAG pipeline complexity (a sketch follows below).
- Local-only privacy — your process list and system telemetry never leave your machine.
Cloud-based telemetry analysis is a privacy risk. For this category of software, local inference isn't a bonus feature — it's the correct architecture.
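For concreteness, here is a sketch of a `gemma/`-style client calling Ollama's local `/api/generate` endpoint with a telemetry window embedded in one prompt. The prompt wording and function name are assumptions:

```go
// Sketch of a single-prompt telemetry analysis against the local Ollama API.
package gemma

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

// Analyze sends the recent telemetry window to the local Gemma model and
// returns its root-cause analysis. Nothing leaves the machine.
func Analyze(model, telemetryJSON string) (string, error) {
	body, err := json.Marshal(generateRequest{
		Model:  model,
		Prompt: "Given this telemetry window, find the root cause of any slowdown:\n" + telemetryJSON,
		Stream: false,
	})
	if err != nil {
		return "", err
	}
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", fmt.Errorf("decode: %w", err)
	}
	return out.Response, nil
}
```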
Available Ollama Gemma 4 models:
- `gemma4:e2b`
- `gemma4:e4b`
- `gemma4:26b`
- `gemma4:31b`
Official Ollama model page:
https://ollama.com/library/gemma4
Built for the Gemma 4 Challenge on dev.to.