
GHOST — Gemma Hardware Optimization & System Tuner

An AI agent that monitors your machine in real time, reasons about what's slowing it down using Gemma, and actually fixes it — with a full rollback safety net.


What makes GHOST different

  • Acts, doesn't just advise — Gemma reasons over live telemetry and executes fixes: suspending rogue processes, adjusting priorities, flushing caches. Not a report. An agent.
  • Sense → Think → Act → Verify → Rollback loop — every action is measured. If metrics don't improve in 60s, GHOST rolls back automatically (the cycle is sketched just after this list).
  • Predictive alerts — after 3+ days of data, GHOST learns your machine's patterns and warns you before a slowdown happens.
  • Machine persona — builds a behavioral fingerprint over 7 days. Knows your peak hours, worst offenders, battery prognosis.
  • Weekly health letter — Gemma writes a plain-English summary of your machine's week, every Monday.
  • 100% local + private — your process list, telemetry, and usage patterns never leave your machine.
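
A minimal sketch of that Sense → Think → Act → Verify → Rollback cycle is below. Every name in it (Snapshot, Plan, readSnapshot, askGemma, and so on) is illustrative rather than the actual ghost-server API; the real loop lives in the agent/ and executor/ packages described under Architecture.

// loop_sketch.go — a self-contained illustration of the control loop.
// All names here are assumptions, not the real ghost-server internals.
package main

import (
    "fmt"
    "time"
)

type Snapshot struct {
    CPUPercent float64 // SENSE: live CPU load
    UsedRAMMB  uint64  // SENSE: live memory use
}

type Plan struct {
    Action string // e.g. "suspend", "lower-priority", "flush-dns"
    Target int32  // PID the action applies to, where relevant
}

// readSnapshot would wrap gopsutil in the real backend; stubbed here.
func readSnapshot() Snapshot { return Snapshot{CPUPercent: 85, UsedRAMMB: 7200} }

// askGemma would send the snapshot to the local Ollama API; stubbed here.
func askGemma(s Snapshot) Plan { return Plan{Action: "lower-priority", Target: 1234} }

// apply executes the fix and returns a closure that undoes it.
func apply(p Plan) (undo func()) {
    fmt.Println("ACT:", p.Action, "on PID", p.Target)
    return func() { fmt.Println("ROLLBACK:", p.Action, "on PID", p.Target) }
}

func improved(before, after Snapshot) bool {
    return after.CPUPercent < before.CPUPercent
}

func main() {
    before := readSnapshot()     // SENSE
    plan := askGemma(before)     // THINK
    undo := apply(plan)          // ACT, remembering how to undo

    time.Sleep(60 * time.Second) // VERIFY window
    if after := readSnapshot(); !improved(before, after) {
        undo()                   // ROLLBACK: metrics did not improve
    }
}

The key idea is that apply returns its own undo closure, so the verification step needs no extra bookkeeping to reverse a fix that didn't help.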

Device / VRAM recommendations

Pick a Gemma model based on your available VRAM (GPU memory) or system RAM:

VRAM       Recommended Model
4–6 GB     gemma4:e2b
8–12 GB    gemma4:e4b
16–20 GB   gemma4:26b
24 GB+     gemma4:31b

Key point: Gemma only runs every 60–90 seconds for analysis — not continuously. In Lite mode, GHOST typically recovers more RAM than Gemma occupies. Net memory gain on most machines.


Stack

  • Backend: Go (gopsutil, modernc/sqlite) — single binary, ~8MB
  • AI: Gemma 4 via Ollama (local, offline, private)
  • Frontend: Electron + React + TypeScript + Recharts
  • IPC: stdin/stdout newline-delimited JSON (no ports, no HTTP overhead); a Go-side sketch follows this list
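
A rough Go-side sketch of that newline-delimited JSON transport is below. The Message shape is an assumption for illustration, not GHOST's real protocol.

// ipc_sketch.go — illustrative NDJSON loop over stdin/stdout.
package main

import (
    "bufio"
    "encoding/json"
    "os"
)

type Message struct {
    Type    string          `json:"type"`    // e.g. "metrics", "action", "log"
    Payload json.RawMessage `json:"payload"` // opaque body, one JSON object per line
}

func main() {
    in := bufio.NewScanner(os.Stdin)
    out := json.NewEncoder(os.Stdout) // Encode appends a trailing newline per message

    for in.Scan() { // one JSON document per line from the Electron main process
        var msg Message
        if err := json.Unmarshal(in.Bytes(), &msg); err != nil {
            continue // skip malformed lines
        }
        // Echo a trivial acknowledgement back on stdout.
        out.Encode(Message{Type: "ack", Payload: msg.Payload})
    }
}

Because every message is a single line of JSON, the Electron side can frame responses with an ordinary readline on the child process's stdout, with no port or HTTP server involved.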

Setup

1. Install prerequisites

Install Go

https://go.dev/dl/

Install Ollama

https://ollama.com

Pull a Gemma model

Choose the model tier that matches your hardware:

# Low VRAM / lightweight
ollama pull gemma4:e2b

# Balanced default
ollama pull gemma4:e4b

# High-end workstation
ollama pull gemma4:26b

# Full flagship model
ollama pull gemma4:31b

If a specific Gemma tier (for example gemma4:26b) is not installed locally, you can either pull it with:

ollama pull <model>

Or run the backend with any locally available model by setting the GEMMA_MODEL environment variable.

PowerShell

$env:GEMMA_MODEL = 'gemma4:e4b'

Bash / macOS

export GEMMA_MODEL=gemma4:e4b

Then start the backend normally.
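
For reference, this is roughly how a backend can honour the override; the defaultModel fallback and helper name are assumptions, not the actual ghost-server code.

// model_env_sketch.go — illustrative GEMMA_MODEL override handling.
package main

import (
    "fmt"
    "os"
)

const defaultModel = "gemma4:e4b" // assumed default tier

// modelName returns the Ollama tag the backend should request.
func modelName() string {
    if m := os.Getenv("GEMMA_MODEL"); m != "" {
        return m // user override, e.g. "gemma4:e2b"
    }
    return defaultModel
}

func main() {
    fmt.Println("using model:", modelName())
}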


2. Build the Go backend

cd ghost-server

go mod tidy

go build -o ghost-server ./cmd/ghost

# Binary appears as:
# ghost-server/ghost-server

3. Run the Electron frontend

cd ghost-client

npm install

# Copy the compiled backend binary next to the frontend
cp ../ghost-server/ghost-server ./ghost-server

npm run electron:dev

4. Production build

cd ghost-client

npm run build

# Distributable appears in:
# ghost-client/dist/

5. Single command to set up and run everything

# For Windows PowerShell:
./start.bat

# For macOS/Linux Bash:
./start.sh

Architecture

Go Backend (single binary)
├── sensor/     — CPU, RAM, temp, process, battery telemetry (every 5s)
├── agent/      — analyze loop (60s), prediction loop (5min), persona loop (6h)
├── gemma/      — Ollama local API client
├── executor/   — safe actions + undo stack + 60s verification
└── storage/    — SQLite: snapshots, actions, persona, weekly letters

Electron Frontend
├── main.ts     — spawns Go binary + IPC bridge
├── preload.ts  — secure context bridge
└── renderer/
    ├── Dashboard    — live metrics, charts, process table
    ├── Terminal     — streaming SENSE / THINK / ACT logs
    ├── FixHistory   — before/after action deltas
    ├── PersonaPage  — machine behavioral fingerprint
    └── WeeklyLetter — Gemma-written health reports
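
To make the sensor/ layer concrete, here is a minimal standalone snapshot using the gopsutil packages named in the stack above. The real sensor loop also collects temperature, process, and battery data every 5 seconds and persists it to SQLite, all of which this sketch omits.

// sensor_sketch.go — illustrative telemetry snapshot via gopsutil.
package main

import (
    "fmt"
    "time"

    "github.com/shirou/gopsutil/v3/cpu"
    "github.com/shirou/gopsutil/v3/mem"
)

func main() {
    // Average CPU utilisation over one second across all cores.
    cpuPct, err := cpu.Percent(time.Second, false)
    if err != nil {
        panic(err)
    }

    vm, err := mem.VirtualMemory()
    if err != nil {
        panic(err)
    }

    fmt.Printf("cpu=%.1f%% ram=%.1f%% (%.0f MB used)\n",
        cpuPct[0], vm.UsedPercent, float64(vm.Used)/1024/1024)
}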

Safety system

Risk level   Behavior
Safe         Auto-execute (suspend background process, lower priority, flush DNS cache)
Medium       Approval required via toast notification
High         Explained only — never auto-runs

Every action stores an undo state.

If metrics don't improve within 60 seconds → automatic rollback.
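
A compact sketch of how this gating plus undo bookkeeping can be modelled is below. The three risk levels mirror the table above; the Executor type and its methods are assumptions for illustration, not the real executor/ package.

// safety_sketch.go — illustrative risk gating and undo stack.
package main

import "fmt"

type RiskLevel int

const (
    Safe   RiskLevel = iota // auto-execute
    Medium                  // needs user approval (toast)
    High                    // explained only, never auto-run
)

type UndoEntry struct {
    Description string
    Undo        func()
}

type Executor struct{ undoStack []UndoEntry }

// Execute runs an action according to its risk level and records an undo entry.
func (e *Executor) Execute(desc string, risk RiskLevel, run func() func(), approved bool) {
    switch {
    case risk == High:
        fmt.Println("EXPLAIN ONLY:", desc)
        return
    case risk == Medium && !approved:
        fmt.Println("AWAITING APPROVAL:", desc)
        return
    }
    undo := run()
    e.undoStack = append(e.undoStack, UndoEntry{Description: desc, Undo: undo})
}

// Rollback pops and runs the most recent undo entry.
func (e *Executor) Rollback() {
    if n := len(e.undoStack); n > 0 {
        entry := e.undoStack[n-1]
        e.undoStack = e.undoStack[:n-1]
        entry.Undo()
    }
}

func main() {
    ex := &Executor{}
    ex.Execute("lower priority of PID 1234", Safe, func() func() {
        fmt.Println("ACT: renice 1234")
        return func() { fmt.Println("UNDO: restore priority of 1234") }
    }, false)
    ex.Rollback() // in GHOST, triggered when metrics fail to improve within 60s
}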


Why Gemma?

  • 31B dense reasoning — root-cause analysis requires correlating CPU, RAM, thermal, and process telemetry across time. That's multi-step reasoning, not keyword matching.
  • Long-context support — feed long telemetry windows into a single prompt with no chunking or RAG pipeline complexity (a minimal client call is sketched at the end of this section).
  • Local-only privacy — your process list and system telemetry never leave your machine.

Cloud-based telemetry analysis is a privacy risk. For this category of software, local inference isn't a bonus feature — it's the correct architecture.
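
For context on the gemma/ client, here is a bare-bones, non-streaming call to Ollama's local /api/generate endpoint. The request and response fields are Ollama's documented ones; the prompt text and the fallback model tag are placeholders, not GHOST's actual prompt format.

// gemma_client_sketch.go — minimal non-streaming call to the local Ollama API.
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
)

type generateRequest struct {
    Model  string `json:"model"`
    Prompt string `json:"prompt"`
    Stream bool   `json:"stream"`
}

type generateResponse struct {
    Response string `json:"response"`
}

func main() {
    model := os.Getenv("GEMMA_MODEL")
    if model == "" {
        model = "gemma4:e4b" // assumed default tier
    }

    body, _ := json.Marshal(generateRequest{
        Model:  model,
        Prompt: "Here is the recent telemetry window. What is slowing this machine down?",
        Stream: false, // one complete JSON response instead of a stream
    })

    resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var out generateResponse
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        panic(err)
    }
    fmt.Println(out.Response)
}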


Official Gemma 4 Ollama Tags

Available Ollama Gemma 4 models:

  • gemma4:e2b
  • gemma4:e4b
  • gemma4:26b
  • gemma4:31b

Official Ollama model page:

https://ollama.com/library/gemma4


Submission

Built for the Gemma 4 Challenge on dev.to:

https://dev.to/challenges/google-gemma-2026-05-06
