Building the Garmin Local Archive took longer than expected. Not because of the core logic, but because of everything around it.
Endless chat sessions with Claude. Thousands of decisions, dead ends, rewrites, and moments where something finally clicked. At some point the chat history alone became unmanageable. How do you find a decision you made six weeks ago across several sessions? How do you translate documentation without sending it to yet another cloud service? How do you hand an entire codebase to an LLM without copy-pasting for an hour?
You build tools.
None of these were planned. Each one appeared because something was genuinely in the way, and the workaround turned out to be useful enough to keep. They have no dependency on the GLA itself. They just happened to be born in the same workshop.
A collection of needful things: helpful, useful, and sometimes just plain fun.
Sorts, summarizes, and exports Claude chat histories using a local Ollama model. Useful for reviewing decisions, generating project narratives, or building context for new sessions.
→ See chat_pipeline/README_chat_pipeline.md
Fetches GitHub traffic data and compares a local folder against a GitHub repo. Generates Plotly dashboards and a diff report.
→ See git_analyse/README_git_analyse.md
Generates throw-away GLA dashboards from an interactive config. No Python knowledge required. No Ollama. No changes to GLA itself.
Double-click start.bat, answer a few questions about fields, timeframe, and format – done.
Output lands directly in quick_dash/dashboards/, no GLA GUI needed.
Supports two modes: overview (daily summary values) and intraday (minute-by-minute series).
Output formats: HTML, Excel, JSON. GLA path and data path are saved after the first run.
Constraint: Requires GLA v1.4+ at a configured local path. Generated specialists are not production-ready – exploration only.
→ See quick_dash/README_quick_dash.md
Local translation tool with Ollama as the primary engine and an optional final pass via DeepL, LibreTranslate, MyMemory, or Lara Translate. Two-column browser UI with synchronized scrolling – translate text and export both source and translation as Markdown files. The active Ollama model can be switched on the fly via the status-bar dropdown.
Includes a terminology engine: domain-specific terms are protected before translation and restored afterwards using mindset-matched lookup tables built from MicrosoftTermCollection and IATE. The status bar shows whether the engine is active for the current language pair.
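The protect-then-restore idea can be sketched in a few lines. Everything here is illustrative: the function names, the placeholder format, and the one-entry term table are assumptions, not the tool's actual implementation (which builds its lookups from MicrosoftTermCollection and IATE).

```python
import re

# Hypothetical one-entry de→en table for illustration only.
TERMS = {"Vorlaufzeit": "lead time"}

def protect(text, terms):
    """Swap protected source terms for opaque placeholder tokens."""
    mapping = {}
    for i, (src, dst) in enumerate(terms.items()):
        token = f"[[T{i}]]"
        replaced, n = re.subn(re.escape(src), token, text)
        if n:  # only remember tokens that actually occurred
            text = replaced
            mapping[token] = dst
    return text, mapping

def restore(text, mapping):
    """Put the target-language term where each placeholder sits."""
    for token, dst in mapping.items():
        text = text.replace(token, dst)
    return text
```

Because the placeholder is opaque to the model, the term survives translation untouched and the curated target-language equivalent is dropped in afterwards.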
Constraint: Designed for iterative translation (paragraph/page level). Long texts are split into chunks automatically (Ollama: 6,000 chars, DeepL: 4,900, MyMemory: 480) with live progress display. Not built for bulk-translating entire books in one pass – local LLM context limits and API quotas still apply.
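The chunk limits above come straight from the tool; how the text is split is not documented here, so this sketch assumes a simple greedy pack at paragraph boundaries:

```python
# Per-engine character limits as stated in the README.
LIMITS = {"ollama": 6000, "deepl": 4900, "mymemory": 480}

def chunk(text, engine):
    """Greedily pack paragraphs into chunks under the engine's limit."""
    limit = LIMITS[engine]
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = current + "\n\n" + para if current else para
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = para  # assumes a single paragraph fits the limit
    if current:
        chunks.append(current)
    return chunks
```

With MyMemory's 480-character limit, even a short article becomes several requests, which is why bulk jobs run into API quotas quickly.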
→ See translator/README_translator.md
Small single-purpose scripts that do exactly one thing.
Generates a folder tree of the current directory and writes it to struktur.md.
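The core of such a script fits in a dozen lines. A minimal sketch, assuming an indented Markdown list as the output format (the real script's layout may differ):

```python
from pathlib import Path

def write_tree(root, out_name="struktur.md"):
    """Walk `root` recursively and write an indented tree to struktur.md."""
    lines = [f"# {root.name}/"]
    for path in sorted(root.rglob("*")):
        if out_name in path.parts:  # don't list the output file itself
            continue
        depth = len(path.relative_to(root).parts) - 1
        suffix = "/" if path.is_dir() else ""
        lines.append(f"{'    ' * depth}- {path.name}{suffix}")
    (root / out_name).write_text("\n".join(lines), encoding="utf-8")

if __name__ == "__main__":
    write_tree(Path.cwd())
```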
Merges all files in the current directory into a single Markdown file – useful for feeding a codebase to an LLM.
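The shape of such a merger, sketched under assumptions: non-recursive, one fenced block per file, output name `merged.md` (all hypothetical, not the script's actual behavior):

```python
from pathlib import Path

def merge_files(root, out_name="merged.md"):
    """Concatenate every file in `root` into one Markdown document."""
    fence = "`" * 3  # built dynamically so this snippet nests in Markdown
    parts = []
    for path in sorted(root.iterdir()):
        if path.is_file() and path.name != out_name:
            body = path.read_text(encoding="utf-8", errors="replace")
            parts.append(f"## {path.name}\n\n{fence}\n{body}\n{fence}\n")
    (root / out_name).write_text("\n".join(parts), encoding="utf-8")
```

Headers per file keep the LLM oriented about which code belongs to which path.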
Replaces all values in JSON files with placeholders while keeping the structure intact – useful for sharing Garmin data samples without exposing personal health data.
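The technique is a recursive walk that keeps keys and nesting but blanks every leaf. A minimal sketch (the placeholder strings and the sample record below are made up for illustration):

```python
def mask(value):
    """Replace every leaf with a type placeholder; keys and nesting survive."""
    if isinstance(value, dict):
        return {k: mask(v) for k, v in value.items()}
    if isinstance(value, list):
        return [mask(v) for v in value]
    if isinstance(value, bool):  # check bool before int: True is an int in Python
        return "<bool>"
    if isinstance(value, (int, float)):
        return "<number>"
    if value is None:
        return "<null>"
    return "<string>"

sample = {"userId": "abc123", "restingHeartRate": 52, "sleep": {"deepSeconds": 5400}}
# mask(sample) -> {"userId": "<string>", "restingHeartRate": "<number>",
#                  "sleep": {"deepSeconds": "<number>"}}
```

The masked file still validates against the same schema, so it is safe to share as a shape-only sample.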
One-way sync from a local folder to OneDrive. Local is master: copies new and changed files, removes files deleted locally, cleans up empty folders. Dry-run mode included.
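The three steps – copy new/changed, delete orphans, prune empty folders – map directly onto code. A sketch under assumptions: change detection by modification time and a print-only dry run (the real script may compare differently):

```python
import shutil
from pathlib import Path

def sync(src, dst, dry_run=True):
    """One-way mirror: src is master, dst is brought in line with it."""
    src_files = {p.relative_to(src) for p in src.rglob("*") if p.is_file()}
    # 1) copy new and changed files
    for rel in sorted(src_files):
        s, d = src / rel, dst / rel
        if not d.exists() or s.stat().st_mtime > d.stat().st_mtime:
            print(f"copy   {rel}")
            if not dry_run:
                d.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(s, d)
    # 2) remove files that were deleted locally
    for p in list(dst.rglob("*")):
        if p.is_file() and p.relative_to(dst) not in src_files:
            print(f"delete {p.relative_to(dst)}")
            if not dry_run:
                p.unlink()
    # 3) clean up directories left empty (deepest first)
    if not dry_run:
        for p in sorted(dst.rglob("*"), reverse=True):
            if p.is_dir() and not any(p.iterdir()):
                p.rmdir()
```

Defaulting `dry_run` to `True` makes the destructive path opt-in, which matters when the destination is a synced OneDrive folder.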
Built with Claude · buy me a coffee

