Enterprise agile / portfolio management platform. Private repo, Mac-only dev.
Prereqs: Node 20+, Go 1.22+ (only if rebuilding the backend binary), Docker Desktop (only if running E2E tests), SSH key ~/.ssh/id_ed25519 with access to mmffdev.com.
git clone git@github.com:mmffdev/vector.git "MMFFDev - Vector"
cd "MMFFDev - Vector"
npm install

Env files (backend/.env.local) are committed, so no manual setup is needed.
Double-click MMFF Vector Launcher.app — SwiftUI dashboard with per-component start/stop/restart for SSH tunnel, Go backend (:5100), and Next.js frontend (:5101), plus DB env switching. See .claude/commands/c_launcher.md.
git fetch && git checkout -b mbp17-stream002 origin/main && git push -u origin mbp17-stream002
https://prod.liveshare.vsengsaas.visualstudio.com/join?72101E4DB4E44A69011C3ED3B0423C4430B1
# 1. SSH tunnel to Postgres (one-off setup adds an `mmffdev-pg` alias)
./dev/scripts/ssh_manager.sh # appends SSH config + opens tunnel
# subsequent sessions:
ssh -N -f mmffdev-pg # localhost:5434 → remote :5432
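A quick way to confirm the tunnel is actually forwarding (a sketch; assumes the Postgres client tools, which provide `pg_isready`, are installed):

```shell
# Probe the forwarded port; pg_isready exits 0 when the server accepts connections
pg_isready -h localhost -p 5434 \
  || echo "tunnel down; re-run: ssh -N -f mmffdev-pg"
```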
# 2. Frontend (new terminal, repo root)
npm run dev # http://localhost:5101
# 3. Backend (new terminal)
cd backend
go run ./cmd/server # http://localhost:5100

Selenium runs in a Docker container that drives a real browser against the live dev server.
docker run -d --name Selenium-Vector \
  -p 4444:4444 -p 7900:7900 \
  --shm-size 2g \
  selenium/standalone-all-browsers:nightly
npm run e2e # node:test runner; specs in e2e/

Watch the browser live at http://localhost:7900 (password: secret); the Grid UI is at http://localhost:4444/ui/. See docs/c_selenium.md and dev/planning/plan_selenium_e2e.md.
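Specs can fail with connection errors if they start before the Grid finishes booting. One way to gate the run (a sketch; the status endpoint and its JSON shape are standard Selenium Grid, but the polling loop here is just an illustration):

```shell
# Poll the Grid status endpoint until it reports ready, then run the suite
until curl -sf http://localhost:4444/wd/hub/status \
    | grep -q '"ready"[[:space:]]*:[[:space:]]*true'; do
  sleep 1
done
npm run e2e
```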
Claude Code uses OpenAI's open-source Whisper model for audio transcription. It runs locally on your machine — no API costs, fully private.
Setup:
# 1. Install Whisper CLI (one-time)
pip install openai-whisper
# 2. Pre-download a model (choose one by speed/quality tradeoff).
#    Note: the whisper CLI requires an audio file, so warm the cache from Python instead:
python -c "import whisper; whisper.load_model('base')"    # Fastest (~141MB); good for English
python -c "import whisper; whisper.load_model('small')"   # Better accuracy (~461MB)
python -c "import whisper; whisper.load_model('medium')"  # Even better (~1.4GB)
python -c "import whisper; whisper.load_model('large')"   # Best quality (~2.9GB)

If you skip step 2, the model auto-downloads on the first transcription (which takes a minute the first time).
Usage: In Claude Code, just ask to transcribe an audio file: "Transcribe /path/to/audio.mp3". The local Whisper model will convert speech to text.
Cleanup: If you later switch back to cloud-based transcription, remove the old installation with pip uninstall openai-whisper.
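Outside Claude Code you can also invoke the CLI directly; a minimal sketch (the input path and output directory are placeholders):

```shell
# Transcribe one file to plain text; the model downloads on first use if missing
whisper /path/to/audio.mp3 --model base --output_format txt --output_dir ./transcripts
```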
Drift detection and breaking-change protection for the Go router ↔ openapi.yaml ↔ frontend callers contract.
# Install oasdiff (one-time)
go install github.com/oasdiff/oasdiff@latest
# Check router + caller drift against openapi.yaml
npm run api:check
# Take a new snapshot + generate blast-radius report
npm run api:snap
# Install the pre-push hook (blocks pushes with undocumented routes or breaking changes)
npm run api:install-hooks

Breaking-change escape hatch: include [breaking] in your commit message (pre-push hook) or PR title/body (GitHub Actions) to allow intentional breaks through.
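These scripts wrap oasdiff, which you can also run by hand against a saved spec (a sketch; the snapshot filename is an assumption for illustration, not a confirmed repo convention):

```shell
# Compare a previous snapshot against the current spec; exit non-zero on breaking changes
oasdiff breaking openapi.snapshot.yaml openapi.yaml --fail-on ERR
```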
Dev panel: http://localhost:5101/dev → API Changelog tab.
See docs/c_c_lint_rules.md for full detail.
See .claude/CLAUDE.md for the full topic index.