
Vela Coach

A reading of you, from your last 30 meetings.

Vela Coach reads your real Granola meeting transcripts and writes a single, dense character reading of the operator they reveal — situated against academic frameworks for personality, communication, and leadership. The kind of thing a $500/hour coach would write after binging 90 days of your calendar.

Live: coach.vela.partners · License: MIT · Status: alpha

What it does

  1. Sign in with Granola — OAuth 2.0 + PKCE, no API keys to copy.
  2. Bring your own AI — pick OpenAI, Anthropic, or Google Gemini and paste your own API key once. The key lives in your browser's localStorage and is never persisted on our side. You pay your provider directly.
  3. Pick meetings — last 30 days by default, or a custom range.
  4. Sharpen the read — up to three quick questions if the model is unsure about an MBTI dimension. Skippable.
  5. Read — a single scrollable page synthesizing:
    • MBTI (Myers–Briggs) with per-letter confidence + evidence quotes
    • OCEAN / Big Five (Costa & McCrae 1992; Barrick & Mount 1991)
    • Communication style — directness, hedging rate, listening ratio, specificity
    • Leadership style (Goleman 1998)
    • Situational signature — how you flex across investor / team / customer counterparties
  6. Re-read — analysis can be re-run as you accumulate new meetings; readings auto-save to your browser.
  7. Export — download your reading as HTML (self-contained, mirrors the live UI) or Markdown (LLM-friendly).
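The Granola sign-in in step 1 uses PKCE, which replaces a client secret with a one-off verifier/challenge pair. A minimal sketch of generating that pair, assuming Node's built-in crypto (the helper name is illustrative, not the repo's actual export):

```typescript
import { createHash, randomBytes } from "node:crypto";

// PKCE (RFC 7636): the client keeps a random verifier and sends only its
// SHA-256 challenge in the authorize request; the token exchange later
// proves possession of the verifier, so no client secret is needed.
export function makePkcePair(): { verifier: string; challenge: string } {
  // 32 random bytes → 43-char base64url verifier (the RFC 7636 minimum).
  const verifier = randomBytes(32).toString("base64url");
  const challenge = createHash("sha256").update(verifier).digest("base64url");
  return { verifier, challenge };
}
```

The verifier stays on-device until the token exchange, which is why the flow works with a public client_id.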

How it stays private

  • Tokens, your AI provider key, founder profile, and readings live in your browser's localStorage. We have no database. The Cloud Run container is stateless.
  • Transcripts pass through our server in-memory only — Granola → server → your chosen AI provider → your browser. Nothing is written to disk on our infra. Logs hold counts and HTTP status codes; never content, never keys.
  • Bring your own AI. OpenAI/Anthropic/Gemini billing flows through your account, with your data-use terms — not ours.
  • No analytics, no cookies, no third-party scripts.
  • /reset wipes everything Coach has written to your device.

Full details in PRIVACY.md.
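The BYOK model above can be sketched as a thin wrapper over Web Storage. This is an illustrative sketch, not the repo's actual src/lib/apiKey.ts; the Storage-like interface is injected so the same code runs outside a browser:

```typescript
type Provider = "openai" | "anthropic" | "gemini";

// Structural subset of the browser's Storage interface.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
  removeItem(key: string): void;
}

// One namespaced key per provider, so switching providers keeps both keys.
const keyName = (p: Provider) => `coach:apiKey:${p}`;

export function saveApiKey(store: KVStore, p: Provider, key: string): void {
  store.setItem(keyName(p), key); // stays on-device; never sent to our server
}

export function loadApiKey(store: KVStore, p: Provider): string | null {
  return store.getItem(keyName(p));
}

export function clearApiKey(store: KVStore, p: Provider): void {
  store.removeItem(keyName(p));
}
```

In the app the injected store would be `window.localStorage`; the indirection exists only to keep the helpers testable.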

Quickstart

git clone https://github.com/vela-engineering/coach.git
cd coach
npm install
npm run dev -- -p 4070

Open http://localhost:4070, sign in with Granola, then pick an AI provider and paste your API key. Get a key from your chosen provider's console (OpenAI, Anthropic, or Google AI Studio).

That's it — no .env file required for local dev. The repo's open-source OAuth client is pre-registered for localhost:4070.

Tech stack

  • Framework: Next.js 16 (App Router) + TypeScript
  • Styling: Tailwind CSS v4 + Framer Motion
  • AI: any of OpenAI (gpt-5.5), Anthropic (claude-opus-4-7), or Google Gemini (gemini-3.1-pro-preview) via dynamic-imported provider adapters in src/lib/llm/
  • Data: Granola public REST API (public-api.granola.ai/v1)
  • Auth: Granola MCP OAuth (mcp-auth.granola.ai) — RFC 7591 dynamic client registration
  • Tests: Vitest, 330+ tests over src/__tests__/
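The dynamic-imported adapter pattern mentioned above can be sketched as a lazy factory. The adapter bodies below are stubs for illustration; in the real src/lib/llm/ each entry would dynamically import the vendor SDK so it only loads once its provider is chosen:

```typescript
type LLMProvider = "openai" | "anthropic" | "gemini";

interface LLMClient {
  complete(prompt: string): Promise<string>;
}

// In real code an entry would look like:
//   openai: async (key) => (await import("./openai")).createClient(key)
// keeping each vendor SDK out of the bundle until it is actually used.
const adapters: Record<LLMProvider, (key: string) => Promise<LLMClient>> = {
  openai: async () => ({ complete: async (p) => `openai:${p}` }),
  anthropic: async () => ({ complete: async (p) => `anthropic:${p}` }),
  gemini: async () => ({ complete: async (p) => `gemini:${p}` }),
};

export async function getLLMClient(
  provider: LLMProvider,
  apiKey: string,
): Promise<LLMClient> {
  if (!apiKey) throw new Error(`missing API key for ${provider}`);
  return adapters[provider](apiKey);
}
```

The factory shape also makes the analysis engine provider-agnostic: it only ever sees an LLMClient.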

Architecture

src/
├── app/
│   ├── api/
│   │   ├── auth/token/      # OAuth token exchange proxy (no logs of payload)
│   │   ├── character/       # Streaming Character Reading endpoint
│   │   ├── chat/            # Coaching follow-up chat
│   │   ├── llm/verify/      # One-shot key probe before saving
│   │   ├── meetings/        # Granola REST proxy
│   │   └── repo-stats/      # GitHub stars (cached server-side)
│   ├── auth/callback/       # OAuth redirect handler
│   └── reset/               # Wipe-localStorage page (with confirm)
├── components/
│   ├── Onboarding.tsx       # Cover → privacy → AI provider → sign in
│   ├── ProviderSetup.tsx    # Pick provider + paste key + verify
│   ├── FounderIntake.tsx    # Founder profile (pre-filled on returning)
│   ├── SharpenRead.tsx      # MBTI follow-up questions
│   ├── CharacterReading.tsx # The reading page
│   ├── CoachView.tsx        # Header + reading shell + export menu
│   ├── ConfirmDialog.tsx    # Reusable destructive-action modal
│   └── ...
└── lib/
    ├── llm/                 # Provider abstraction
    │   ├── types.ts         # LLMClient, LLMProvider, LLMUsage
    │   ├── index.ts         # getLLMClient(provider, key) factory
    │   ├── gemini.ts        # @google/genai adapter
    │   ├── openai.ts        # openai adapter
    │   └── anthropic.ts     # @anthropic-ai/sdk adapter
    ├── apiKey.ts            # Per-provider keys in localStorage
    ├── character.ts         # Analysis engine (provider-agnostic)
    ├── grounding.ts         # Web search (Gemini only — graceful no-op for others)
    ├── granolaRest.ts       # Granola REST client
    ├── relationshipMetrics.ts
    ├── exportHtml.ts        # Self-contained HTML export
    ├── sessionFile.ts       # Markdown export + parse
    ├── reset.ts             # Single source of truth for "wipe all data"
    ├── mbti.ts              # Pure MBTI helpers
    ├── phase.ts             # Phase state machine
    ├── founderProfile.ts    # localStorage profile
    └── sessions.ts          # localStorage session store
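The "single source of truth" idea behind reset.ts can be sketched as one prefix list that every wipe walks. Prefixes and names here are illustrative, not the repo's actual values; the Storage-like interface mirrors the browser's key/length/removeItem shape:

```typescript
// One list of key prefixes owned by Coach — features that add storage
// register a prefix here, so /reset can never miss them.
const COACH_PREFIXES = ["coach:", "granola:"];

interface StoreLike {
  readonly length: number;
  key(index: number): string | null;
  removeItem(key: string): void;
}

export function resetAll(store: StoreLike): string[] {
  // Collect first, then remove: deleting while indexing Storage by
  // position would skip entries as the indices shift.
  const doomed: string[] = [];
  for (let i = 0; i < store.length; i++) {
    const k = store.key(i);
    if (k && COACH_PREFIXES.some((p) => k.startsWith(p))) doomed.push(k);
  }
  for (const k of doomed) store.removeItem(k);
  return doomed; // removed keys, e.g. for a confirmation screen
}
```

Returning the removed keys keeps the confirm dialog honest about exactly what was wiped.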

Run tests

npm test          # vitest run, 330+ tests
npm run test:watch

Production deploy (Cloud Run)

# Register a new Granola OAuth client for your prod redirect URI:
curl -sS -X POST https://mcp-auth.granola.ai/oauth2/register \
  -H "Content-Type: application/json" \
  -d '{
    "client_name": "Your Coach (production)",
    "redirect_uris": ["https://your-domain/auth/callback"],
    "grant_types": ["authorization_code", "refresh_token"],
    "response_types": ["code"],
    "token_endpoint_auth_method": "none",
    "scope": "openid email profile offline_access"
  }'
# → returns { "client_id": "client_xxx" }

# Build via Cloud Build with that client_id baked in:
gcloud builds submit --config cloudbuild.yaml \
  --substitutions=_IMAGE=us-central1-docker.pkg.dev/$PROJECT_ID/cloud-run-source-deploy/coach:latest,_NEXT_PUBLIC_GRANOLA_CLIENT_ID=client_xxx

# Roll out to Cloud Run — note: NO server-side LLM key needed (BYOK):
gcloud run deploy coach \
  --image=us-central1-docker.pkg.dev/$PROJECT_ID/cloud-run-source-deploy/coach:latest \
  --region=us-central1 --allow-unauthenticated --memory=1Gi --timeout=900 \
  --set-env-vars=NEXT_PUBLIC_GRANOLA_CLIENT_ID=client_xxx

The client_id is public by design (PKCE OAuth, no client secret). See SECURITY.md for the trust model and PRIVACY.md for what does + doesn't touch the server.

Design

Dark, intimate, editorial. Inspired by what a private executive coaching session feels like:

  • Palette — Void black (#08080a) + warm amber candlelight (#d4a04a)
  • Typography — Newsreader serif italic for personality, Outfit sans for body
  • Details — Ambient glow, grain texture, dotted-underline citation links
  • Voice — Second person, citation-grounded, never hedges when evidence is strong

Contributing

Issues, PRs, and discussions welcome. See CONTRIBUTING.md for setup, conventions, and what we don't accept.

For security issues, see SECURITY.md. For conduct, CODE_OF_CONDUCT.md.

Citations

Coach's prose draws explicitly from a small canon. Each claim in the reading footnotes its source:

  • Costa & McCrae (1992) — Revised NEO Personality Inventory
  • Barrick & Mount (1991) — The Big Five Personality Dimensions and Job Performance
  • McCrae & Costa (1989) — Reinterpreting the Myers-Briggs Type Indicator from the Five-Factor Model
  • Furnham (1996) — The Big Five vs the Big Four
  • Goleman (1998) — What Makes a Leader?
  • Pennebaker (2011) — The Secret Life of Pronouns
  • Leary (1957) — Interpersonal Diagnosis of Personality
  • Edmondson (1999) — Psychological Safety and Learning Behavior in Work Teams
  • Schein (2009) — Helping
  • Granovetter (1973) — The Strength of Weak Ties

License

MIT — © 2026 Vela Engineering.
