
Alan Context

The health agent that calls you — because when you're burning out, you won't open the app.

Alan Context fuses biometric signals (Thryve wearables) with life context (optional voice notes) into an auditable Karpathy-style markdown wiki. Every morning, Mistral recompiles the wiki and a linter scans it for sustained multi-day patterns. When three independent layers align — physiological, behavioral, contextual — and only then, the agent picks up the phone and calls you. No dashboard. No chatbot. Silence is the feature.

Built solo in 12h for the Alan × Mistral AI Health Hackathon (April 2026).

Team

| Name | LinkedIn |
| --- | --- |
| Emile Jouannet | www.linkedin.com/in/emile-jouannet-1225aa251 |

The Problem

In November 2025, I burned out. Not dramatically — I just stopped running, started sleeping badly, and told myself I was fine. For three weeks. My Oura ring saw it coming. My HRV dropped over ten days. Every metric pointed the same direction. Nobody called me.

We have more health data than ever — Oura, Apple Watch, CGMs, Garmin. The signal is there. But when you're burning out, you don't open a dashboard. You don't type into a chatbot. You have no energy. You tell yourself you're fine.

The moment you most need your data is exactly when you have the least energy to question it.

Every existing health product assumes the user will pull: check the app, query the chatbot, read the insight. But pulling is the opposite of what a burned-out human does. You need something that pushes, at the right moment, once.

What It Does

Alan Context is a proactive health agent built around three ideas:

  1. The wiki is the memory. On onboarding, Mistral compiles 90 days of raw wearable data into a personal markdown knowledge graph — baseline, patterns, known historical episodes. No vector DB. No RAG. Plain markdown a doctor can open, read, correct, and audit. Inspired by Karpathy's vision of personal AI memory: a compiler, not for machines, but for humans.

  2. The linter is the judgement. Every morning, new data arrives, Mistral updates the wiki, a linter reads it. Three layers must align — physiological (HRV/resting heart rate sustained abnormal), behavioral (sleep fragmentation or activity drop sustained), contextual (voice note with an explicit stressor matching a known pattern). All three, sustained over multiple days. A single bad day never triggers. Most days: silence.

  3. The call is the action. When the linter fires, Bland AI places an outbound conversational call. The agent opens with an alert + one open question, listens to the user, proposes a teleconsult with Dr. Moreau on Alan, gives one concrete health tip, and closes warmly. The user confirms with one tap. The AI does the reasoning — the human keeps the authority.
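The three-layer trigger described above can be sketched as a pure function. This is a minimal illustration of the decision rule only, not the actual linter (which asks Mistral to read the wiki); `LayerSignals` and `should_call` are hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class LayerSignals:
    """Per-layer booleans, each already checked for multi-day persistence."""
    physiological: bool  # HRV / resting heart rate sustained abnormal
    behavioral: bool     # sleep fragmentation or activity drop sustained
    contextual: bool     # voice note matches a known stressor pattern

def should_call(signals: LayerSignals) -> bool:
    # The agent only calls when all three independent layers align.
    # A single bad day, or any two layers alone, stays silent.
    return signals.physiological and signals.behavioral and signals.contextual
```

Any subset of fewer than three layers keeps the agent silent, which is why most days produce nothing at all.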

How a user interacts

  • Onboarding (2 min) — Connect your wearable via Thryve. Mistral compiles your 90 days of history into the wiki. Baseline established. You close the app.
  • Every day — New data is compiled automatically. Nothing happens. No notification. No dashboard. You forget Alan Context exists.
  • Voice notes (optional) — Leave a voice note whenever you feel like it. Voxtral transcribes it. It becomes your personal journal and sharpens pattern detection.
  • One day, maybe every few months — The three layers align. Your phone rings. The agent asks how you're really feeling, proposes a Dr. Moreau teleconsult, holds the slot. You confirm with one tap.

The demo compresses months of logic into ninety seconds. On stage: one voice note → the graph updates live → the phone rings → a real conversation plays through the speaker. The logic is real; only the timing is accelerated.

Tech Stack

  • Mistral 🚀 — compiler (mistral-large-latest) that turns raw Thryve exports into the wiki, linter that decides whether to call, Voxtral (voxtral-mini-2507) for voice-note transcription with emotional signal
  • Thryve 🚀 — real wearable sandbox (Oura, Garmin, Fitbit, Withings, Whoop, Samsung) — 90 days of daily biometrics seed the wiki
  • ElevenLabs 🚀 — pre-rendered fallback audio (demo parachute if the outbound call stalls)
  • Bland AI — outbound conversational phone call (supports +33 French numbers, which Retell's free tier blocks)
  • FastAPI + Python — backend (mistralai SDK for compilation, REST for Voxtral)
  • React Native (Expo) — mobile app with calendar view + voice recorder
  • Obsidian Graph View — visualization of the wiki on stage (nodes, backlinks, color groups)

Architecture:

```
vault/raw/    daily ingestion: Thryve biometrics + voice notes
vault/wiki/   Mistral compiles raw → markdown knowledge graph
              ├── index.md            baseline profile + current status
              ├── patterns.md         historical critical patterns
              ├── Pattern Burnout.md  pattern node (Nov 2025 episode)
              ├── Pattern Recovery.md protocol node
              ├── Dr. Moreau.md       referring physician node
              └── daily/YYYY-MM-DD.md one entry per day

linter        Mistral scans the wiki → trigger outbound call only when
              3 layers sustained over multiple days
bland         Conversational outbound call → propose teleconsult + 1 tip
fallback      ElevenLabs mp3 plays locally if Bland stalls
```

No vector DB. No RAG. No hallucinations. The wiki is plain markdown that a referring doctor can open, read, and correct.
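The raw → wiki step targets plain markdown files like the ones in the tree above. The sketch below shows only the shape of one daily entry; `render_daily_entry` and its field names are hypothetical (the real compiler is Mistral, not a template):

```python
from datetime import date
from typing import Optional

def render_daily_entry(day: date, metrics: dict, voice_note: Optional[str] = None) -> str:
    """Hypothetical sketch of one vault/wiki/daily/YYYY-MM-DD.md entry."""
    lines = [f"# {day.isoformat()}", "", "## Biometrics"]
    for name, value in sorted(metrics.items()):
        lines.append(f"- {name}: {value}")
    lines += ["", "## Contextual signals"]
    # A missing or positive note is recorded as the absence of a stress signal.
    lines.append(voice_note if voice_note else "No contextual stress signal")
    return "\n".join(lines) + "\n"

# Would be written to vault/wiki/daily/2026-04-18.md
entry = render_daily_entry(date(2026, 4, 18),
                           {"hrv_rmssd_ms": 38, "sleep_efficiency_pct": 71})
```

Because the output is ordinary markdown, a doctor can open, read, and correct any entry with no tooling.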

Special Track

  • Alan Play: Living Avatars
  • Alan Play: Mo Studios
  • Alan Play: Personalized Wrapped
  • Alan Play: Health App in a Prompt
  • Alan Precision — Challenge 3: Agentic Health Protocols

Alan Context is a direct fit for the "Open Claw for health" brief. It doesn't advise — it acts: detects a sustained pattern, places a call, proposes a concrete Dr. Moreau teleconsult on Alan, holds the slot. Real Thryve data in, real outbound phone call out. The user confirms with one tap. Agentic, not advisory.

What We'd Do Next

  • Silent drift protocol — if HRV stays critical for 14+ days with no voice confirmation of recovery, escalate on biometrics alone. Today, voice is the gate; tomorrow, extreme sustained signal is enough.
  • Plug in more sources — the wiki is just markdown, so blood panels (Alan Precision), nutrition, calendar, cycle data all plug in as additional files. Each new source makes the linter smarter without touching the core. The architecture compounds.
  • Voxtral prosody features — today transcription only; next, feed prosody (flatness, hesitation, energy) into the contextual layer so the agent reads how you said it, not just what.
  • Real Alan API integration — today vault/raw/alan/ mocks Alan's teleconsult availability. Swap in the real Alan API so the agent truly holds a live Dr. Moreau slot.
  • Per-user personalization at scale — the wiki is per-user and private. Next step: ship a secure per-user vault with doctor-side read access (audit trail, corrections, validations).
  • Multi-wearable fusion — today Thryve abstracts one device at a time. Next: fuse Oura + Apple Watch + CGM into a unified physiological layer, so the linter reads the strongest signal per metric.

Quickstart

```bash
# 1. Backend
cd backend
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt
cp .env.example .env          # fill in API keys

# 2. (One-time) seed the wiki + render the fallback audio
python compile_all.py --fetch   # pull Thryve data, compile daily entries
python compile_weekly.py        # group daily entries → weekly summaries
python generate_fallback.py     # render ElevenLabs mp3 → backend/assets/fallback.mp3

# 3. Run the backend
uvicorn main:app --host 0.0.0.0 --port 8000 --reload

# 4. Mobile
cd ../mobile
npm install
# Edit BACKEND_URL at the top of App.js to your laptop's LAN IP
npm start
```

Required env vars

| Var | Why it matters | What breaks without it |
| --- | --- | --- |
| MISTRAL_API_KEY | Compiler + linter + Voxtral | Backend cannot do anything |
| BLAND_API_KEY | Outbound conversational calls | Phone doesn't ring → fallback audio used |
| DEMO_USER_PHONE | Who Bland calls | Nobody receives the call |
| ELEVENLABS_API_KEY + ELEVENLABS_VOICE_ID | Pre-render fallback audio | Cannot regenerate fallback.mp3 |
| THRYVE_* (4 vars) | Pulling sandbox biometrics | Cannot fetch new data (existing raw files still work) |
| RETELL_* (3 vars) | Optional Retell primary (blocked on +33) | Falls through to Bland AI automatically |

The backend prints a startup warning for any missing key so you see what's broken before the demo.
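A startup check like this one covers that warning behavior. This is a generic sketch of the pattern, not the repo's actual code; the key list and messages are illustrative:

```python
import os

# Illustrative subset of the required keys and their roles
REQUIRED_KEYS = {
    "MISTRAL_API_KEY": "compiler + linter + Voxtral",
    "BLAND_API_KEY": "outbound calls (fallback audio used instead)",
    "DEMO_USER_PHONE": "who Bland calls",
}

def missing_keys(env=None) -> list:
    """Return one warning line per unset key so breakage is visible at startup."""
    if env is None:
        env = os.environ
    return [f"WARNING: {key} not set ({role})"
            for key, role in REQUIRED_KEYS.items() if not env.get(key)]

for line in missing_keys():
    print(line)
```

Printing the warnings at boot means a missing key is discovered during setup, not mid-demo.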

Linter trigger logic

All three layers must be present AND sustained — never fires on a single bad day:

| Layer | Requirement |
| --- | --- |
| Physiological | HRV (RMSSD) below 42 ms OR dropping, across multiple daily entries (last 7 + index.md summary) |
| Behavioral | Sleep efficiency <75% OR fragmentation sustained across 3+ entries, or activity drop over several days |
| Contextual | Today's voice note explicitly mentions a concrete stressor (deadline, "slept 4 hours", "cancelled my run", "exhausted") AND matches a known pattern in patterns.md |

If the user says they feel fine ("je vais très bien", French for "I'm doing very well", or "I'm good"), the contextual layer is not met — period. Silence is the success metric.
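The contextual gate can be approximated with a keyword sketch. In reality Mistral evaluates the transcript against patterns.md; the phrase lists and `contextual_layer_met` below are hypothetical stand-ins that only show the precedence rule (a positive note always wins):

```python
POSITIVE_PHRASES = ("i'm good", "i'm fine", "je vais très bien", "feeling great")
STRESSOR_CUES = ("deadline", "slept 4 hours", "cancelled my run", "exhausted")

def contextual_layer_met(transcript: str) -> bool:
    """Hypothetical sketch of the contextual layer check."""
    text = transcript.lower()
    if any(phrase in text for phrase in POSITIVE_PHRASES):
        return False  # user says they're fine: layer not met, period
    return any(cue in text for cue in STRESSOR_CUES)
```

Checking the positive phrases first encodes the rule above: a self-reported "I'm good" vetoes any stressor keyword in the same note.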

API

| Method | Path | Purpose |
| --- | --- | --- |
| GET | / | Health + config check |
| GET | /status | Lightweight state for the mobile client |
| POST | /voice-note | Audio upload → full demo pipeline (transcribe → augment → lint → call) |
| POST | /trigger-call | Manual outbound call (stage safety net) |
| POST | /demo-reset | Clear today's daily + teleconsult node (clean state between dry runs) |
| POST | /play-fallback | Mark fallback as played; returns /fallback-audio URL |
| GET | /fallback-audio | Serves backend/assets/fallback.mp3 |
| POST | /cron | Re-compile today + run linter |
| GET | /lint | Run the linter only — debugging |
| GET | /day/{day} | Thryve metrics + voice notes for YYYY-MM-DD |
| GET | /calendar/{ym} | Per-day summary for YYYY-MM (drives mobile calendar) |
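As an example of how thin these endpoints can be on top of a markdown vault, `/calendar/{ym}` reduces to filtering daily filenames. This is a hedged sketch, not the actual handler; `month_summary` is a hypothetical helper:

```python
def month_summary(ym: str, daily_files: list) -> list:
    """Hypothetical core of GET /calendar/{ym}: keep vault/wiki/daily/
    filenames (YYYY-MM-DD.md) that fall inside the requested YYYY-MM."""
    return sorted(f for f in daily_files
                  if f.startswith(ym + "-") and f.endswith(".md"))
```

Because the wiki is just files on disk, the calendar view needs no database query, only a directory listing.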

Operational notes

  • The linter is fail-safe. If Mistral is down or returns invalid JSON, the endpoint returns trigger: false with an error string. It never 500s mid-demo.
  • Voxtral via REST, not SDK. The mistralai SDK v1.2.0 has no .audio attribute. Transcription calls REST directly. Model: voxtral-mini-2507 (voxtral-mini-latest alias returns empty text).
  • Augment-not-recompile. compiler.augment_with_voice preserves today's wiki entry structure and only updates ## Contextual signals, keeping the Obsidian graph stable across rehearsals.
  • Positive voice notes never trigger. The compiler detects positive/neutral notes and writes "No contextual stress signal" instead of a burnout flag. The linter has an explicit rule against false positives on "I'm good".
  • The fallback audio is the parachute. If Bland stalls during the pitch, the mobile app fetches /fallback-audio and plays it locally. Same UX for the jury.
  • Regenerate fallback.mp3 after any brief change. The mp3 is baked at generation time. Run python generate_fallback.py after modifying DEFAULT_FALLBACK_BRIEF in linter.py.
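The fail-safe behavior in the first note above follows a simple wrapper pattern. A minimal sketch, assuming the model's raw output is handed in as a string (the real linter calls Mistral first; `safe_lint` is a hypothetical name):

```python
import json

def safe_lint(raw_model_output: str) -> dict:
    """Never raise: invalid JSON from the model becomes trigger=False
    plus an error string, so the endpoint cannot 500 mid-demo."""
    try:
        result = json.loads(raw_model_output)
        return {"trigger": bool(result.get("trigger", False)), "error": None}
    except (json.JSONDecodeError, AttributeError) as exc:
        return {"trigger": False, "error": str(exc)}
```

Defaulting to silence on any failure matches the product's bias: a missed call is recoverable, a spurious 500 on stage is not.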

Honest limits

  • "Plugs into Alan teleconsult availability" is mocked via vault/raw/alan/YYYY-MM-DD.md. The linter reads it like real data; swapping for the real Alan API is a one-file change.
  • The demo is single-user. No auth, no rate limiting, CORS open. Hackathon prototype, not production.
  • Voxtral emotional signal is transcription only here. In production, Voxtral's prosody features would feed sentiment into the contextual layer.
  • The 15-second call delay in /voice-note is a demo pacing choice — it gives the jury time to see the graph update before the phone rings. In production, the trigger is immediate.

About

Alan Context — The health agent that calls you. Winner of the Alan Precision Track at the Alan × Mistral AI Health Hackathon 2026.
