Percept is a multimodal health intelligence platform that helps people and providers understand how physical symptoms and emotional patterns evolve together over time.
It is not a symptom checker, not a generic wellness chatbot, and not an AI doctor.
Percept is designed to make care more continuous, visual, and human.
Health has a fragmentation problem.
The body and mind are deeply connected, but most tools treat them as separate systems. Physical symptoms get pushed into anatomy diagrams and search results, while emotional patterns are spread across journals, sessions, mood logs, and memory.
In practice, people are left connecting signals on their own:
- stress and sleep changes
- soreness and movement patterns
- recurring symptoms
- journal themes
- session commitments
- long-term progress
That gap became our thesis:
Can we build a multimodal health intelligence system that makes physical and emotional patterns visible without replacing the judgment of patients or providers?
Percept connects body symptoms, reflections, journals, and provider context into one living health record.
Users can describe what they feel in natural language (for example: tightness, soreness, fatigue, stress, or recurring discomfort). Percept then structures that input across two connected layers:
- Body layer: an interactive, spatial view of where symptoms sit on the body and how regions relate
- Longitudinal layer: timeline-based reflection, mood, and session continuity
Percept maps symptom context onto an interactive body model to surface:
- relevant muscle groups
- connected body regions
- movement relationships
- plain-language context
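As a rough illustration of the body-mapping step, here is a minimal sketch in JavaScript. The keyword table and `mapSymptomToBody` function are hypothetical, illustrative stand-ins; Percept's actual mapping is produced server-side with the Claude API.

```javascript
// Hypothetical keyword table: each entry carries the spatial context the
// body layer surfaces (region, muscle groups, connected regions).
const REGION_KEYWORDS = {
  shoulder: { region: "shoulder", muscles: ["deltoid", "trapezius"], related: ["neck", "upper back"] },
  "lower back": { region: "lower back", muscles: ["erector spinae"], related: ["hips", "hamstrings"] },
  calf: { region: "calf", muscles: ["gastrocnemius", "soleus"], related: ["ankle", "hamstrings"] },
};

// Scan free-text input for known region keywords and return spatial context.
function mapSymptomToBody(text) {
  const lower = text.toLowerCase();
  return Object.keys(REGION_KEYWORDS)
    .filter((kw) => lower.includes(kw))
    .map((kw) => REGION_KEYWORDS[kw]);
}
```

For input like "tightness in my shoulder after long desk days", this returns the shoulder entry with its related muscle groups and regions, which the interactive body model can then highlight.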
Users can capture text or voice reflections, set visibility controls, and build continuity over time. Percept helps surface:
- recurring themes
- emotional shifts
- commitments
- symptom patterns
- body-mind relationships worth exploring
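To make the "recurring themes" idea concrete, a toy sketch of theme counting across reflections (the theme vocabulary here is invented for illustration; the production extractor delegates this to the Claude API):

```javascript
// Hypothetical theme vocabulary, purely illustrative.
const THEMES = {
  sleep: ["sleep", "tired", "insomnia"],
  stress: ["stress", "overwhelmed", "anxious"],
  exercise: ["run", "gym", "workout"],
};

// Count how often each theme recurs across reflections; only themes that
// appear at least minCount times are surfaced as patterns.
function recurringThemes(reflections, minCount = 2) {
  const counts = {};
  for (const text of reflections) {
    const lower = text.toLowerCase();
    for (const [theme, words] of Object.entries(THEMES)) {
      if (words.some((w) => lower.includes(w))) {
        counts[theme] = (counts[theme] || 0) + 1;
      }
    }
  }
  return Object.entries(counts)
    .filter(([, n]) => n >= minCount)
    .map(([theme, n]) => ({ theme, occurrences: n }));
}
```

The threshold reflects the product stance: a single mention is noise, a repeated theme is a pattern worth surfacing.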
Percept organizes shared context into a concise session-prep view so providers can quickly understand:
- what changed
- what repeated
- what returned
- what was committed to
- what the patient chose to share
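A sketch of how such a brief could be condensed from timeline events. The event fields (`type`, `text`, `visibility`) are assumptions for illustration, not Percept's actual schema:

```javascript
// Build a provider-facing brief from timeline events. Only events the
// patient chose to share ever reach this view.
function buildSessionBrief(events) {
  const shared = events.filter((e) => e.visibility !== "private");
  const symptomCounts = {};
  for (const e of shared) {
    if (e.type === "symptom") {
      symptomCounts[e.text] = (symptomCounts[e.text] || 0) + 1;
    }
  }
  return {
    // What was committed to.
    commitments: shared.filter((e) => e.type === "commitment").map((e) => e.text),
    // What repeated: symptoms logged more than once.
    repeated: Object.keys(symptomCounts).filter((s) => symptomCounts[s] > 1),
    // How much the patient chose to share.
    sharedCount: shared.length,
  };
}
```

The point of the shape is scanability: a provider reads a handful of grouped items instead of replaying a whole timeline.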
Human interpretation stays in control.
Percept uses AI as a synthesis layer, not as a replacement for care.
Core capabilities include:
- natural language symptom interpretation
- body-region and muscle-group mapping
- theme extraction from reflections
- emotional pattern highlighting
- session recap generation
- commitment tracking
- longitudinal context synthesis
Core principle: Percept surfaces patterns, but does not diagnose, prescribe, or make clinical decisions.
Percept is a full-stack web app that combines conversational interaction, body visualization, timeline continuity, and AI-backed synthesis.
- React
- Vite
- JavaScript (ES6+)
- CSS3
- Node.js
- Express
- REST API endpoints for chat, triage, narration, and muscle mapping
- Claude API integration for health-context responses and structured synthesis
- ElevenLabs API integration for optional text-to-speech playback
- Web Speech API integration for voice capture
- LocalStorage for persistent timeline continuity in-browser
- Patient-controlled visibility model (private, shared, include-in-brief)
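The three visibility tiers named above (private, shared, include-in-brief) can be expressed as a single predicate. The audience rules below are an assumed reading of the model, not the exact production logic:

```javascript
// Decide whether an event is visible to a given audience under the
// patient-controlled visibility model.
function visibleTo(event, audience) {
  if (audience === "patient") return true;          // patients always see their own record
  if (event.visibility === "private") return false; // private data never leaves the browser
  if (audience === "provider") return true;         // "shared" and "include-in-brief"
  if (audience === "brief") return event.visibility === "include-in-brief";
  return false;                                     // unknown audiences see nothing
}
```

Keeping the rule in one place makes the sharing model auditable: every view that renders patient data goes through the same check.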
- Natural language input
- Classification as symptom, reflection, or session context
- Body-region or theme mapping
- Structured event creation
- Timeline/context update
- Pattern synthesis
- User/provider-facing explanation
Physical input becomes spatial context. Journal/session input becomes temporal context. Together they form a connected view of health over time.
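The first two steps of the pipeline (classification, then structured event creation) can be sketched as follows. The keyword lists are toy stand-ins; the deployed pipeline delegates classification to the Claude API:

```javascript
// Toy vocabularies for illustration only.
const SYMPTOM_WORDS = ["pain", "sore", "tight", "ache", "fatigue"];
const SESSION_WORDS = ["session", "appointment", "therapist", "provider"];

// Classify natural-language input as symptom, session context, or reflection.
function classifyInput(text) {
  const lower = text.toLowerCase();
  if (SYMPTOM_WORDS.some((w) => lower.includes(w))) return "symptom";
  if (SESSION_WORDS.some((w) => lower.includes(w))) return "session";
  return "reflection";
}

// Structured event creation: every input becomes a timeline event,
// private by default until the patient widens its visibility.
function toEvent(text, now = Date.now()) {
  return { type: classifyInput(text), text, timestamp: now, visibility: "private" };
}
```

Downstream steps (timeline update, pattern synthesis, explanation) then operate on these uniform events rather than raw text.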
- Balancing product complexity with simple UX
- Preventing AI overconfidence in a health context
- Making body and timeline visuals informative, not decorative
- Enforcing safety boundaries without weakening usefulness
We intentionally avoided unsupported medical claims and automated diagnosis behavior.
- Unified body + reflection + timeline product loop
- Patient-controlled sharing model
- Session prep that reduces provider context switching
- AI synthesis that prioritizes clarity and restraint
Health intelligence is not about collecting more data. It is about connecting the right signals in the right way.
A single symptom or reflection can be noisy. Patterns across time are where useful insight emerges.
- expand interactive body coverage
- improve evidence-grounded explanation quality
- strengthen theme extraction and continuity summaries
- enhance provider session-prep tooling
- refine privacy and consent controls
- explore wearable and measurement-based integrations
React, Vite, JavaScript (ES6+), CSS3, Node.js, Express.js, Claude API, ElevenLabs API, Web Speech API, LocalStorage, Vercel (frontend), Render (backend), GitHub.
- `frontend/`: React + Vite client app
- `backend/`: Express API for chat, triage, narration, body mapping, and voice helpers
- `data/`: product data assets
- Frontend

```shell
cd frontend
npm install
npm run dev
```

- Backend

```shell
cd backend
npm install
cp .env.example .env
npm run dev
```

The frontend runs on http://localhost:5173 and calls the backend via VITE_API_URL.
Backend (backend/.env):
- `GEMINI_API_KEY` (required)
- `ELEVENLABS_API_KEY` (optional)
- `ELEVENLABS_VOICE_ID` (optional)
- `PORT` (optional, defaults to 3001)
Frontend:
- `VITE_API_URL`: deployed backend URL
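A minimal example of both env files, with placeholder values (the real API keys and deployed backend URL are yours to supply):

```
# backend/.env
GEMINI_API_KEY=your-gemini-key          # required
ELEVENLABS_API_KEY=your-elevenlabs-key  # optional
ELEVENLABS_VOICE_ID=your-voice-id       # optional
PORT=3001                               # optional, defaults to 3001

# frontend/.env
VITE_API_URL=https://your-backend.example.com
```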