
Percept

A multimodal health intelligence platform that helps people and providers understand how physical symptoms and emotional patterns evolve together over time.

It is not a symptom checker, not a generic wellness chatbot, and not an AI doctor.

Percept is designed to make care more continuous, visual, and human.

Inspiration

Health has a fragmentation problem.

The body and mind are deeply connected, but most tools treat them as separate systems. Physical symptoms get pushed into anatomy diagrams and search results, while emotional patterns are spread across journals, sessions, mood logs, and memory.

In practice, people are left connecting signals on their own:

  • stress and sleep changes
  • soreness and movement patterns
  • recurring symptoms
  • journal themes
  • session commitments
  • long-term progress

That gap became our thesis:

Can we build a multimodal health intelligence system that makes physical and emotional patterns visible without replacing the judgment of patients or providers?

What It Does

Percept connects body symptoms, reflections, journals, and provider context into one living health record.

Users can describe what they feel in natural language (for example: tightness, soreness, fatigue, stress, or recurring discomfort). Percept then structures that input across two connected layers:

  • Body layer: an interactive, spatial view of where symptoms sit on the body
  • Longitudinal layer: timeline-based continuity across reflections, mood, and sessions
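
As a rough illustration, a single entry might be stored in a shape like the following; the field names are hypothetical, not Percept's actual schema:

  // Hypothetical shape of one structured entry (illustrative only).
  const entry = {
    source: 'chat',                                    // chat | voice | journal
    kind: 'symptom',                                   // symptom | reflection | session
    text: 'Tightness in my lower back after long runs',
    body: { regions: ['lower_back'] },                 // body layer: spatial context
    timeline: { recordedAt: Date.now(), themes: [] },  // longitudinal layer
    visibility: 'private',                             // private | shared | include-in-brief
  };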

Body Understanding

Percept maps symptom context onto an interactive body model to surface:

  • relevant muscle groups
  • connected body regions
  • movement relationships
  • plain-language context
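
A mapping result could plausibly look like the sketch below; the field names and values are assumptions, not the repository's actual API:

  // Hypothetical muscle-mapping result (illustrative only).
  const mapping = {
    input: 'soreness on the outside of my right knee',
    regions: ['knee_lateral_right'],
    muscleGroups: ['IT band', 'vastus lateralis'],
    relatedRegions: ['hip_right'],  // connected body regions
    plainLanguage: 'The outer knee and hip share load along the outer thigh.',
  };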

Reflection and Journaling

Users can capture text or voice reflections, set visibility controls, and build continuity over time. Percept helps surface:

  • recurring themes
  • emotional shifts
  • commitments
  • symptom patterns
  • body-mind relationships worth exploring

Provider Context

Percept organizes shared context into a concise session-prep view so providers can quickly understand:

  • what changed
  • what repeated
  • what returned
  • what was committed to
  • what the patient chose to share
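
One way such a brief could be represented is sketched below; all names and values are hypothetical:

  // Hypothetical session-prep brief (illustrative only).
  const sessionBrief = {
    patientSharedOnly: true,  // built only from entries the patient marked shareable
    changed:  ['Sleep down roughly an hour per night since last session'],
    repeated: ['Sunday-evening anxiety mentioned three weeks in a row'],
    returned: ['Lower-back tightness, absent for a month'],
    commitments: [{ text: 'Daily 10-minute walk', status: 'kept 5 of 7 days' }],
  };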

Interpretation stays in human hands.

AI Intelligence Layer

Percept uses AI as a synthesis layer, not as a replacement for care.

Core capabilities include:

  • natural language symptom interpretation
  • body-region and muscle-group mapping
  • theme extraction from reflections
  • emotional pattern highlighting
  • session recap generation
  • commitment tracking
  • longitudinal context synthesis

Core principle: Percept surfaces patterns, but does not diagnose, prescribe, or make clinical decisions.
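
A minimal sketch of what a synthesis call might look like, assuming the official @anthropic-ai/sdk; the model name, system prompt, and entry shape are illustrative, not the repository's actual code:

  import Anthropic from '@anthropic-ai/sdk';

  const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

  const entries = [
    { kind: 'symptom', text: 'Neck tightness again after late-night work' },
    { kind: 'reflection', text: 'Slept badly before the Monday deadline' },
  ];

  // The system prompt enforces the core principle: patterns, not diagnosis.
  const message = await client.messages.create({
    model: 'claude-sonnet-4-5',  // model name is illustrative
    max_tokens: 512,
    system: 'Surface patterns across these entries. Do not diagnose, prescribe, or make clinical decisions.',
    messages: [{ role: 'user', content: JSON.stringify(entries) }],
  });

  console.log(message.content[0].text);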

How We Built It

Percept is a full-stack web app that combines conversational interaction, body visualization, timeline continuity, and AI-backed synthesis.

Frontend

  • React
  • Vite
  • JavaScript (ES6+)
  • CSS3

Backend

  • Node.js
  • Express
  • REST API endpoints for chat, triage, narration, and muscle mapping
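
As a sketch, one of those endpoints might look like the following; the route path and mapping helper are assumptions, not the real routes:

  import express from 'express';

  const app = express();
  app.use(express.json());

  // Placeholder for the AI-backed mapping step.
  async function mapTextToRegions(text) {
    return { input: text, regions: [], muscleGroups: [] };
  }

  // Hypothetical muscle-mapping route; the real paths may differ.
  app.post('/api/muscle-map', async (req, res) => {
    const mapping = await mapTextToRegions(req.body.text);
    res.json(mapping);
  });

  app.listen(process.env.PORT || 3001);  // PORT defaults to 3001 (see Environment Variables)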

AI + Voice

  • Claude API integration for health-context responses and structured synthesis
  • ElevenLabs API integration for optional text-to-speech playback
  • Web Speech API integration for voice capture
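
Voice capture with the Web Speech API is a standard browser pattern; a minimal version looks like this (the handler wiring is illustrative):

  // Browser-side voice capture; the webkit prefix covers Chromium-based browsers.
  const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognition = new SpeechRecognition();
  recognition.lang = 'en-US';
  recognition.interimResults = false;

  recognition.onresult = (event) => {
    const transcript = event.results[0][0].transcript;
    // Feed the transcript into the same pipeline as typed input.
    console.log('Captured reflection:', transcript);
  };

  recognition.start();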

Data + State

  • LocalStorage for persistent, in-browser timeline continuity
  • Patient-controlled visibility model (private, shared, include-in-brief)
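
A sketch of how the two could work together; the storage key and entry shape are assumptions:

  // In-browser persistence sketch (key name and shape are illustrative).
  const KEY = 'percept.timeline';

  function saveEntry(entry) {
    const timeline = JSON.parse(localStorage.getItem(KEY) || '[]');
    timeline.push(entry);
    localStorage.setItem(KEY, JSON.stringify(timeline));
  }

  // Visibility gate: only surface what the patient chose to share.
  function entriesForBrief() {
    return JSON.parse(localStorage.getItem(KEY) || '[]')
      .filter((entry) => entry.visibility === 'include-in-brief');
  }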

Reasoning Pipeline

  1. Natural language input
  2. Classification as symptom, reflection, or session context
  3. Body-region or theme mapping
  4. Structured event creation
  5. Timeline/context update
  6. Pattern synthesis
  7. User/provider-facing explanation

Physical input becomes spatial context. Journal/session input becomes temporal context. Together they form a connected view of health over time.
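
The pipeline can be read as a chain of small steps; every function below is a hypothetical stand-in, not the repository's actual module layout:

  // Stand-in steps (all hypothetical) wired in the pipeline's order.
  const classify = async (t) => (/sore|tight|pain|fatigue/i.test(t) ? 'symptom' : 'reflection');
  const mapToBodyRegions = async (t) => ({ regions: [], raw: t });
  const extractThemes = async (t) => ({ themes: [], raw: t });
  const timeline = [];

  async function processInput(text) {                       // 1. natural language input
    const kind = await classify(text);                      // 2. classification
    const context = kind === 'symptom'
      ? await mapToBodyRegions(text)                        // 3. body-region mapping...
      : await extractThemes(text);                          //    ...or theme mapping
    const event = { text, kind, context, at: Date.now() };  // 4. structured event
    timeline.push(event);                                   // 5. timeline/context update
    const related = timeline.filter((e) => e.kind === kind); // 6. pattern synthesis
    return `Seen ${related.length} related ${kind} entries so far.`; // 7. explanation
  }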

Challenges We Ran Into

  • Balancing product complexity with simple UX
  • Preventing AI overconfidence in a health context
  • Making body and timeline visuals informative, not decorative
  • Enforcing safety boundaries without weakening usefulness

We intentionally avoided unsupported medical claims and automated diagnosis behavior.

Accomplishments We’re Proud Of

  • Unified body + reflection + timeline product loop
  • Patient-controlled sharing model
  • Session prep that reduces provider context switching
  • AI synthesis that prioritizes clarity and restraint

What We Learned

Health intelligence is not about collecting more data. It is about connecting the right signals in the right way.

A single symptom or reflection can be noisy. Patterns across time are where useful insight emerges.

What’s Next

  • expand interactive body coverage
  • improve evidence-grounded explanation quality
  • strengthen theme extraction and continuity summaries
  • enhance provider session-prep tooling
  • refine privacy and consent controls
  • explore wearable and measurement-based integrations

Built With

React, Vite, JavaScript (ES6+), CSS3, Node.js, Express.js, Claude API, ElevenLabs API, Web Speech API, LocalStorage, Vercel (frontend), Render (backend), GitHub.

Project Structure

frontend/   React + Vite client app
backend/    Express API for chat, triage, narration, body mapping, and voice helpers
data/       product data assets

Run Locally

  1. Frontend

     cd frontend
     npm install
     npm run dev

  2. Backend

     cd backend
     npm install
     cp .env.example .env
     npm run dev

The frontend runs on http://localhost:5173 and calls the backend via VITE_API_URL.
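
On the client, Vite exposes VITE_-prefixed variables on import.meta.env at build time, so a call might look like this (the endpoint path is an assumption):

  // Vite injects VITE_-prefixed env vars at build time.
  const API_URL = import.meta.env.VITE_API_URL || 'http://localhost:3001';

  // Hypothetical chat call; the real endpoint paths may differ.
  const res = await fetch(`${API_URL}/api/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: 'My shoulder has been tight all week' }),
  });
  const reply = await res.json();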

Environment Variables

Backend (backend/.env):

  • GEMINI_API_KEY (required)
  • ELEVENLABS_API_KEY (optional)
  • ELEVENLABS_VOICE_ID (optional)
  • PORT (optional, defaults to 3001)

Frontend:

  • VITE_API_URL = deployed backend URL
