EverMind - Alzheimer's Care Support App

EverMind Logo

AI-Powered Emergency Grounding Support for Alzheimer's & Dementia Patients

Features · Tech Stack · Getting Started · Architecture · Usage


Overview

EverMind is a caregiver companion app designed to help families and professional caregivers support loved ones with Alzheimer's disease and dementia. The app features an AI-powered SOS voice assistant that provides emergency grounding conversations using evidence-based techniques like validation therapy, reminiscence therapy, and sensory grounding.

The Problem

Caregivers of Alzheimer's patients often face challenging moments when their loved ones experience:

  • Sudden confusion or disorientation
  • Anxiety and agitation episodes
  • Sundowning symptoms
  • Memory-related distress

In these moments, having immediate access to a calming, patient presence can make all the difference.

Our Solution

EverMind provides:

  • 🆘 SOS Voice Assistant: One-tap access to an AI-powered calming conversation using the patient's personalized profile
  • 📊 Mood & Activity Tracking: Log daily moods, conversations, and observations that inform the AI's approach
  • 👤 Patient Profiles: Store detailed information about multiple patients including triggers, comfort memories, and calming strategies
  • 🎵 Personalized Comfort: Music preferences, family photos, and voice recordings for therapeutic use

Features

🆘 Emergency SOS Voice Assistant

The core feature: a specialized AI agent prompted with Alzheimer's grounding techniques:

  • Validation Therapy: Acknowledges and validates the patient's feelings without correction
  • Reminiscence Therapy: Uses personalized memories and topics to redirect and calm
  • Sensory Grounding: Guided breathing and present-moment awareness techniques
  • Adaptive Responses: Uses recent tracking data to tailor the conversation approach
  • Natural Voice: High-quality text-to-speech with warm, calming voices
Patient: "I need to get to school, the children are waiting!"
AI: "I can hear you're worried about the children, Margaret. That shows 
     how much you care. You're safe at home right now. Tell me about 
     your favorite students - I'd love to hear about them."

📊 Care Tracking

  • Mood Logging: Track patient moods throughout the day with notes
  • Conversation Notes: Document significant interactions with tags (medical, behavior, emergency)
  • Emergency Context: Important information that gets prioritized during SOS calls
  • Mood History: View patterns over time to identify triggers and improvements
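
All tracking data is stored client-side in localStorage. Below is a minimal sketch of how a mood entry could be saved and read back; the storage key and entry shape are illustrative assumptions, and the real implementation lives in src/utils/activityTracker.js.

// Illustrative sketch only - key name and entry shape are assumptions,
// see src/utils/activityTracker.js for the real implementation.
const STORAGE_KEY = 'evermind_activity_log';

function logMood(patientId, mood, note = '', tags = []) {
  const log = JSON.parse(localStorage.getItem(STORAGE_KEY) || '[]');
  log.push({
    patientId,
    mood,              // e.g. 'calm', 'anxious', 'agitated'
    note,
    tags,              // e.g. ['medical', 'behavior', 'emergency']
    timestamp: new Date().toISOString(),
  });
  localStorage.setItem(STORAGE_KEY, JSON.stringify(log));
}

function getRecentEntries(patientId, days = 3) {
  const cutoff = Date.now() - days * 24 * 60 * 60 * 1000;
  return JSON.parse(localStorage.getItem(STORAGE_KEY) || '[]')
    .filter((e) => e.patientId === patientId && Date.parse(e.timestamp) >= cutoff);
}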

👤 Multi-Patient Management

  • Support for multiple patient profiles
  • Quick switching between patients
  • Personalized data for each:
    • Medical information (medications, allergies, doctor)
    • Emergency contacts
    • Comfort memories and calming strategies
    • Known triggers to avoid
    • Favorite music and voice recordings
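
Patient state is shared across the app through React context (src/context/PatientContext.jsx). A minimal sketch of what such a provider could look like; the exported names and fields are illustrative assumptions, not the actual implementation.

// Sketch of a patient context - names and fields are illustrative assumptions.
import { createContext, useContext, useState } from 'react';

const PatientContext = createContext(null);

export function PatientProvider({ children, initialPatients = [] }) {
  const [patients, setPatients] = useState(initialPatients);
  const [currentId, setCurrentId] = useState(initialPatients[0]?.id ?? null);

  const value = {
    patients,
    currentPatient: patients.find((p) => p.id === currentId) ?? null,
    switchPatient: setCurrentId,                        // quick switching between patients
    addPatient: (p) => setPatients((prev) => [...prev, p]),
  };

  return <PatientContext.Provider value={value}>{children}</PatientContext.Provider>;
}

export const usePatient = () => useContext(PatientContext);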

Tech Stack

Category         Technology
Frontend         React 18 + Vite 5
Routing          React Router v6
Icons            Lucide React
Voice AI         VAPI (Voice AI Platform)
AI Model         Claude Sonnet 4 (claude-sonnet-4-20250514, Anthropic)
Speech-to-Text   Deepgram Nova 2
Text-to-Speech   Azure Neural Voices
Data Storage     localStorage (client-side)
MCP Server       EverMind MCP Server (Model Context Protocol)

Getting Started

Prerequisites

  • Node.js 18+
  • npm or yarn
  • VAPI account (for voice features)

Installation

  1. Clone the repository

    git clone https://github.com/yourusername/adtreat.git
    cd adtreat
  2. Install dependencies

    npm install
  3. Configure environment variables

    cp .env.example .env.local

    Edit .env.local with your API keys:

    # Required for voice features
    VITE_VAPI_PUBLIC_KEY=your_vapi_public_key_here
    
    # Optional
    VITE_ANTHROPIC_API_KEY=your_anthropic_key
  4. Start the development server

    npm run dev
  5. Open in browser

    http://localhost:5173
    

Getting VAPI API Key

  1. Sign up at vapi.ai
  2. Go to Dashboard → API Keys
  3. Copy your Public Key
  4. Add to .env.local as VITE_VAPI_PUBLIC_KEY

Architecture

Project Structure

adtreat/
├── src/
│   ├── components/
│   │   ├── Navbar.jsx           # Navigation with SOS button
│   │   ├── SOSModal.jsx         # Emergency voice call modal
│   │   ├── PatientSwitcher.jsx  # Patient selection modal
│   │   ├── EmergencyModal.jsx   # Quick emergency contacts
│   │   └── VoiceSession.jsx     # Full voice session page
│   │
│   ├── pages/
│   │   ├── Home.jsx             # Patient overview dashboard
│   │   └── Tracking.jsx         # Mood & notes logging
│   │
│   ├── context/
│   │   └── PatientContext.jsx   # Global patient state management
│   │
│   ├── utils/
│   │   ├── vapiClient.js        # VAPI voice call integration
│   │   ├── promptGenerator.js   # AI system prompt builder
│   │   ├── activityTracker.js   # Mood/notes storage
│   │   ├── mcpClient.js         # MCP server client
│   │   └── profiles.js          # Demo patient profiles
│   │
│   └── styles/
│       ├── index.css            # Core styles
│       ├── navigation.css       # Nav components
│       └── tracking.css         # Tracking page
│
├── mcp-server/                  # Optional MCP server
│   ├── src/
│   │   ├── index.js             # Server entry point
│   │   └── profiles.js          # Patient profile data
│   └── package.json
│
└── public/
    └── assets/                  # Logos and images

Data Flow

┌─────────────────────────────────────────────────────────────────┐
│                        EverMind App                              │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐       │
│  │  Tracking    │───▶│  Activity    │───▶│  EverMind    │       │
│  │    Page      │    │   Tracker    │    │  MCP Server  │       │
│  └──────────────┘    └──────────────┘    └──────────────┘       │
│         │                   │                    │               │
│         ▼                   ▼                    ▼               │
│  ┌────────────────────────────────────────────────────────┐     │
│  │              EverMind MCP Server                        │     │
│  │  • Patient profiles & medical info                      │     │
│  │  • Mood logs & conversation notes                       │     │
│  │  • Emergency context & tracking data                    │     │
│  │  • Provides structured context to AI                    │     │
│  └────────────────────────────────────────────────────────┘     │
│                            │                                     │
│                            ▼                                     │
│  ┌────────────────────────────────────────────────────────┐     │
│  │                   SOS Modal                             │     │
│  │                                                         │     │
│  │  1. Fetches patient data from MCP Server                │     │
│  │  2. Pulls tracking data (moods, notes) from MCP         │     │
│  │  3. Generates personalized system prompt                │     │
│  │  4. Initiates VAPI voice call with full context         │     │
│  └────────────────────────────────────────────────────────┘     │
│                            │                                     │
└────────────────────────────┼─────────────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────────────┐
│                         VAPI Cloud                               │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐       │
│  │   Deepgram   │    │    Claude    │    │    Azure     │       │
│  │  Nova 2 STT  │───▶│   Sonnet 4   │───▶│  Neural TTS  │       │
│  └──────────────┘    └──────────────┘    └──────────────┘       │
│                                                                  │
│  Speech Recognition → AI Response → Natural Voice               │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘
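
For illustration, the SOS call could be started from the browser with the @vapi-ai/web SDK roughly as below. The inline assistant config fields are assumptions based on the stack above; the actual integration lives in src/utils/vapiClient.js.

// Illustrative sketch - assistant config fields are assumptions,
// the actual call setup lives in src/utils/vapiClient.js.
import Vapi from '@vapi-ai/web';

const vapi = new Vapi(import.meta.env.VITE_VAPI_PUBLIC_KEY);

async function startGroundingCall(systemPrompt) {
  await vapi.start({
    model: {
      provider: 'anthropic',
      model: 'claude-sonnet-4-20250514',
      messages: [{ role: 'system', content: systemPrompt }],
    },
    transcriber: { provider: 'deepgram', model: 'nova-2' },
    voice: { provider: 'azure', voiceId: 'en-US-JennyNeural' },
  });
}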

AI Prompt System

The system prompt is dynamically generated with:

  1. Patient Profile Data

    • Name, age, diagnosis stage
    • Core identity and background
    • Safe places and comfort memories
    • Known triggers to avoid
    • Calming topics and strategies
  2. Tracking Context

    • Recent mood patterns (last 3 days)
    • Today's caregiver notes
    • Emergency-tagged observations
  3. Grounding Techniques

    • Validation therapy approaches
    • Reminiscence conversation starters
    • Sensory grounding scripts
    • Reassurance phrases
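
A simplified sketch of how these three layers could be combined into one system prompt; field names are illustrative assumptions, and src/utils/promptGenerator.js is the real builder.

// Illustrative sketch - profile/tracking field names are assumptions,
// see src/utils/promptGenerator.js for the real prompt builder.
function buildSystemPrompt(profile, tracking) {
  return [
    `You are a calm, patient grounding companion for ${profile.name}, ` +
      `age ${profile.age}, living with ${profile.diagnosisStage} Alzheimer's.`,
    `Comfort memories: ${profile.comfortMemories.join('; ')}.`,
    `Known triggers to avoid: ${profile.triggers.join('; ')}.`,
    `Recent moods (last 3 days): ${tracking.recentMoods.join(', ')}.`,
    `Today's caregiver notes: ${tracking.todayNotes.join('; ')}.`,
    'Use validation therapy: acknowledge feelings, never correct or argue.',
    'Use reminiscence: redirect gently toward the comfort memories above.',
    'Offer sensory grounding: slow breathing and present-moment awareness.',
  ].join('\n');
}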

Usage

For Caregivers

  1. Select Patient: Tap the avatar in the top-right to switch patients
  2. Daily Tracking: Use the Tracking page to log moods and notes
  3. Emergency SOS: Tap the "Help" button anytime for AI grounding support

SOS Voice Call Flow

  1. Caregiver taps Help button
  2. Modal opens with patient info displayed
  3. Tap Start Grounding Call
  4. AI initiates calming conversation using patient's profile
  5. Real-time transcripts show AI responses
  6. End call when patient is calm
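
The real-time transcripts in step 5 can be surfaced by subscribing to call events. A sketch assuming the @vapi-ai/web event names; the actual UI wiring in SOSModal.jsx may differ.

// Sketch - event names follow the @vapi-ai/web SDK as an assumption;
// replace the console calls with real UI state updates in SOSModal.jsx.
import Vapi from '@vapi-ai/web';

const vapi = new Vapi(import.meta.env.VITE_VAPI_PUBLIC_KEY);

vapi.on('call-start', () => console.log('Grounding call started'));
vapi.on('message', (msg) => {
  if (msg.type === 'transcript') {
    console.log(`${msg.role}: ${msg.transcript}`);   // live transcript line
  }
});
vapi.on('call-end', () => console.log('Call ended'));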

Tracking Tips

  • Log moods at different times of day to identify patterns
  • Use the Emergency tag for critical observations
  • Add notes after difficult episodes to inform future AI conversations
  • The AI uses recent tracking data to adapt its approach

Configuration

Environment Variables

Variable                 Required   Description
VITE_VAPI_PUBLIC_KEY     Yes        VAPI public key for voice calls
VITE_ANTHROPIC_API_KEY   No         Direct Anthropic API access
VITE_MCP_SERVER_URL      No         MCP server URL if using an external server
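
Vite exposes VITE_-prefixed variables to client code at build time, so they can be read directly:

// Vite injects VITE_-prefixed variables into import.meta.env at build time.
const vapiPublicKey = import.meta.env.VITE_VAPI_PUBLIC_KEY;
const mcpServerUrl = import.meta.env.VITE_MCP_SERVER_URL ?? 'http://localhost:3001'; // fallback URL is an assumption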

Voice Configuration

The app uses Azure Neural Voices by default:

  • Female: en-US-JennyNeural (warm, caring tone)
  • Male: en-US-GuyNeural (calm, reassuring tone)

Voice selection is based on patient profile data.
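
A sketch of how that selection could work; the preferredVoice field name is an illustrative assumption.

// Sketch - the profile field name is an illustrative assumption.
function pickVoice(profile) {
  return profile.preferredVoice === 'male'
    ? 'en-US-GuyNeural'    // calm, reassuring tone
    : 'en-US-JennyNeural'; // warm, caring tone (default)
}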


Development

Available Scripts

# Start development server
npm run dev

# Build for production
npm run build

# Preview production build
npm run preview

EverMind MCP Server

The EverMind MCP Server (Model Context Protocol) is a core component that serves as the data hub between the app and the AI. All tracking information (moods, notes, emergency context) flows through the MCP server before being sent to the AI, ensuring the voice assistant has complete, structured context.

cd mcp-server
npm install
npm start

The MCP Server provides:

  • Structured patient profile data
  • Real-time tracking information
  • Emergency context prioritization
  • Consistent data format for AI consumption
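
A rough sketch of how the app-side client (src/utils/mcpClient.js) might request that context; the endpoint path and response shape here are purely illustrative assumptions, not the actual protocol.

// Purely illustrative - endpoint path and response shape are assumptions,
// see src/utils/mcpClient.js and mcp-server/src/index.js for the real protocol.
const MCP_URL = import.meta.env.VITE_MCP_SERVER_URL ?? 'http://localhost:3001';

export async function fetchPatientContext(patientId) {
  const res = await fetch(`${MCP_URL}/patients/${patientId}/context`);
  if (!res.ok) throw new Error(`MCP server error: ${res.status}`);
  return res.json(); // profile, mood logs, notes, emergency context
}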

Patient Profiles

Adding Your Own Patients

You can add any patient information through the app interface. Use the patient switcher (top-right avatar) to create new patient profiles with:

  • Personal information (name, age, diagnosis stage)
  • Medical details (medications, allergies, doctor info)
  • Emergency contacts
  • Comfort memories and calming strategies
  • Known triggers to avoid
  • Favorite music and voice recordings
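
These fields map roughly to a profile object like the sketch below; field names are illustrative assumptions, and src/utils/profiles.js holds the demo data actually shipped with the app.

// Illustrative profile shape - field names are assumptions,
// see src/utils/profiles.js for the demo data used by the app.
const examplePatient = {
  id: 'margaret-thompson',
  name: 'Margaret Thompson',
  nickname: 'Maggie',
  age: 78,
  diagnosisStage: 'early',
  medical: { medications: [], allergies: [], doctor: '' },
  emergencyContacts: [{ name: '', relation: '', phone: '' }],
  comfortMemories: ['Teaching elementary school', '1960s music'],
  calmingStrategies: ['Play favorite songs', 'Talk about her students'],
  triggers: ['Loud noises', 'Being rushed'],
  favoriteMusic: ['1960s pop'],
  voiceRecordings: [],
};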

Default Patients

The app comes with three pre-configured patients to help you get started:

  1. Margaret Thompson (Maggie) - 78, Early-Stage Alzheimer's

    • Former elementary school teacher, loves 1960s music
    • Triggers: loud noises, being rushed
  2. William O'Connor (Bill) - 81, Moderate Alzheimer's

    • Former firefighter, Boston Red Sox fan
    • Triggers: evening time (sundowning), crowds
  3. Dorothy Mae Johnson (Dot) - 84, Moderate-Severe Alzheimer's

    • Church choir singer for 50 years, loves gospel music
    • Triggers: being alone, darkness

License

This project is licensed under the MIT License - see the LICENSE file for details.

EverMind - Because every moment of connection matters.

Made with love for caregivers and their loved ones
