alal29/lock-backin
LockBackIn

**An AI-track hackathon app by Team LockBackIn.** LockBackIn is a privacy-first lecture engagement platform for students: track your focus with local (and optionally cloud) AI, capture live transcripts, and catch up on missed content with AI-generated summaries. All data stays local by default; cloud integrations are opt-in.

Features

Core

  • Live Transcript Capture - Web Speech API with fallbacks: browser speech recognition, optional ElevenLabs STT, or a demo sample transcript.
  • Local-First Focus Detection - Heuristic analysis of camera frames (luminance, motion, page visibility, activity). No frames are uploaded.
  • Missed Segment Timeline - Automatically tracks unfocused periods (configurable 6-15 s threshold; shorter in demo mode).
  • Catch-Up Cards - AI-generated summaries with:
    • 3 key bullet points
    • 6 key terms
    • Self-assessment question + suggested answer
  • Mindfulness Reset - 60-second breathing flow before revealing catch-up content.
  • Demo Mode - Reduced auto-unfocus threshold (6s) for reliable demo experience.
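
The local focus heuristic can be sketched roughly as a weighted blend of the signals listed above. This is an illustrative simplification, not the actual `useCameraFocus` implementation; the signal names, thresholds, and weights are assumptions:

```typescript
// Hypothetical sketch of a local focus heuristic in the spirit of
// hooks/useCameraFocus.ts. Weights and thresholds are illustrative.
interface FocusSignals {
  luminanceDelta: number;   // 0..1, frame-to-frame brightness change
  motion: number;           // 0..1, fraction of pixels that changed
  pageVisible: boolean;     // document.visibilityState === "visible"
  msSinceActivity: number;  // time since last key/mouse event
}

function computeFocusScore(s: FocusSignals): number {
  if (!s.pageVisible) return 0; // a hidden tab counts as unfocused
  // Moderate motion suggests presence; stillness or extreme motion less so.
  const motionScore = s.motion > 0.02 && s.motion < 0.5 ? 1 : 0.4;
  // Large luminance swings (lights off, camera covered) reduce confidence.
  const luminanceScore = s.luminanceDelta < 0.3 ? 1 : 0.5;
  // Recent input activity is a strong focus signal; decays over 30 s.
  const activityScore = Math.max(0, 1 - s.msSinceActivity / 30_000);
  return 0.4 * motionScore + 0.2 * luminanceScore + 0.4 * activityScore;
}
```

Because everything here is computed from numbers derived in the browser, no frame data ever needs to leave the device.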

Accessibility

  • Keyboard shortcuts: M (manual mark), C (camera toggle), T (transcription toggle)
  • ARIA labels and high-contrast mode
  • Responsive design

Privacy

  • Local-only by default: All data (transcripts, focus scores, segments, summaries) stored in browser localStorage.
  • Opt-in cloud exports: Send anonymized events to Databricks or download JSON locally.
  • No video upload: Only anonymized metrics are sent (with explicit user consent).

UI Navigation

The app uses a 3-mode interface with a top navigation bar:

🎯 Focus Mode

Your primary view while studying. Shows:

  • Live camera feed with focus score
  • Session controls (Start/Stop Camera, Start/Stop Transcribe)
  • Status dashboard (focus status, missed segments count, unfocused time)
  • Live transcript ticker (shows what's being transcribed)
  • Keyboard shortcuts: C to toggle camera, T to toggle transcription

📝 Catch-Up Mode

Review and summarize missed content. Shows:

  • Transcript panel (left) with search, copy, and sample data load
  • Missed segments panel (right) with instant summary generation
  • Sticky demo tools drawer at top (demo mode, provider selection, test segment generation)
  • All existing transcript and segment controls

⚙️ Settings

Configure the app. Includes:

  • Accessibility settings (Silent mode, Private mode, High contrast)
  • Voice Nudge configuration (custom recorded nudge clips, threshold, cooldown, volume)
  • Integration health checks (verify Gemini, ElevenLabs, Databricks, Presage)
  • Analytics & data export
  • Keyboard shortcuts reference

Keyboard Shortcuts

Press these keys anytime (except when typing):

  • M: Toggle manual mark (marks current time as start/end of missed segment)
  • C: Toggle camera on/off
  • T: Toggle transcription on/off
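
The "except when typing" behavior is the standard pattern of ignoring keystrokes whose target is an editable element. A minimal sketch (the handler names in the commented wiring are assumptions, not the exact `app/page.tsx` code):

```typescript
// Illustrative global-shortcut dispatcher.
type Shortcut = "M" | "C" | "T";

// Returns the shortcut to run, or null when the key should be ignored
// (e.g. while typing in an input, textarea, or contenteditable element).
function resolveShortcut(key: string, targetTag: string, editable: boolean): Shortcut | null {
  if (editable) return null;
  const tag = targetTag.toUpperCase();
  if (tag === "INPUT" || tag === "TEXTAREA" || tag === "SELECT") return null;
  const k = key.toUpperCase();
  return k === "M" || k === "C" || k === "T" ? (k as Shortcut) : null;
}

// Browser wiring (sketch; toggle* handler names are hypothetical):
// window.addEventListener("keydown", (e) => {
//   const el = e.target as HTMLElement;
//   const s = resolveShortcut(e.key, el.tagName, el.isContentEditable);
//   if (s === "M") toggleManualMark();
//   if (s === "C") toggleCamera();
//   if (s === "T") toggleTranscription();
// });
```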

Quick Start

Installation

```bash
cd lock-backin
npm install
npm run dev
```

Open http://localhost:3000 in Chrome (best Web Speech API support).

Demo Run (No API Keys Required)

  1. Click "Load sample transcript" in Catch-Up mode (transcript panel)
  2. Switch to Focus mode and click "Start Camera"
  3. Wait ~10 seconds unfocused (or go to Catch-Up > Demo Tools, enable Demo Mode for 6s threshold)
  4. Switch to Catch-Up mode and use "Gen Demo Segment" in Demo Tools to create a test missed segment
  5. Select the segment and click "Generate Summary"

The app will generate a summary using the local fallback summarizer (no API keys or network access required).
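
The local fallback summarizer can be imagined as a small deterministic extractor. A sketch under the assumption that it picks leading sentences and frequent terms; the real `lib` code may differ:

```typescript
// Hypothetical deterministic summarizer producing the catch-up card shape.
interface CatchUpSummary {
  bullets: string[];       // 3 key points
  keyTerms: string[];      // up to 6 terms
  checkQuestion: string;   // self-assessment prompt
}

const STOPWORDS = new Set([
  "the", "a", "an", "and", "or", "of", "to", "in", "is", "that",
  "this", "it", "for", "on", "with", "as", "are", "was",
]);

function localSummarize(transcript: string): CatchUpSummary {
  const sentences = transcript.split(/(?<=[.!?])\s+/).filter((s) => s.trim().length > 0);
  const bullets = sentences.slice(0, 3).map((s) => s.trim());
  // Rank terms by frequency, skipping stopwords and short tokens.
  const counts = new Map<string, number>();
  for (const w of transcript.toLowerCase().match(/[a-z]{4,}/g) ?? []) {
    if (!STOPWORDS.has(w)) counts.set(w, (counts.get(w) ?? 0) + 1);
  }
  const keyTerms = [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 6)
    .map(([w]) => w);
  const checkQuestion = keyTerms.length > 0
    ? `In one sentence, explain how "${keyTerms[0]}" was used in this segment.`
    : "Summarize this segment in one sentence.";
  return { bullets, keyTerms, checkQuestion };
}
```

An extractive approach like this is deterministic, so demo runs are repeatable even without any AI provider configured.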


Sponsor Integrations

All integrations are optional and gracefully degrade if keys are not set. The app works fully offline.

1. Google Gemini (MLH AI/ML Sponsor)

Setup:

  1. Get API key from Google AI Studio
  2. Add to .env.local:
    GEMINI_API_KEY=your_key_here
    
  3. Restart dev server
  4. In Catch-Up mode, open the "Demo Tools" drawer and select "Gemini AI" from the provider dropdown

Behavior:

  • Uses /api/summarize to call Gemini for instant catch-up summaries
  • Falls back to local summarizer if Gemini fails or key is missing
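
The fallback chain in `/api/summarize` amounts to a try-in-order wrapper. A sketch (provider names and the exact route code are assumptions; only the pattern is shown):

```typescript
// Try each configured remote provider in order; the local summarizer
// never fails, so the chain always returns a result.
type Summary = { bullets: string[]; keyTerms: string[]; checkQuestion: string };
type Provider = (transcript: string) => Promise<Summary>;

async function summarizeWithFallback(
  transcript: string,
  providers: Provider[],
  local: (t: string) => Summary,
): Promise<{ summary: Summary; provider: string }> {
  for (const [i, p] of providers.entries()) {
    try {
      return { summary: await p(transcript), provider: `remote-${i}` };
    } catch {
      // Remote failure (missing key, network error, bad JSON): fall through.
    }
  }
  return { summary: local(transcript), provider: "local" };
}
```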

2. ElevenLabs STT (MLH API Sponsor)

Setup:

  1. Get API key from ElevenLabs
  2. Add to .env.local:
    ELEVENLABS_API_KEY=your_key_here
    ELEVENLABS_MODEL_ID=optional_model_id
    
  3. Restart dev server

Usage:

  • Check Integration Health in Settings mode
  • Browser records audio chunks and sends to /api/transcribe
  • Falls back to Web Speech API if ElevenLabs fails
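
The chunked upload path can be sketched with a small batching helper around `MediaRecorder`. The endpoint shape and form field names are assumptions; the browser wiring is commented out so only the pure batching logic runs outside a browser:

```typescript
// Batches recorded audio chunks and flushes them to a transcription
// endpoint once enough audio has accumulated.
class ChunkUploader {
  private chunks: Blob[] = [];
  private bytes = 0;

  constructor(
    private readonly flushBytes: number,
    private readonly upload: (audio: Blob) => Promise<void>,
  ) {}

  // Returns true when adding this chunk triggered a flush.
  async add(chunk: Blob): Promise<boolean> {
    this.chunks.push(chunk);
    this.bytes += chunk.size;
    if (this.bytes >= this.flushBytes) {
      await this.flush();
      return true;
    }
    return false;
  }

  async flush(): Promise<void> {
    if (this.chunks.length === 0) return;
    const batch = new Blob(this.chunks, { type: "audio/webm" });
    this.chunks = [];
    this.bytes = 0;
    await this.upload(batch);
  }
}

// Browser wiring (sketch; field name "audio" is hypothetical):
// const uploader = new ChunkUploader(64_000, async (audio) => {
//   const form = new FormData();
//   form.append("audio", audio);
//   await fetch("/api/transcribe", { method: "POST", body: form });
// });
// const rec = new MediaRecorder(stream, { mimeType: "audio/webm" });
// rec.ondataavailable = (e) => void uploader.add(e.data);
// rec.start(2000); // emit a chunk every 2 s
```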

3. Presage AI - Computer Vision (MLH Computer Vision Sponsor)

Setup:

  1. Get API key from Presage or your vendor
  2. Add to .env.local:
    PRESAGE_API_KEY=your_key_here
    
  3. Presage is integrated as an adapter layer in lib/presage.ts

Behavior:

  • Currently returns null (graceful fallback)
  • Structured code ready to add real Presage SDK
  • When enabled, would augment local focus score (50/50 blend) with vision-based attention metrics
  • Does NOT upload raw frames by default

To integrate a real Presage SDK:

  1. Install: npm install @presage-ai/sdk
  2. Update lib/presage.ts:
    • Call PresageClient.analyzeFrame(imageData) in getPresageFocusScore()
    • Return attention/engagement score
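
The 50/50 blend described above amounts to a one-line combiner, where the stub's `null` return leaves the local score untouched:

```typescript
// Blend a locally computed focus score with an optional vision-based
// score. A null remote score (the current stub behavior of
// lib/presage.ts) is the graceful-fallback path.
function blendFocusScores(local: number, presage: number | null): number {
  if (presage === null) return local;   // graceful fallback
  return 0.5 * local + 0.5 * presage;   // 50/50 blend when available
}
```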

4. Databricks Analytics (MLH Data & Analytics Sponsor)

Setup:

  1. Create a Databricks workspace and SQL endpoint
  2. Add to .env.local:
    DATABRICKS_HOST=https://your-instance.cloud.databricks.com
    DATABRICKS_TOKEN=your_token
    DATABRICKS_TABLE=analytics_events
    DATABRICKS_WAREHOUSE_ID=your_warehouse_id
    
  3. Restart dev server

Usage:

  • Analytics Panel (bottom-right) shows event count
  • Click "Download JSON" to export events locally
  • Click "Send to Databricks" to ingest to SQL warehouse
  • Events logged: focus_sample, missed_segment_created, catchup_generated, reset_completed, camera/transcription start/stop

Events Schema (sent to Databricks):

```json
{
  "event_type": "catchup_generated",
  "event_timestamp": "2025-02-28T10:30:00Z",
  "event_payload": "{\"segmentId\": \"...\", \"provider\": \"gemini\"}",
  "ingest_timestamp": "2025-02-28T10:31:00Z"
}
```
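
An event conforming to this schema can be built like so. Note that `event_payload` is a stringified JSON blob, not a nested object, exactly as in the example above (the builder function itself is a sketch, not the app's `lib/analytics.ts` code):

```typescript
// Build an analytics event matching the documented Databricks schema.
interface AnalyticsEvent {
  event_type: string;
  event_timestamp: string;   // ISO 8601, when the event occurred
  event_payload: string;     // stringified JSON details
  ingest_timestamp: string;  // ISO 8601, when it was sent
}

function buildEvent(
  type: string,
  payload: Record<string, unknown>,
  occurredAt: Date = new Date(),
): AnalyticsEvent {
  return {
    event_type: type,
    event_timestamp: occurredAt.toISOString(),
    event_payload: JSON.stringify(payload), // payload travels as a string
    ingest_timestamp: new Date().toISOString(),
  };
}
```

Keeping the payload as a string lets the warehouse table use a fixed four-column schema regardless of event type.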

5. FeatherlessAI - Open Model Inference (Stevens Alumni Sponsor)

Setup:

  1. Get API key from FeatherlessAI
  2. Add to .env.local:
    FEATHERLESS_API_KEY=your_key_here
    FEATHERLESS_BASE_URL=https://api.featherless.ai
    FEATHERLESS_MODEL=mistral-7b  # or your preferred model
    
  3. Restart dev server
  4. Select "Summarizer: Featherless" from dropdown

Behavior:

  • Routes summarization requests to Featherless /v1/completions endpoint
  • Expects JSON response with bullets, keyTerms, checkQuestion
  • Falls back to local summarizer if API fails or key is missing
  • Supports any open-source model (Mistral, Llama, etc.)
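
Since a completions endpoint returns free-form text that is merely expected to contain JSON, a defensive parse-and-validate step is the natural pattern here. A sketch (the field names match the expected shape above; the extraction strategy is an assumption):

```typescript
// Extract and validate the expected JSON object from a model completion.
// Returns null (triggering the local fallback) on any shape mismatch.
interface Summary { bullets: string[]; keyTerms: string[]; checkQuestion: string }

function parseSummaryCompletion(text: string): Summary | null {
  const start = text.indexOf("{");
  const end = text.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  try {
    const obj = JSON.parse(text.slice(start, end + 1));
    if (
      Array.isArray(obj.bullets) && obj.bullets.every((b: unknown) => typeof b === "string") &&
      Array.isArray(obj.keyTerms) && obj.keyTerms.every((t: unknown) => typeof t === "string") &&
      typeof obj.checkQuestion === "string"
    ) {
      return { bullets: obj.bullets, keyTerms: obj.keyTerms, checkQuestion: obj.checkQuestion };
    }
  } catch {
    // Malformed JSON: fall through to null.
  }
  return null;
}
```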

Environment Variables

See .env.example for a complete template. Copy to .env.local:

```bash
cp .env.example .env.local
```

| Variable | Sponsor | Purpose | Required |
| --- | --- | --- | --- |
| `GEMINI_API_KEY` | Google Gemini | Summarization | ❌ No |
| `ELEVENLABS_API_KEY` | ElevenLabs | Speech-to-Text | ❌ No |
| `PRESAGE_API_KEY` | Presage AI | Focus Detection | ❌ No |
| `DATABRICKS_*` | Databricks | Analytics Export | ❌ No |
| `FEATHERLESS_API_KEY` | FeatherlessAI | Summarization | ❌ No |

All variables are optional. The app provides deterministic fallbacks for every integration.


Architecture & Key Files

| File | Purpose |
| --- | --- |
| `app/page.tsx` | Main UI, state management, keyboard shortcuts, demo mode |
| `app/api/summarize/route.ts` | Multi-provider summarization (Gemini, Featherless, local fallback) |
| `app/api/transcribe/route.ts` | ElevenLabs STT transcription endpoint |
| `app/api/databricks/ingest/route.ts` | Analytics event ingestion to Databricks |
| `hooks/useCameraFocus.ts` | Local focus detection (luminance, motion, activity, page visibility) |
| `hooks/useSpeechRecognition.ts` | Web Speech API transcript capture |
| `lib/analytics.ts` | Event logging and export (localStorage-based) |
| `lib/presage.ts` | Presage AI adapter (stub, ready to integrate) |
| `lib/featherless.ts` | FeatherlessAI inference client |
| `components/AnalyticsPanel.tsx` | Event dashboard and export UI |

Known Issues & Fixes

Hydration Error (Fixed)

  • Issue: Date.now() in initial state causes SSR/client mismatch
  • Fix: Initialize state to 0, set real time in useEffect after mount
  • Code: page.tsx uses deterministic initial state

Demo Transcript Timing

  • Sample transcript now anchors to sessionElapsedMs (session start time)
  • Demo missed segments generated with overlapping time windows
  • Ensures reliable summarization during judging

Scripts

```bash
npm run dev      # Start dev server (http://localhost:3000)
npm run build    # Production build
npm run start    # Run production build
npm run lint     # ESLint check
```

Privacy Statement

What Stays Local

✅ Camera input (frames analyzed in-browser, never uploaded)
✅ Transcript segments (captured via the Web Speech API)
✅ Focus scores and missed segments
✅ Summaries and catch-up cards
✅ Analytics events (stored in localStorage)

What Can Be Sent (Opt-In)

🔄 Transcript text → Gemini / FeatherlessAI API (for summarization)
🔄 Audio chunks → ElevenLabs (for transcription)
🔄 Anonymized metrics → Databricks (for analytics)
🔄 Local storage JSON → User device (download only)

No Video Upload

❌ Raw camera frames are never uploaded
❌ No imagery leaves the browser; only frame-level metrics (luminance, motion) are computed, locally
❌ The Presage integration (when available) augments local heuristics only


Browser Compatibility

| Feature | Chrome | Firefox | Safari | Edge |
| --- | --- | --- | --- | --- |
| Web Speech API | ✅ | ❌ | ⚠️ | ✅ |
| Camera (getUserMedia) | ✅ | ✅ | ✅ | ✅ |
| localStorage | ✅ | ✅ | ✅ | ✅ |
| Clipboard API | ✅ | ✅ | ⚠️ | ✅ |

Recommended: Chrome/Chromium for best Web Speech API support.


Keyboard Shortcuts

| Key | Action |
| --- | --- |
| M | Toggle manual missed segment mark |
| C | Toggle camera on/off |
| T | Toggle transcription on/off |

Development Notes

Sponsor Integration Checklist

  • Gemini API integration (with JSON validation & fallback)
  • ElevenLabs STT endpoint (/api/transcribe)
  • Presage adapter layer (stub ready for SDK)
  • Databricks ingestion (/api/databricks/ingest)
  • FeatherlessAI summarization client
  • Analytics event logger (localStorage + export)
  • Demo mode reducer (6s unfocus threshold)
  • Generated demo missed segments
  • Keyboard shortcuts (M, C, T)
  • Analytics panel with export UI

Future Enhancements

  • Real Presage SDK integration
  • WebRTC audio streaming to ElevenLabs (instead of chunked)
  • Databricks Delta table append (vs. SQL insert)
  • Custom model selection UI
  • Persistent session across tabs

Contributing

Submit issues or PRs to improve demo reliability, add new integrations, or enhance accessibility.


License

MIT (if applicable)


Built for Stevens Hackathon 2025. Shout-out to MLH and sponsor companies: Google (Gemini), ElevenLabs, Presage AI, Databricks, and FeatherlessAI.

