**An AI-track hackathon app:** Team LockBackIn is a privacy-first lecture engagement platform for students. Track your focus with local and optional cloud AI, capture transcripts, and catch up on missed content with AI-generated summaries. All data stays local by default; cloud integrations are opt-in.
- Live Transcript Capture - Web Speech API with fallback. Supports browser, ElevenLabs STT (optional), or demo sample transcript.
- Local-First Focus Detection - Heuristic analysis of camera frames (luminance, motion, page visibility, activity). No frames are uploaded.
- Missed Segment Timeline - Automatically track unfocused periods (configurable 6-15s threshold, shorter in demo mode).
- Catch-Up Cards - AI-generated summaries with:
- 3 key bullet points
- 6 key terms
- Self-assessment question + suggested answer
- Mindfulness Reset - 60-second breathing flow before revealing catch-up content.
- Demo Mode - Reduced auto-unfocus threshold (6s) for reliable demo experience.
- Keyboard shortcuts: M (manual mark), C (camera toggle), T (transcription toggle)
- ARIA labels and high-contrast mode
- Responsive design
- Local-only by default: All data (transcripts, focus scores, segments, summaries) stored in browser localStorage.
- Opt-in cloud exports: Send anonymized events to Databricks or download JSON locally.
- No video upload: Only anonymized metrics are sent (with explicit user consent).
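The missed-segment tracking described above can be sketched as a pure function: samples below a focus cutoff accumulate into a run, and a run longer than the configured threshold becomes a missed segment. The names, the sample shape, and the `0.5` cutoff here are assumptions for illustration; the real logic lives in `app/page.tsx` and `hooks/useCameraFocus.ts`.

```typescript
interface FocusSample {
  timestampMs: number; // ms since session start
  score: number;       // 0..1 from the local heuristic
}

interface MissedSegment {
  startMs: number;
  endMs: number;
}

const FOCUS_CUTOFF = 0.5; // assumed cutoff, not taken from the source

export function detectMissedSegments(
  samples: FocusSample[],
  thresholdMs: number, // 6000 in demo mode, up to 15000 otherwise
): MissedSegment[] {
  const segments: MissedSegment[] = [];
  let runStart: number | null = null;

  for (const s of samples) {
    if (s.score < FOCUS_CUTOFF) {
      if (runStart === null) runStart = s.timestampMs; // start an unfocused run
    } else if (runStart !== null) {
      // Run ended: keep it only if it exceeded the threshold.
      if (s.timestampMs - runStart >= thresholdMs) {
        segments.push({ startMs: runStart, endMs: s.timestampMs });
      }
      runStart = null;
    }
  }
  // Close a run that extends to the last sample.
  if (runStart !== null && samples.length > 0) {
    const endMs = samples[samples.length - 1].timestampMs;
    if (endMs - runStart >= thresholdMs) {
      segments.push({ startMs: runStart, endMs });
    }
  }
  return segments;
}
```

Keeping this pure makes the 6 s demo threshold trivial to exercise in tests without a camera.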
The app uses a 3-mode interface with a top navigation bar:
**Focus Mode**: your primary view while studying. Shows:
- Live camera feed with focus score
- Session controls (Start/Stop Camera, Start/Stop Transcribe)
- Status dashboard (focus status, missed segments count, unfocused time)
- Live transcript ticker (shows what's being transcribed)
- Keyboard shortcuts: C to toggle camera, T to toggle transcription
**Catch-Up Mode**: review and summarize missed content. Shows:
- Transcript panel (left) with search, copy, and sample data load
- Missed segments panel (right) with instant summary generation
- Sticky demo tools drawer at top (demo mode, provider selection, test segment generation)
- All existing transcript and segment controls
**Settings Mode**: configure the app. Includes:
- Accessibility settings (Silent mode, Private mode, High contrast)
- Voice Nudge configuration (custom recorded nudge clips, threshold, cooldown, volume)
- Integration health checks (verify Gemini, ElevenLabs, Databricks, Presage)
- Analytics & data export
- Keyboard shortcuts reference
Press these keys anytime (except when typing):
- M: Toggle manual mark (marks current time as start/end of missed segment)
- C: Toggle camera on/off
- T: Toggle transcription on/off
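The shortcut behavior above (active anytime except while typing) can be sketched as a small dispatch function. Names are illustrative; the real handler lives in `app/page.tsx`.

```typescript
type ShortcutAction = "toggle-mark" | "toggle-camera" | "toggle-transcription";

// Map a keypress to an action, suppressing shortcuts while the user types.
export function resolveShortcut(
  key: string,
  isTyping: boolean, // e.g. document.activeElement is an input/textarea
): ShortcutAction | null {
  if (isTyping) return null;
  switch (key.toUpperCase()) {
    case "M": return "toggle-mark";
    case "C": return "toggle-camera";
    case "T": return "toggle-transcription";
    default:  return null;
  }
}

// Browser wiring (sketch):
// window.addEventListener("keydown", (e) => {
//   const tag = (e.target as HTMLElement).tagName;
//   const action = resolveShortcut(e.key, tag === "INPUT" || tag === "TEXTAREA");
//   if (action) dispatch(action);
// });
```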
```
cd lock-backin
npm install
npm run dev
```

Open http://localhost:3000 in Chrome (best Web Speech API support).
- Click "Load sample transcript" in Catch-Up mode (transcript panel)
- Switch to Focus mode and click "Start Camera"
- Wait ~10 seconds unfocused (or go to Catch-Up > Demo Tools, enable Demo Mode for 6s threshold)
- Switch to Catch-Up mode and use "Gen Demo Segment" in Demo Tools to create a test missed segment
- Select the segment and click "Generate Summary"
The app will generate a summary using the local fallback summarizer (no external API required).
All integrations are optional and gracefully degrade if keys are not set. The app works fully offline.
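This graceful-degradation pattern can be sketched generically: try each configured provider in order, and fall through to the deterministic local summarizer on any failure. The helper name and signature are assumptions; the actual routing is in `app/api/summarize/route.ts`.

```typescript
export interface Summary {
  bullets: string[];
  keyTerms: string[];
  checkQuestion: string;
}

type Provider = (transcript: string) => Promise<Summary>;

// Try cloud providers in order; any throw (missing key, network error,
// malformed response) falls through to the next one, and finally to the
// local fallback, so the app never hard-fails offline.
export async function summarizeWithFallback(
  transcript: string,
  providers: Provider[], // e.g. [geminiProvider, featherlessProvider]
  localFallback: (t: string) => Summary,
): Promise<Summary> {
  for (const provider of providers) {
    try {
      return await provider(transcript);
    } catch {
      // Swallow the error and try the next provider.
    }
  }
  return localFallback(transcript);
}
```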
Setup:
- Get an API key from Google AI Studio
- Add to `.env.local`: `GEMINI_API_KEY=your_key_here`
- Restart the dev server
- In Catch-Up mode, open the "Demo Tools" drawer and select "Gemini AI" from the provider dropdown

Behavior:
- Uses `/api/summarize` to call Gemini for instant catch-up summaries
- Falls back to the local summarizer if Gemini fails or the key is missing
Setup:
- Get an API key from ElevenLabs
- Add to `.env.local`: `ELEVENLABS_API_KEY=your_key_here` (optionally `ELEVENLABS_MODEL_ID=optional_model_id`)
- Restart the dev server

Usage:
- Check Integration Health in Settings mode
- The browser records audio chunks and sends them to `/api/transcribe`
- Falls back to the Web Speech API if ElevenLabs fails
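Because audio is sent in chunks, transcript fragments can arrive out of order; a small helper can stitch them back into one running transcript. The chunk shape and field names here are assumptions; the real capture logic is in `hooks/useSpeechRecognition.ts`.

```typescript
interface TranscriptChunk {
  startMs: number; // offset of the audio chunk within the session
  text: string;    // text returned by /api/transcribe for that chunk
}

// Sort chunks by their session offset, drop empties, and join into
// one transcript string.
export function mergeChunks(chunks: TranscriptChunk[]): string {
  return [...chunks]
    .sort((a, b) => a.startMs - b.startMs) // chunks may arrive out of order
    .map((c) => c.text.trim())
    .filter((t) => t.length > 0)
    .join(" ");
}
```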
Setup:
- Get an API key from Presage or your vendor
- Add to `.env.local`: `PRESAGE_API_KEY=your_key_here`
- Presage is integrated as an adapter layer in `lib/presage.ts`

Behavior:
- Currently returns `null` (graceful fallback)
- Structured code is ready for the real Presage SDK
- When enabled, would augment the local focus score (50/50 blend) with vision-based attention metrics
- Does NOT upload raw frames by default
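The 50/50 blend with graceful fallback described above reduces to a one-liner. This is a sketch with an assumed helper name; the adapter itself is `lib/presage.ts`, which currently returns `null`.

```typescript
// Blend the local heuristic score with the Presage score when available;
// while the adapter is a stub (returns null), the local score passes through.
export function blendFocusScore(
  localScore: number,          // 0..1 from the local heuristics
  presageScore: number | null, // null until a real SDK is wired in
): number {
  if (presageScore === null) return localScore; // graceful fallback
  return 0.5 * localScore + 0.5 * presageScore;
}
```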
To integrate a real Presage SDK:
- Install: `npm install @presage-ai/sdk`
- Update `lib/presage.ts`:
  - Call `PresageClient.analyzeFrame(imageData)` in `getPresageFocusScore()`
  - Return an attention/engagement score
Setup:
- Create a Databricks workspace and SQL endpoint
- Add to `.env.local`:

  ```
  DATABRICKS_HOST=https://your-instance.cloud.databricks.com
  DATABRICKS_TOKEN=your_token
  DATABRICKS_TABLE=analytics_events
  DATABRICKS_WAREHOUSE_ID=your_warehouse_id
  ```

- Restart the dev server
Usage:
- Analytics Panel (bottom-right) shows the event count
- Click "Download JSON" to export events locally
- Click "Send to Databricks" to ingest them into the SQL warehouse
- Events logged: `focus_sample`, `missed_segment_created`, `catchup_generated`, `reset_completed`, plus camera/transcription start/stop
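Each logged event can be built with a small helper that matches the schema shown below, serializing the details into the `event_payload` string column (`ingest_timestamp` is added at ingestion time). The helper name is illustrative; the real logger is `lib/analytics.ts`.

```typescript
export interface AnalyticsEvent {
  event_type: string;
  event_timestamp: string; // ISO 8601
  event_payload: string;   // JSON-encoded details, stored as a string column
}

// Build one anonymized analytics event; the clock is injectable for testing.
export function buildEvent(
  type: string,
  payload: Record<string, unknown>,
  now: Date = new Date(),
): AnalyticsEvent {
  return {
    event_type: type,
    event_timestamp: now.toISOString(),
    event_payload: JSON.stringify(payload),
  };
}
```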
Events Schema (sent to Databricks):

```json
{
  "event_type": "catchup_generated",
  "event_timestamp": "2025-02-28T10:30:00Z",
  "event_payload": "{\"segmentId\": \"...\", \"provider\": \"gemini\"}",
  "ingest_timestamp": "2025-02-28T10:31:00Z"
}
```

Setup:
- Get API key from FeatherlessAI
- Add to `.env.local`:

  ```
  FEATHERLESS_API_KEY=your_key_here
  FEATHERLESS_BASE_URL=https://api.featherless.ai
  FEATHERLESS_MODEL=mistral-7b  # or your preferred model
  ```

- Restart the dev server
- Select "Summarizer: Featherless" from the dropdown

Behavior:
- Routes summarization requests to the Featherless `/v1/completions` endpoint
- Expects a JSON response with `bullets`, `keyTerms`, and `checkQuestion`
- Falls back to the local summarizer if the API fails or the key is missing
- Supports any open-source model (Mistral, Llama, etc.)
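Validating the model's JSON response is what makes the fallback safe: anything that fails to parse or lacks the expected fields returns `null`, and the caller uses the local summarizer instead. This is a sketch with assumed names; the real client is `lib/featherless.ts`.

```typescript
export interface CatchUpSummary {
  bullets: string[];
  keyTerms: string[];
  checkQuestion: string;
}

// Parse a raw completion into a summary, or null if malformed.
export function parseSummary(raw: string): CatchUpSummary | null {
  try {
    const data = JSON.parse(raw);
    if (
      Array.isArray(data.bullets) &&
      Array.isArray(data.keyTerms) &&
      typeof data.checkQuestion === "string"
    ) {
      return data as CatchUpSummary;
    }
  } catch {
    // Invalid JSON: fall through and signal the caller to use the local summarizer.
  }
  return null;
}
```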
See `.env.example` for a complete template. Copy it to `.env.local`:

```
cp .env.example .env.local
```

| Variable | Sponsor | Purpose | Required |
|---|---|---|---|
| `GEMINI_API_KEY` | Google Gemini | Summarization | ❌ No |
| `ELEVENLABS_API_KEY` | ElevenLabs | Speech-to-Text | ❌ No |
| `PRESAGE_API_KEY` | Presage AI | Focus Detection | ❌ No |
| `DATABRICKS_*` | Databricks | Analytics Export | ❌ No |
| `FEATHERLESS_API_KEY` | FeatherlessAI | Summarization | ❌ No |
All variables are optional. The app provides deterministic fallbacks for every integration.
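One way to picture the "every variable optional" guarantee is provider selection driven purely by which keys exist, resolving to the local fallback when none are set. The helper and precedence order are assumptions for illustration; the real routing is in `app/api/summarize/route.ts`.

```typescript
// Pick a summarizer backend from the environment; with no keys configured,
// everything resolves to the deterministic local fallback.
export function chooseSummarizer(
  env: Record<string, string | undefined>, // e.g. process.env
): "gemini" | "featherless" | "local" {
  if (env.GEMINI_API_KEY) return "gemini";
  if (env.FEATHERLESS_API_KEY) return "featherless";
  return "local";
}
```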
| File | Purpose |
|---|---|
| `app/page.tsx` | Main UI, state management, keyboard shortcuts, demo mode |
| `app/api/summarize/route.ts` | Multi-provider summarization (Gemini, Featherless, local fallback) |
| `app/api/transcribe/route.ts` | ElevenLabs STT transcription endpoint |
| `app/api/databricks/ingest/route.ts` | Analytics event ingestion to Databricks |
| `hooks/useCameraFocus.ts` | Local focus detection (luminance, motion, activity, page visibility) |
| `hooks/useSpeechRecognition.ts` | Web Speech API transcript capture |
| `lib/analytics.ts` | Event logging and export (localStorage-based) |
| `lib/presage.ts` | Presage AI adapter (stub, ready to integrate) |
| `lib/featherless.ts` | FeatherlessAI inference client |
| `components/AnalyticsPanel.tsx` | Event dashboard and export UI |
- Issue: `Date.now()` in initial state causes an SSR/client mismatch
- Fix: Initialize state to `0`, then set the real time in `useEffect` after mount
- Code: `page.tsx` uses deterministic initial state

- The sample transcript now anchors to `sessionElapsedMs` (session start time)
- Demo missed segments are generated with overlapping time windows
- Ensures reliable summarization during judging
```
npm run dev    # Start dev server (http://localhost:3000)
npm run build  # Production build
npm run start  # Run production build
npm run lint   # ESLint check
```

✅ Camera input (Web Speech API)
✅ Transcript segments
✅ Focus scores and missed segments
✅ Summaries and catch-up cards
✅ Analytics events (stored in localStorage)
🔄 Transcript text → Gemini / FeatherlessAI API (for summarization)
🔄 Audio chunks → ElevenLabs (for transcription)
🔄 Anonymized metrics → Databricks (for analytics)
🔄 Local storage JSON → User device (download only)
❌ Raw camera frames are never uploaded
❌ Frame-level metrics (luminance, motion) are computed locally only
❌ Presage integration (when available) augments local heuristics only
| Feature | Chrome | Firefox | Safari | Edge |
|---|---|---|---|---|
| Web Speech API | ✅ | ❌ | ✅ | ✅ |
| Camera (getUserMedia) | ✅ | ✅ | ✅ | ✅ |
| localStorage | ✅ | ✅ | ✅ | ✅ |
| Clipboard API | ✅ | ✅ | ✅ | ✅ |
Recommended: Chrome/Chromium for best Web Speech API support.
| Key | Action |
|---|---|
| M | Toggle manual missed segment mark |
| C | Toggle camera on/off |
| T | Toggle transcription on/off |
- Gemini API integration (with JSON validation & fallback)
- ElevenLabs STT endpoint (`/api/transcribe`)
- Presage adapter layer (stub ready for SDK)
- Databricks ingestion (`/api/databricks/ingest`)
- FeatherlessAI summarization client
- Analytics event logger (localStorage + export)
- Demo mode reducer (6s unfocus threshold)
- Generated demo missed segments
- Keyboard shortcuts (M, C, T)
- Analytics panel with export UI
- Real Presage SDK integration
- WebRTC audio streaming to ElevenLabs (instead of chunked)
- Databricks Delta table append (vs. SQL insert)
- Custom model selection UI
- Persistent session across tabs
Submit issues or PRs to improve demo reliability, add new integrations, or enhance accessibility.
MIT (if applicable)
Built for Stevens Hackathon 2025. Shout-out to MLH and sponsor companies: Google (Gemini), ElevenLabs, Presage AI, Databricks, and FeatherlessAI.