araeyn/aura

Aura

Aura is a native macOS corner overlay for video calls. It watches the other participant's face on screen, estimates likely micro-expression shifts, and gives the user one practical coaching cue in real time.

Aura is designed for autistic users and anyone who struggles to read subtle facial feedback during online conversations. Instead of forcing them to guess whether someone is receptive, overwhelmed, skeptical, or drifting away, Aura surfaces a compact live read with a likely state, a confidence level, supporting cues, and a suggested next move.

What the App Does

  • Stays in the corner during Zoom, Meet, or any other video call.
  • Tracks the largest face currently visible on screen.
  • Calibrates to that person's neutral expression before making reads.
  • Shows a likely state such as Engaged and receptive, Processing carefully, Skeptical but listening, or Feeling pressure.
  • Explains why the read fired using compact cue labels.
  • Recommends an immediate conversational adjustment instead of dumping raw emotion labels.
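The calibrate-then-read behavior above can be sketched roughly as follows. This is an illustrative sketch only: the feature names, threshold, and mapping rules are assumptions for demonstration, not Aura's actual implementation; only the state labels come from the list above.

```python
# Hypothetical sketch of neutral-baseline calibration and state reads.
# Feature names and thresholds are invented for illustration.
from statistics import mean

def calibrate(neutral_frames):
    """Average per-feature values over a calibration window to get the
    person's neutral baseline."""
    keys = neutral_frames[0].keys()
    return {k: mean(f[k] for f in neutral_frames) for k in keys}

def classify(baseline, frame, threshold=0.15):
    """Compare current facial features against the neutral baseline and
    map the dominant deviations to a likely state plus supporting cues."""
    deltas = {k: frame[k] - baseline[k] for k in baseline}
    cues = [k for k, d in deltas.items() if abs(d) > threshold]
    if deltas.get("brow_furrow", 0) > threshold:
        return "Skeptical but listening", cues
    if deltas.get("smile", 0) > threshold:
        return "Engaged and receptive", cues
    if deltas.get("gaze_away", 0) > threshold:
        return "Processing carefully", cues
    return "Neutral", cues
```

Calibrating first matters because the same raw expression can be one person's neutral face and another person's frown; only deviations from the baseline carry signal.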

Why This Fits Beyond the Chatbot

Aura is not a prompt box. It is an always-on ambient assistant that combines:

  • Computer vision on live meeting video.
  • Real-time decision logic over facial signals.
  • Local HUD presentation designed for in-call use.
  • AI-generated coaching phrased as a fast, actionable response.

The core value is embodied and contextual: the system reacts to what another human is doing in the meeting, then helps the user adapt in the moment.
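The combination above can be sketched as one tick of an always-on loop. Every function here is a stand-in stub for a real component (screen capture, vision, decision logic, coaching, HUD), not Aura's actual code.

```python
# High-level sketch of the ambient loop; all callables are hypothetical stubs.
def run_overlay_tick(capture_frame, detect_largest_face,
                     classify_state, generate_advice, render_hud):
    """One tick: capture -> vision -> decision -> coaching -> HUD."""
    frame = capture_frame()                   # live meeting video (screen capture)
    face = detect_largest_face(frame)         # computer vision on the frame
    if face is None:
        return None                           # nothing to read this tick
    state, confidence = classify_state(face)  # real-time decision logic
    advice = generate_advice(state)           # AI-generated coaching text
    render_hud(state, confidence, advice)     # local in-call HUD presentation
    return state
```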

Practicality

The product is intentionally framed around likely states and response strategy, not false certainty. That framing makes it much more usable:

  • Likely state: what the person may be signaling.
  • Confidence: how stable the read is.
  • Best move: what the user should do next.
  • Avoid: what would probably make the moment worse.

That keeps the interface useful under ambiguity, which is exactly what a real assistive tool needs.

Current Architecture

Hackathon Story

Problem

Online meetings are full of subtle facial feedback. Many autistic users miss those signals or spend too much energy trying to decode them while also speaking.

Solution

Aura acts like a lightweight social co-pilot. It reads visible facial cues from the other participant and translates them into something immediately useful: what might be happening, how confident the system is, and what to do next.

Demo flow

  1. Start a video call and place Aura in the corner.
  2. Let it calibrate on the other person's neutral face.
  3. Speak normally while the overlay updates with likely social states.
  4. Show how the advice changes when the other participant looks engaged, overloaded, skeptical, or disengaged.

Sponsor Fit

  • Featherless.ai: already integrated for concise coaching generation.
  • ElevenLabs: strong next addition for optional whisper-style audio coaching or spoken summaries.
  • Opennote: strong next addition for meeting reflection journals, user studies, and published planning artifacts for judging.

Notes

  • Aura currently uses CGWindowListCreateImage, which still compiles but is deprecated on newer macOS versions (as of macOS 14). The migration path is ScreenCaptureKit.
  • Featherless coaching is optional. If FEATHERLESS_API_KEY is not set, the app falls back to local advice templates.
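The fallback described above follows a simple pattern: check for the key, otherwise use local templates. Below is a sketch of that pattern; only the FEATHERLESS_API_KEY variable name comes from this README, while the function names and template table are hypothetical.

```python
# Sketch of the optional-API fallback pattern. Only FEATHERLESS_API_KEY is
# from the README; everything else here is an invented illustration.
import os

LOCAL_TEMPLATES = {
    "Feeling pressure": "Slow down and invite a question.",
    "Engaged and receptive": "Keep your current pace.",
}

def call_featherless(state):
    """Placeholder for the real hosted-model call (not implemented here)."""
    raise NotImplementedError

def coaching_for(state):
    """Use the hosted model when an API key is present; otherwise fall back
    to local advice templates so the overlay keeps working offline."""
    if os.environ.get("FEATHERLESS_API_KEY"):
        return call_featherless(state)
    return LOCAL_TEMPLATES.get(state, "Stay steady and check in.")
```

Degrading to local templates rather than failing keeps the overlay useful in demos and on machines without credentials.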
