Aura is a native macOS corner overlay for video calls. It watches the other participant's face on screen, estimates likely micro-expression shifts, and gives the user one practical coaching cue in real time.
Aura is designed for autistic users and anyone who struggles to read subtle facial feedback during online conversations. Instead of forcing them to guess whether someone is receptive, overwhelmed, skeptical, or drifting away, it surfaces a compact live read with a likely state, confidence, supporting cues, and the best next move.
- Stays in the corner during Zoom, Meet, or any other video call.
- Tracks the largest face currently visible on screen.
- Calibrates to that person's neutral expression before making reads (see the calibration sketch after this list).
- Shows a likely state such as *Engaged and receptive*, *Processing carefully*, *Skeptical but listening*, or *Feeling pressure*.
- Explains why the read fired using compact cue labels.
- Recommends an immediate conversational adjustment instead of dumping raw emotion labels.
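The calibration step could look roughly like the minimal sketch below, assuming a single scalar cue value (e.g. a brow-raise proxy) extracted per frame. The class and its names are illustrative, not the actual code in `AuraEngine.swift`:

```swift
import Foundation

/// Hypothetical calibration sketch: collect a short burst of frames to learn
/// the person's neutral expression, then report deviations from that personal
/// baseline rather than from an absolute scale.
final class BaselineCalibrator {
    private var samples: [Double] = []
    private(set) var baseline: Double?

    /// Feed one cue value per captured frame until the baseline is locked in.
    func ingest(_ cueValue: Double, calibrationFrames: Int = 60) {
        guard baseline == nil else { return }
        samples.append(cueValue)
        if samples.count >= calibrationFrames {
            baseline = samples.reduce(0, +) / Double(samples.count)
        }
    }

    /// Signed deviation from the learned neutral; nil while still calibrating.
    func deviation(of cueValue: Double) -> Double? {
        baseline.map { cueValue - $0 }
    }
}
```

Calibrating per person is the key design choice here: a raised brow is only meaningful relative to that participant's own resting face.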
Aura is not a prompt box. It is an always-on ambient assistant that combines:
- Computer vision on live meeting video.
- Real-time decision logic over facial signals.
- Local HUD presentation designed for in-call use.
- AI-generated coaching phrased as a fast, actionable response.
The core value is embodied and contextual: the system reacts to what another human is doing in the meeting, then helps the user adapt in the moment.
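Conceptually, each captured frame flows through those components in one short loop. The sketch below is illustrative glue with placeholder protocol names, not Aura's actual types:

```swift
import CoreGraphics

// Placeholder protocols standing in for the real components listed above.
protocol CueExtractor    { func extractCues(from frame: CGImage) -> [String: Double]? }
protocol MeetingEngine   { func update(with cues: [String: Double]) -> String }
protocol OverlayRenderer { func render(_ guidance: String) }

/// One conceptual tick of the pipeline: vision -> decision logic -> HUD.
func tick(frame: CGImage,
          capture: CueExtractor,
          engine: MeetingEngine,
          hud: OverlayRenderer) {
    guard let cues = capture.extractCues(from: frame) else { return } // no face visible
    hud.render(engine.update(with: cues))
}
```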
The product is intentionally framed around likely states and response strategy, not fake certainty. That makes it much more usable:
- **Likely state:** what the person may be signaling.
- **Confidence:** how stable the read is.
- **Best move:** what the user should do next.
- **Avoid:** what would probably make the moment worse.
That keeps the interface useful under ambiguity, which is exactly what a real assistive tool needs.
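One way to model that four-part read in Swift is shown below; this is a hypothetical shape for illustration, not the actual type in the codebase:

```swift
/// Hypothetical model of a single HUD read; field names are illustrative.
struct AuraRead {
    enum LikelyState: String {
        case engagedAndReceptive  = "Engaged and receptive"
        case processingCarefully  = "Processing carefully"
        case skepticalButListening = "Skeptical but listening"
        case feelingPressure      = "Feeling pressure"
    }

    let state: LikelyState   // what the person may be signaling
    let confidence: Double   // 0...1, smoothed across recent frames
    let cues: [String]       // compact labels explaining why the read fired
    let bestMove: String     // the immediate conversational adjustment
    let avoid: String        // what would probably make the moment worse
}
```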
- `Aura/Aura/AuraApp.swift`: launches the floating macOS overlay and anchors it in the lower-right corner.
- `Aura/Aura/ScreenCaptureManager.swift`: captures the active screen, finds the most prominent face, and extracts facial cue proxies from Vision landmarks (see the sketch after this list).
- `Aura/Aura/AuraEngine.swift`: calibrates a baseline, scores likely meeting states, smooths confidence, and decides what guidance to show.
- `Aura/Aura/AuraHUDView.swift`: renders the compact meeting overlay with state, confidence, metrics, cues, and suggested action.
- `Aura/Aura/NetworkManager.swift`: optionally uses Featherless to rewrite the coaching line into a tighter real-time suggestion.
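The face-tracking step could look roughly like the sketch below, which uses Apple's Vision framework to detect faces in a captured frame and keep the one with the largest bounding box. The function is illustrative and simplified; the real `ScreenCaptureManager.swift` also extracts landmark-based cue proxies:

```swift
import Vision
import CoreGraphics

/// Sketch of the face-tracking step, assuming a captured CGImage of the
/// screen. Error handling is simplified for brevity.
func largestFace(in screenshot: CGImage) -> VNFaceObservation? {
    let request = VNDetectFaceLandmarksRequest()
    let handler = VNImageRequestHandler(cgImage: screenshot, options: [:])
    try? handler.perform([request])
    // Pick the most prominent face: the observation with the largest
    // normalized bounding-box area.
    return request.results?.max { a, b in
        (a.boundingBox.width * a.boundingBox.height) <
        (b.boundingBox.width * b.boundingBox.height)
    }
}
```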
**Problem**
Online meetings are full of subtle facial feedback. Many autistic users miss those signals or spend too much energy trying to decode them while also speaking.
**Solution**
Aura acts like a lightweight social co-pilot. It reads visible facial cues from the other participant and translates them into something immediately useful: what might be happening, how confident the system is, and what to do next.
**Demo flow**
- Start a video call and place Aura in the corner.
- Let it calibrate on the other person's neutral face.
- Speak normally while the overlay updates with likely social states.
- Show how the advice changes when the other participant looks engaged, overloaded, skeptical, or disengaged.
- **Featherless.ai:** already integrated for concise coaching generation (fallback behavior sketched below).
- **ElevenLabs:** strong next addition for optional whisper-style audio coaching or spoken summaries.
- **Opennote:** strong next addition for meeting reflection journals, user studies, and published planning artifacts for judging.
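Because the Featherless step is optional, the coaching call can degrade gracefully. The sketch below shows one way to do that; the `rewriteWithFeatherless` closure is hypothetical, and the actual `NetworkManager.swift` may structure this differently:

```swift
import Foundation

/// Hypothetical sketch of optional coaching: tighten the local template via
/// Featherless when an API key is available, otherwise keep the template.
func coachingLine(localTemplate: String,
                  rewriteWithFeatherless: (String, String) async throws -> String) async -> String {
    guard let key = ProcessInfo.processInfo.environment["FEATHERLESS_API_KEY"],
          !key.isEmpty else {
        return localTemplate // no key set: fall back to local advice templates
    }
    // Best effort: if the network call fails, the local template still ships.
    return (try? await rewriteWithFeatherless(localTemplate, key)) ?? localTemplate
}
```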
- Aura currently uses `CGWindowListCreateImage`, which typechecks but is deprecated on newer macOS versions. The migration path is `ScreenCaptureKit` (see the sketch below).
- Featherless coaching is optional. If `FEATHERLESS_API_KEY` is not set, the app falls back to local advice templates.
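A rough sketch of that migration using `ScreenCaptureKit`'s one-shot screenshot API (macOS 14+); configuration details will vary, and screen-recording permission is still required:

```swift
import ScreenCaptureKit
import CoreGraphics

enum CaptureError: Error { case noDisplay }

/// One-shot capture of the main display via ScreenCaptureKit, replacing the
/// deprecated CGWindowListCreateImage path.
@available(macOS 14.0, *)
func captureScreen() async throws -> CGImage {
    // Enumerate shareable content and pick the first display.
    let content = try await SCShareableContent.excludingDesktopWindows(
        false, onScreenWindowsOnly: true)
    guard let display = content.displays.first else {
        throw CaptureError.noDisplay
    }
    let filter = SCContentFilter(display: display, excludingWindows: [])
    let config = SCStreamConfiguration()
    config.width = display.width
    config.height = display.height
    return try await SCScreenshotManager.captureImage(
        contentFilter: filter, configuration: config)
}
```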