Turn any video into an interactive shadowing workstation.
AI-powered smart segmentation · Single-sentence looping · Hands-free hotkeys · 100% local & offline
Afterglow is a local-first language learning player designed for shadowing practice. Drop in any video file, and Afterglow will:
- Transcribe it into timestamped sentences using Faster-Whisper
- Segment the speech into logical sentence boundaries
- Play each sentence with configurable looping and gap pauses — so you can listen, pause, and repeat
No cloud APIs. No subscriptions. Everything runs on your machine.
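The segmentation step above can be sketched as follows. This is a minimal, hypothetical version of how timestamped ASR segments might be merged into sentence-level units at terminal punctuation; the function name and the exact boundary logic Afterglow uses are assumptions:

```python
def merge_into_sentences(segments: list[dict]) -> list[dict]:
    """Merge raw ASR segments into sentences at terminal punctuation.

    `segments` is assumed to look like Faster-Whisper output:
    [{"start": 0.0, "end": 1.8, "text": "Hello"}, ...]
    """
    sentences, buf, start = [], [], None
    for seg in segments:
        if start is None:
            start = seg["start"]
        buf.append(seg["text"].strip())
        if buf[-1].endswith((".", "!", "?")):  # simple sentence-boundary heuristic
            sentences.append({"start": start, "end": seg["end"], "text": " ".join(buf)})
            buf, start = [], None
    if buf:  # flush a trailing fragment without terminal punctuation
        sentences.append({"start": start, "end": segments[-1]["end"], "text": " ".join(buf)})
    return sentences

raw = [
    {"start": 0.0, "end": 1.2, "text": "Drop in"},
    {"start": 1.2, "end": 2.5, "text": "any video."},
    {"start": 2.5, "end": 4.0, "text": "It just works."},
]
print(merge_into_sentences(raw))
```

The first two fragments merge into one sentence ending at 2.5 s; the third already ends with a period and becomes its own sentence.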
| Feature | Description |
|---|---|
| AI Transcription | Powered by Faster-Whisper (base model, ~143 MB). Auto-downloads on first run. |
| Single-Sentence Loop | Repeat current sentence 1 / 2 / 3 / 5 / ∞ times before advancing. |
| Smart Gap | Auto-pause between sentences — giving you time to shadow. |
| Hotkeys | Space play/pause · Enter replay · ←→ prev/next · ↑↓ speed |
| Segment Merge | Combine broken segments with one click. |
| Subtitle Blocker | Hide embedded subtitles so you rely on your ears. |
| Smart Caching | SHA-256 file hash → skip re-transcription on reload. |
| Playback Speed | 0.3× to 3.0× in 0.1× steps. |
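The caching scheme can be sketched like this, assuming the cache key is the SHA-256 hex digest of the file's bytes and each entry is stored as `<hash>.json` in the runtime cache directory. The helper names here are illustrative, not Afterglow's actual API:

```python
import hashlib
from pathlib import Path

CACHE_DIR = Path.home() / ".afterglow" / "cache"

def file_hash(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large videos are never read whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def cached_transcript(path: Path) -> Path:
    """Where a cached transcription for this file would live (illustrative)."""
    return CACHE_DIR / f"{file_hash(path)}.json"
```

Because the key depends only on the file's contents, renaming or moving a video does not invalidate its cached transcription.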
| | Gap OFF | Gap ON |
|---|---|---|
| Loop OFF | Normal continuous play | Auto-pause after each sentence |
| Loop ON | Repeat N times, then next | Repeat → pause → repeat cycle |
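The matrix above can be expressed as a small decision function. This is a sketch only; the function name and return values are hypothetical, not Afterglow's actual implementation:

```python
def after_sentence(loop_on: bool, gap_on: bool, plays: int, loop_count: int) -> str:
    """What the player does once the current sentence finishes playing."""
    if loop_on and plays < loop_count:  # repetitions still remaining
        return "pause-then-repeat" if gap_on else "repeat"
    return "pause-then-advance" if gap_on else "advance"

# Loop ON (3x) + Gap ON, after the first play: repeat with a shadowing pause
print(after_sentence(True, True, plays=1, loop_count=3))  # → pause-then-repeat
```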
- Python 3.13+
- Node.js 22+ (with npm)
```bash
git clone https://github.com/your-username/Afterglow.git
cd Afterglow
start.bat
```

The script installs dependencies, finds free ports, starts both services, and opens your browser.
Backend:

```bash
cd backend
pip install -r requirements.txt
uvicorn app.main:app --reload --port 8000
```

Frontend:

```bash
cd frontend
npm install
npm run dev -- --port 5173
```

Then open http://localhost:5173 in your browser.
```
Afterglow/
├── frontend/              # React 19 + TypeScript + Vite
│   ├── src/
│   │   ├── components/    # UI components (VideoPlayer, TranscriptPanel, ...)
│   │   ├── hooks/         # Custom hooks (useShadowPlayer, useHotkeys, ...)
│   │   ├── services/      # API client & file hashing
│   │   └── types/         # TypeScript interfaces
│   └── vite.config.ts
│
├── backend/               # FastAPI + Faster-Whisper
│   ├── app/
│   │   ├── routers/       # /api/transcribe, /api/cache
│   │   └── services/      # Whisper model wrapper
│   └── requirements.txt
│
├── docs/                  # Requirements & plans
├── start.bat              # Windows launcher
└── README.md
```
Runtime data is stored in `~/.afterglow/`:

```
~/.afterglow/
├── cache/    # Transcription cache (JSON, keyed by SHA-256)
└── models/   # Whisper model files (auto-downloaded)
```
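A cache entry is a plain JSON file. Its exact schema is an assumption here, but it would plausibly carry the timestamped sentence list produced by transcription, e.g.:

```json
{
  "language": "en",
  "segments": [
    { "start": 0.0, "end": 2.5, "text": "Drop in any video." },
    { "start": 2.5, "end": 4.0, "text": "It just works." }
  ]
}
```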
| Method | Endpoint | Description |
|---|---|---|
| `POST` | `/api/transcribe` | Upload video file → get timestamped segments |
| `GET` | `/api/cache/{hash}` | Retrieve cached transcription by file hash |
| `PUT` | `/api/cache/{hash}` | Store transcription result in cache |
| Layer | Technology |
|---|---|
| Frontend | React 19, TypeScript, Vite 7, CSS3 |
| Backend | Python 3.13, FastAPI, Uvicorn |
| ASR Engine | Faster-Whisper (CTranslate2) |
| Caching | SHA-256 hash → local JSON files |
- [x] Core video player with transcript panel
- [x] AI transcription (Faster-Whisper)
- [x] Single-sentence loop & smart gap
- [x] Hotkey controls
- [x] Segment merging
- [x] Transcription caching
- [ ] Recording & playback comparison
- [ ] Pronunciation scoring
- [ ] Desktop app packaging (Electron / Tauri)
This project is licensed under the CC License.
