Turn any YouTube lecture, podcast, or educational video into bite-sized brainrot-style study clips. Paste a URL, let AI find the key concepts, and study by doomscrolling a TikTok-style feed.
YouTube URL → Download → Transcribe → AI Curate → Render → Scroll & Study
- Download — yt-dlp grabs the video and audio
- Transcribe — Whisper (local, base model) generates timestamped segments
- Curate — Claude Haiku picks the most study-worthy moments (key concepts, surprising facts, "aha" moments)
- Render — FFmpeg crops to 9:16, burns in captions, outputs short clips
- Serve — TikTok-style vertical scroll feed with autoplay
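The Render step above boils down to one FFmpeg invocation per clip. A minimal sketch of how that command might be assembled — the `build_render_cmd` helper, the exact filter chain, and the subtitle handling are illustrative assumptions, not necessarily what `renderer.py` does:

```python
def build_render_cmd(src: str, start: float, end: float, subs: str, out: str) -> list[str]:
    """Build an FFmpeg command that cuts [start, end), center-crops the
    frame to 9:16, and burns in captions (illustrative sketch)."""
    # crop=ih*9/16:ih keeps full height and center-crops width to a 9:16 frame;
    # the subtitles filter burns the caption file into the video stream.
    vf = f"crop=ih*9/16:ih,subtitles={subs}"
    return [
        "ffmpeg", "-y",
        "-ss", str(start), "-to", str(end),  # clip boundaries chosen by the curator
        "-i", src,
        "-vf", vf,
        "-c:a", "aac",
        out,
    ]

cmd = build_render_cmd("outputs/raw/video.mp4", 12.5, 42.0, "clip1.srt", "outputs/clips/clip1.mp4")
```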
BrainClipper/
├── backend/ # FastAPI (Python 3.11) — all heavy processing
│ ├── main.py # API routes + job orchestration
│ ├── pipeline/
│ │ ├── downloader.py # yt-dlp download (MP4 + MP3)
│ │ ├── transcriber.py # Whisper transcription
│ │ ├── curator.py # Claude API clip selection
│ │ └── renderer.py # FFmpeg 9:16 crop + caption burn
│ ├── prompts/
│ │ └── curator_prompt.txt # System prompt for clip curation
│ ├── jobs/ # Job state JSON files (runtime)
│ ├── outputs/
│ │ ├── raw/ # Downloaded videos (runtime)
│ │ └── clips/ # Rendered clips (runtime)
│ ├── requirements.txt
│ ├── Dockerfile
│ ├── railway.toml
│ └── .env.example
├── frontend/ # Next.js 14 (App Router) — UI only
│ ├── app/
│ │ ├── page.tsx # Home — URL input
│ │ └── watch/[jobId]/ # TikTok scroll feed
│ ├── components/
│ │ ├── UrlInput.tsx # YouTube URL submission
│ │ ├── VideoFeed.tsx # Scroll-snap container
│ │ ├── VideoCard.tsx # Per-clip player with autoplay
│ │ ├── LoadingState.tsx # Animated loading screen
│ │ ├── ErrorState.tsx # Error display with retry
│ │ └── SpotlightBg.tsx # Spotlight following the cursor
│ ├── lib/
│ │ └── api.ts # Typed API client
│ └── .env.local
├── CLAUDE.md # Project memory for Claude Code
├── AGENTS.md # Onboarding guide for AI coding agents
└── readme.md
- Python 3.11+
- Node.js 20+
- FFmpeg installed (`brew install ffmpeg` on macOS)
- An Anthropic API key
Backend:

```bash
cd backend

# Create virtual environment (first time only)
python3 -m venv venv

# Activate and install dependencies
source venv/bin/activate
pip install -r requirements.txt

# Configure environment
cp .env.example .env
# Edit .env — set your ANTHROPIC_API_KEY
# Optionally set YTDLP_COOKIES_BROWSER=chrome for YouTube auth

# Run the server
uvicorn main:app --host 127.0.0.1 --port 8000 --reload
```

Frontend:

```bash
cd frontend

# Install dependencies (first time only)
npm install

# Run the dev server
npm run dev
```

Then open http://localhost:3000, paste a YouTube URL, and wait for clips.
YouTube blocks automated downloads without authentication. Set one of these in `backend/.env`:

```bash
# Option A: Auto-read from your browser (easiest)
YTDLP_COOKIES_BROWSER=chrome

# Option B: Exported cookie file (for deployment)
# YTDLP_COOKIES_FILE=cookies.txt
```

For Option B, export cookies from an incognito YouTube session using the "Get cookies.txt LOCALLY" Chrome extension.
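A sketch of how these two settings might map onto yt-dlp options inside `downloader.py` — the env-var handling here is an assumption about the module, though `cookiesfrombrowser` genuinely takes a tuple in the `yt_dlp` Python API:

```python
def cookie_opts(env: dict[str, str]) -> dict:
    """Translate the .env cookie settings into yt-dlp options (sketch)."""
    opts: dict = {}
    browser = env.get("YTDLP_COOKIES_BROWSER")
    cookie_file = env.get("YTDLP_COOKIES_FILE")
    if browser:
        # yt-dlp's Python API expects a tuple here: (browser, profile, keyring, container)
        opts["cookiesfrombrowser"] = (browser,)
    elif cookie_file:
        opts["cookiefile"] = cookie_file
    return opts

opts = cookie_opts({"YTDLP_COOKIES_BROWSER": "chrome"})
```

The resulting dict would be merged into the options passed to `yt_dlp.YoutubeDL`.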
| Method | Route | Description |
|---|---|---|
| `GET` | `/` | Health check |
| `POST` | `/api/jobs` | Create a job — `{"url": "..."}` → `{"job_id": "..."}` |
| `GET` | `/api/jobs/{id}` | Poll job status and get clip URLs |
| `GET` | `/static/clips/{file}` | Serve rendered clip MP4s |
Each pipeline module has a standalone test block:

```bash
cd backend
source venv/bin/activate

# Test transcriber (uses macOS TTS to generate test audio)
python3 -m pipeline.transcriber

# Test curator (requires ANTHROPIC_API_KEY in .env)
python3 -m pipeline.curator

# Test renderer (generates test video with FFmpeg)
python3 -m pipeline.renderer
```

Create a fake "done" job to test the scroll feed:

```bash
# Start backend
cd backend && source venv/bin/activate && uvicorn main:app --port 8000

# The test-feed job should already exist in jobs/test-feed.json
# If not, the renderer test creates outputs/clips/test-clip.mp4

# Then visit: http://localhost:3000/watch/test-feed
```

Test URLs for different UI states:
- `/watch/test-feed` — Video feed with clips
- `/watch/test-loading` — Loading animation
- `/watch/test-error` — Error state
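If `jobs/test-feed.json` is missing, a fake "done" job can be written by hand. The schema below is an assumption — compare against the job files `main.py` actually writes:

```python
import json
import pathlib

def write_fake_job(jobs_dir: str, job_id: str = "test-feed") -> pathlib.Path:
    """Write a minimal 'done' job pointing at the renderer's test clip (sketch)."""
    job = {
        "job_id": job_id,
        "status": "done",                          # assumed status value
        "clips": ["/static/clips/test-clip.mp4"],  # produced by the renderer test
    }
    path = pathlib.Path(jobs_dir) / f"{job_id}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(job, indent=2))
    return path
```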
```bash
cd backend
railway up
```

Set environment variables in the Railway dashboard:

- `ANTHROPIC_API_KEY`
- `YTDLP_COOKIES_FILE` (upload cookies.txt as a volume)
```bash
cd frontend
vercel
```

Set in the Vercel dashboard:

- `NEXT_PUBLIC_BACKEND_URL` = your Railway URL
- End-to-end YouTube download testing — YouTube's bot detection currently blocks most automated downloads. Need to test with proper cookie auth on Railway or find alternative download strategies
- Granular job progress — Currently fakes progress messages with a timer. Add real status updates (downloading → transcribing → curating → rendering clip 1/3...)
- Error recovery — Add retry logic for transient failures (network timeouts, API rate limits)
- Mobile responsiveness — Test and polish the scroll feed on actual mobile devices
- Word-level karaoke captions — Use Whisper word timestamps + .ass subtitle files for per-word highlighting
- Split-screen mode — Gameplay footage on bottom half (classic brainrot style)
- Clip duration control — Let users choose clip length (15s, 30s, 60s)
- Share clips — Direct link to individual clips, download button
- Better video format handling — Fallback format selectors for different YouTube video types
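The error-recovery item could start as a small backoff helper wrapped around each pipeline step. A sketch — which exception types count as transient is an assumption and would differ per step (network vs. API rate limits):

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0,
                 transient=(TimeoutError, ConnectionError)):
    """Call fn(), retrying transient failures with exponential backoff (sketch)."""
    for attempt in range(attempts):
        try:
            return fn()
        except transient:
            if attempt == attempts - 1:
                raise  # out of attempts — surface the failure to the job state
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```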
- Deepgram for faster transcription (Whisper base is slow on CPU)
- Redis/Celery for job queue instead of threading
- User accounts + clip history via Supabase
- CDN for clips instead of serving from the app server
- Rate limiting to prevent abuse