# YTScribe

Download YouTube transcripts and manage channel archives with a unified CLI.

## Features
- 📹 Extract videos from any YouTube channel
- 📝 Download transcripts with metadata (title, author, duration, etc.)
- 📄 Save as markdown files with YAML frontmatter for easy processing
- 🔄 Track progress in CSV files to resume interrupted downloads
- 🌐 Cross-platform data directories via `platformdirs`
- 🖥️ Unified CLI with subcommands (`ytscriber download`, `extract`, `sync-all`)
## Installation

From PyPI:

```bash
pip install ytscriber
```

From source:

```bash
git clone https://github.com/dparedesi/YTScribe.git
cd YTScribe
pip install -e .
```

With development dependencies:

```bash
pip install -e ".[dev]"
```

## Quick Start

```bash
# Extract videos from a conference channel
ytscriber extract https://www.youtube.com/@AWSEventsChannel/videos \
    --count 100 \
    --folder aws-reinvent-2025 \
    --register-channel

# Download transcripts
ytscriber download --folder aws-reinvent-2025
```

## Usage

### Extract channel videos

```bash
ytscriber extract <channel_url> --count <number> --folder <folder>
```

Examples:
```bash
# AWS re:Invent 2025
ytscriber extract https://www.youtube.com/@AWSEventsChannel/videos \
    --count 100 \
    --folder aws-reinvent-2025 \
    --register-channel

# PyCon US
ytscriber extract https://www.youtube.com/@PyConUS \
    --count 50 \
    --folder pycon-2024 \
    --register-channel

# KubeCon
ytscriber extract https://www.youtube.com/@cncf/videos \
    --count 75 \
    --folder kubecon-2024 \
    --register-channel
```

Options:
| Option | Description | Default |
|---|---|---|
| `--count`, `-n` | Number of latest videos to extract | `10` |
| `--folder` | Folder under the data directory (shorthand for CSV) | - |
| `--append-csv` | Create or append to a CSV file | - |
| `--output`, `-o` | Save video IDs to a text file | - |
| `--register-channel` | Add channel to `channels.yaml` | `False` |
| `--verbose`, `-v` | Enable verbose output | `False` |
### Download transcripts

```bash
ytscriber download --folder <folder>
```

Examples:

```bash
# Download transcripts for AWS re:Invent
ytscriber download --folder aws-reinvent-2025

# With faster processing (shorter delay)
ytscriber download --folder pycon-2024 --delay 30
```

Options:
| Option | Description | Default |
|---|---|---|
| `--folder` | Folder under the data directory (shorthand for CSV/output) | - |
| `--csv` | Input CSV file with video URLs | - |
| `--output-dir` | Directory for transcript files | `outputs` |
| `--delay` | Seconds between requests | `60` |
| `--languages`, `-l` | Language codes to try | `en en-US en-GB` |
| `--verbose`, `-v` | Enable verbose output | `False` |
Single video mode:

```bash
ytscriber download https://www.youtube.com/watch?v=VIDEO_ID --output transcript.md
```

### Other commands

```bash
ytscriber add <youtube_url> --folder <collection>
ytscriber sync-all
ytscriber download-all
ytscriber config
ytscriber config --set defaults.delay=45
ytscriber status
```

## Output Format

Each transcript is saved as a Markdown file with rich metadata in the YAML frontmatter:
```markdown
---
video_id: i_cskqmWA3U
video_url: https://www.youtube.com/watch?v=i_cskqmWA3U
title: Personalize ChatGPT with custom instructions
author: OpenAI
published_date: 2025-05-12
length_minutes: 2.17
views: 60882
description: "With custom instructions, ChatGPT becomes more relevant..."
is_generated: True
is_translatable: True
---

You can find memory and custom instructions by clicking on your profile
in the top right, clicking on settings, and then personalization...
```

Files are named with the publish date for easy sorting: `2025-05-12-i_cskqmWA3U.md`
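Because each file carries flat `key: value` frontmatter, downstream scripts can read the metadata with the standard library alone. A minimal sketch (field names come from the example above; `read_frontmatter` is an illustrative helper, not part of YTScribe — use a real YAML parser such as PyYAML for anything richer):

```python
def read_frontmatter(text: str) -> dict:
    """Parse flat `key: value` YAML-style frontmatter from a transcript file."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no frontmatter block at the top
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the frontmatter
            break
        key, sep, value = line.partition(":")
        if not sep:
            continue  # skip lines without a key: value pair
        meta[key.strip()] = value.strip().strip('"')
    return meta


sample = """---
video_id: i_cskqmWA3U
title: Personalize ChatGPT with custom instructions
published_date: 2025-05-12
---
Transcript body...
"""
meta = read_frontmatter(sample)
print(meta["video_id"])  # i_cskqmWA3U
```

The date-prefixed filenames mean a plain lexicographic sort of the transcript directory is already chronological.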
## AI Summarization

To use the AI summarization features, you need an API key from OpenRouter.

- Get an API key: sign up at OpenRouter and create a key.
- Configure the environment: create a `.env` file in the project root and add your key:

  ```bash
  cp .env.example .env
  ```

  ```
  OPENROUTER_API_KEY=sk-or-your-key-here
  ```

- Recommended model: by default, the tool uses `xiaomi/mimo-v2-flash:free`, which is free and fast. You can change this with the `--model` flag, or set a default with `ytscriber config --set summarization.model=...`.
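Since `.env` is just a plain `KEY=value` list, a script that needs the key outside the CLI can load it without extra dependencies. A hedged sketch (the file format and variable name come from the setup above; `load_env` is our own helper, not YTScribe code — `python-dotenv` is the usual library for this):

```python
import os
import tempfile


def load_env(path: str) -> None:
    """Load KEY=value pairs from a .env file into os.environ.

    Skips comments and blank lines; variables already set win.
    """
    try:
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # no .env is fine; the variable may be set elsewhere


# Demo with a temporary file and a clearly fake key
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("# comment line\nOPENROUTER_API_KEY=sk-or-demo-not-a-real-key\n")
    env_path = fh.name

load_env(env_path)
api_key = os.environ.get("OPENROUTER_API_KEY")
```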
### Summarize transcripts

```bash
ytscriber summarize <folder_name> [options]
```

Examples:

```bash
# Summarize the random folder
ytscriber summarize random

# Summarize all folders
ytscriber summarize --all

# Dry run to preview changes
ytscriber summarize random --dry-run
```

Options:
| Option | Description | Default |
|---|---|---|
| `--all` | Process all folders | `False` |
| `--dry-run` | Preview changes without writing | `False` |
| `--force` | Overwrite existing summaries | `False` |
| `--model` | OpenRouter model to use | `xiaomi/mimo-v2-flash:free` |
## CSV Format

```csv
url,title,duration_minutes,view_count,description,transcript_downloaded,summary_done
https://youtube.com/watch?v=...,Talk Title,45.5,1234,Description...,success,
```

| Column | Description |
|---|---|
| `transcript_downloaded` | Status: `success`, `error: <reason>`, or empty |
| `summary_done` | Track whether you've processed the transcript |
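The `transcript_downloaded` column is what makes resuming possible: a run only needs the rows where it is empty or recorded an error. A sketch of that filter using only the `csv` module (column names from the format above; `pending_rows` is illustrative, not YTScribe's internal code):

```python
import csv
import io


def pending_rows(csv_text: str) -> list:
    """Return rows whose transcript still needs downloading."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        row for row in reader
        if row["transcript_downloaded"] != "success"  # empty or error rows
    ]


sample = (
    "url,title,duration_minutes,view_count,description,transcript_downloaded,summary_done\n"
    "https://youtube.com/watch?v=aaa,Done Talk,45.5,1234,desc,success,\n"
    "https://youtube.com/watch?v=bbb,Failed Talk,30.0,99,desc,error: no captions,\n"
    "https://youtube.com/watch?v=ccc,New Talk,12.0,5,desc,,\n"
)
todo = pending_rows(sample)
print([row["title"] for row in todo])  # ['Failed Talk', 'New Talk']
```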
## Project Structure

```
YTScribe/
├── src/
│   └── ytscriber/
│       ├── __init__.py        # Package exports
│       ├── cli.py             # Command-line interface
│       ├── downloader.py      # Transcript downloading
│       ├── extractor.py       # Channel video extraction
│       ├── csv_handler.py     # CSV operations
│       ├── metadata.py        # Video metadata extraction
│       ├── models.py          # Data models
│       ├── exceptions.py      # Custom exceptions
│       ├── logging_config.py  # Logging setup
│       └── utils.py           # Utility functions
├── scripts/                   # Automation scripts
├── tests/                     # Unit tests
├── prompts/                   # AI prompts for analysis
├── examples/                  # Example shell scripts
├── pyproject.toml             # Project configuration
└── README.md
```
## Data Directory

By default, data is stored in:

- macOS/Windows: `~/Documents/YTScriber`
- Linux: `~/ytscriber`
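A script can resolve the same locations itself; a minimal stdlib-only sketch (the package actually uses `platformdirs`, so treat this as an approximation of the rule stated above, not its exact logic):

```python
import sys
from pathlib import Path


def default_data_dir(platform: str = sys.platform) -> Path:
    """Approximate YTScribe's default data directory per platform."""
    home = Path.home()
    if platform.startswith(("darwin", "win")):
        return home / "Documents" / "YTScriber"  # macOS / Windows
    return home / "ytscriber"  # Linux and everything else


print(default_data_dir("linux"))   # e.g. /home/<user>/ytscriber
print(default_data_dir("darwin"))  # e.g. /Users/<user>/Documents/YTScriber
```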
Example structure:

```
YTScriber/
├── aws-reinvent-2025/
│   ├── videos.csv
│   └── transcripts/
│       ├── 2025-12-03-abc123xyz.md
│       └── 2025-12-03-def456uvw.md
├── pycon-2024/
│   ├── videos.csv
│   └── transcripts/
└── kubecon-eu-2024/
    ├── videos.csv
    └── transcripts/
```
## Migrating from the Old Layout

If you used the old `transcript-*` commands and a repo-local `data/` folder:

- Move your existing `data/` folder into the new data directory above.
- Copy `channels.yaml` into the same data directory if you rely on `sync-all`.
- Update commands:
  - `transcript-extract` -> `ytscriber extract`
  - `transcript-download` -> `ytscriber download`
  - `transcript-add` -> `ytscriber add`
  - `transcript-summarize` -> `ytscriber summarize`
- Optionally set defaults with `ytscriber config --set defaults.delay=...`.
## Rate Limiting

YouTube may rate-limit or block your IP if you make too many requests:

- Use reasonable delays: the default of 60 seconds between requests is safe
- Resume capability: progress is tracked in the CSV, so downloads can resume after interruption
- Start small: test with 10-20 videos before running large batches
- Respect limits: if you get blocked, wait 30-60 minutes before retrying
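The advice above boils down to a fixed pause between consecutive requests, which is what `--delay` controls. A hedged sketch of that pacing loop (delay value echoes the default above; `paced` is an illustrative helper, not YTScribe's API):

```python
import time


def paced(items, delay_seconds: float = 60, sleep=time.sleep):
    """Yield items with a pause between each, mirroring --delay.

    The `sleep` function is injectable so tests can avoid real waiting.
    """
    for i, item in enumerate(items):
        if i > 0:
            sleep(delay_seconds)  # wait between requests, not before the first
        yield item


# Demo with a recording fake sleep instead of real 60 s waits
calls = []
processed = list(paced(["vid1", "vid2", "vid3"], delay_seconds=60, sleep=calls.append))
print(processed)  # ['vid1', 'vid2', 'vid3']
print(calls)      # [60, 60]  -- two pauses for three items
```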
## Development

```bash
# Clone repository
git clone https://github.com/dparedesi/YTScribe.git
cd YTScribe

# Install with development dependencies
pip install -e ".[dev]"

# Install pre-commit hooks (optional)
pre-commit install
```

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=ytscriber

# Run specific test file
pytest tests/test_utils.py
```

```bash
# Format code
black src tests

# Lint code
ruff check src tests

# Type checking
mypy src
```

## Python API

```python
from ytscriber import TranscriptDownloader, ChannelExtractor

# Extract videos from a channel
extractor = ChannelExtractor()
videos = extractor.extract_videos(
    "https://www.youtube.com/@AWSEventsChannel/videos",
    max_videos=10,
)

# Download transcripts
downloader = TranscriptDownloader(
    languages=["en", "en-US"],
    delay=30,
    output_dir="transcripts",
)

for video in videos:
    result = downloader.download(video.video_id, video.url)
    if result.success:
        print(f"Downloaded: {video.title}")
    else:
        print(f"Failed: {result.error_message}")
```

## Troubleshooting

**No transcript available?**

- The video may not have captions/transcripts available
- Try different language codes: `--languages en en-US auto`

**Rate limited or blocked?**

- Wait 30-60 minutes before retrying
- Increase the delay: `--delay 120`
- Use a different network/IP if the block persists

**Metadata extraction failed?**

- The transcript will still download, just without the extra metadata
- Check that the video is accessible and not private

**Interrupted download?**

- Just run the same command again - it will skip already downloaded videos
- Progress is saved to the CSV after each video
## Requirements

- Python 3.9+
- youtube-transcript-api
- yt-dlp
- pytube
## License

MIT License - see LICENSE for details.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Run tests and linting (`pytest && ruff check .`)
4. Commit your changes (`git commit -m 'Add amazing feature'`)
5. Push to the branch (`git push origin feature/amazing-feature`)
6. Open a Pull Request