Generate static websites for TV show quote search and meme generation. Similar to Frinkiac and Morbotron, but self-hosted and for any show.
- Full-text search - Search through all subtitles with instant results using lunr.js
- Frame extraction - Automatically extract frames at subtitle timestamps
- Meme generator - Create memes with custom text overlays
- Multiple subtitle formats - Supports SRT, ASS/SSA, and WebVTT
- Embedded subtitles - Extract subtitles from MKV/MP4 containers
- Static output - No server required, host anywhere (GitHub Pages, S3, etc.)
- Fast & parallel - Uses Rayon for parallel frame extraction
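
Conceptually, frame extraction boils down to seeking to each subtitle timestamp and grabbing a single frame, with the work fanned out across Rayon workers. The sketch below only illustrates that idea; the `extract_frame` helper, the ffmpeg arguments, and the hard-coded timestamps are assumptions for illustration, not Anytron's actual implementation.

```rust
use rayon::prelude::*;
use std::path::Path;
use std::process::Command;

/// Hypothetical helper: grab one JPEG frame from `video` at `seconds`.
fn extract_frame(video: &Path, seconds: f64, out: &Path) -> std::io::Result<()> {
    let status = Command::new("ffmpeg")
        .arg("-ss").arg(seconds.to_string())          // seek to the timestamp
        .arg("-i").arg(video)
        .args(["-frames:v", "1", "-q:v", "2", "-y"])  // one frame, high JPEG quality, overwrite
        .arg(out)
        .status()?;
    if !status.success() {
        return Err(std::io::Error::new(std::io::ErrorKind::Other, "ffmpeg failed"));
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let video = Path::new("Show.S01E01.mkv");
    // In Anytron these timestamps come from the parsed subtitle entries.
    let timestamps = vec![12.5_f64, 83.0, 145.25];

    // Rayon runs the per-frame ffmpeg invocations in parallel.
    timestamps.par_iter().enumerate().try_for_each(|(i, &t)| {
        extract_frame(video, t, Path::new(&format!("frame_{i}.jpg")))
    })
}
```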
Install with cargo:

```bash
cargo install anytron
```

Or build from source:

```bash
git clone https://github.com/bkero/anytron.git
cd anytron
cargo install --path .
```

Requirements:

- FFmpeg must be installed and available in your PATH (a quick check is sketched after this list)
- Rust 1.70+ (for building from source)
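
Since Anytron shells out to FFmpeg, the binary has to be resolvable on `PATH`. The snippet below is a hypothetical check written against the real `ffmpeg -version` flag; it is not part of Anytron's CLI.

```rust
use std::process::Command;

fn main() {
    // `ffmpeg -version` exits successfully when the binary is on PATH.
    match Command::new("ffmpeg").arg("-version").output() {
        Ok(out) if out.status.success() => {
            // Print the first line of the version banner.
            let banner = String::from_utf8_lossy(&out.stdout);
            println!("{}", banner.lines().next().unwrap_or("ffmpeg found"));
        }
        _ => eprintln!("ffmpeg not found on PATH; install it before running anytron"),
    }
}
```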
- Organize your media files:
```
my_show/
├── anytron.toml        # Configuration file
├── Show.S01E01.mkv     # Video files with embedded subtitles
├── Show.S01E02.mkv     # ... or external subtitle files
├── Show.S01E02.srt
└── ...
```
- Create a configuration file (`anytron.toml`):
```toml
[show]
name = "My Favorite Show"
description = "Quotes from My Favorite Show"

[site]
title = "My Show Quote Search"
base_url = "/"
enable_memes = true
```

- Generate the site:
```bash
anytron generate ./my_show -o ./output
```

- Preview locally:

```bash
anytron serve ./output
# Open http://localhost:8080 in your browser
```

- Deploy to any static hosting (GitHub Pages, Netlify, S3, etc.)
```bash
# Generate a static site
anytron generate <INPUT_DIR> [OPTIONS]

# Validate input directory structure
anytron validate <INPUT_DIR>

# Serve generated site locally
anytron serve <DIRECTORY> [--port PORT]
```

| Option | Description |
|---|---|
| `-o, --output <DIR>` | Output directory (default: `output`) |
| `-j, --jobs <N>` | Number of parallel workers |
| `--quality <N>` | JPEG quality 1-100 (default: 85) |
| `--thumb-width <N>` | Thumbnail width in pixels (default: 320) |
| `--skip-frames` | Skip frame extraction (use existing) |
| `--clean` | Clean output directory before generating |
| `--seasons <LIST>` | Only process specific seasons (e.g., 1,2,3) |
| `--episodes <LIST>` | Only process specific episodes (e.g., S01E01,S02E05) |
| `-v, --verbose` | Increase verbosity (-v, -vv, -vvv) |
Create an anytron.toml file in your input directory:
```toml
[show]
name = "Show Name"
description = "A description of the show"

[site]
title = "Quote Search"      # Page title
base_url = "/"              # Base URL for links (use "/" for root)
theme_color = "#1a1a2e"     # Theme color for mobile browsers
enable_memes = true         # Enable meme generator on caption pages
```

Anytron uses filename patterns to identify episodes:
- `Show.S01E01.mkv` - Season 1, Episode 1
- `Show.S01E01.Episode.Title.mp4` - With episode title
- `show.s1e5.avi` - Lowercase, single-digit season
- `Show.1x05.mkv` - Alternative format
- `Show [1x05].mkv` - Bracketed format
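
For illustration, all of the patterns above can be captured with a small regular expression. The sketch below uses the `regex` crate and is purely hypothetical; it is not Anytron's actual filename parser.

```rust
use regex::Regex;

/// Hypothetical parser: pull (season, episode) numbers out of a filename.
fn parse_episode(name: &str) -> Option<(u32, u32)> {
    // Matches S01E01 / s1e5 as well as 1x05 / [1x05].
    let re = Regex::new(r"(?i)s(\d{1,2})e(\d{1,2})|(\d{1,2})x(\d{1,2})").unwrap();
    let caps = re.captures(name)?;
    let num = |a: usize, b: usize| -> Option<u32> {
        caps.get(a).or_else(|| caps.get(b))?.as_str().parse().ok()
    };
    Some((num(1, 3)?, num(2, 4)?))
}

fn main() {
    for name in ["Show.S01E01.mkv", "show.s1e5.avi", "Show [1x05].mkv"] {
        println!("{name} -> {:?}", parse_episode(name));
    }
}
```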
Subtitle files should match video files:
- `Show.S01E01.srt` for `Show.S01E01.mkv`
- `Show.S01E01.en.srt` - Language-tagged subtitle
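
A hypothetical version of that lookup, checking for a plain or language-tagged sibling file next to the video (the real matching logic may differ):

```rust
use std::path::{Path, PathBuf};

/// Hypothetical lookup: find an external subtitle next to `video`.
fn find_subtitle(video: &Path) -> Option<PathBuf> {
    let stem = video.file_stem()?.to_str()?;
    let dir = video.parent().unwrap_or(Path::new("."));
    // Try the plain name first, then a language-tagged variant.
    for candidate in [format!("{stem}.srt"), format!("{stem}.en.srt")] {
        let path = dir.join(candidate);
        if path.exists() {
            return Some(path);
        }
    }
    None
}

fn main() {
    match find_subtitle(Path::new("Show.S01E01.mkv")) {
        Some(p) => println!("external subtitles: {}", p.display()),
        None => println!("no external subtitles; fall back to embedded tracks"),
    }
}
```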
```
output/
├── index.html                 # Search page
├── caption/
│   └── S01E01-12345.html      # Caption detail pages
├── img/
│   ├── frames/
│   │   └── S01E01/
│   │       └── 12345.jpg      # Full-size frames
│   └── thumbs/
│       └── S01E01/
│           └── 12345.jpg      # Thumbnails
├── search/
│   └── index.json             # Search index
├── css/
│   └── style.css
└── js/
    └── bundle.js              # Search + meme generator
```
When multiple subtitle tracks are available, Anytron selects the best one:
- English subtitles preferred over other languages
- Default track preferred if marked
- Regular subtitles preferred over SDH/CC (hearing impaired)
- Full subtitles preferred over forced-only tracks
- Text-based formats (SRT, ASS) preferred over bitmap
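
One straightforward way to encode a preference list like this is to score each track and keep the highest scorer. The sketch below is illustrative only; the `Track` struct and the weights are assumptions, not Anytron's internals.

```rust
/// Hypothetical description of a subtitle track found in a container.
struct Track {
    language: Option<String>, // e.g. "eng"
    default: bool,            // marked as the default track
    hearing_impaired: bool,   // SDH/CC
    forced: bool,             // forced-only (signs & songs)
    text_based: bool,         // SRT/ASS rather than bitmap (PGS/VobSub)
}

/// Score a track per the preference list above; higher weights dominate lower ones.
fn score(t: &Track) -> i32 {
    let mut s = 0;
    if t.language.as_deref() == Some("eng") { s += 16; }
    if t.default { s += 8; }
    if !t.hearing_impaired { s += 4; }
    if !t.forced { s += 2; }
    if t.text_based { s += 1; }
    s
}

fn main() {
    let tracks = vec![
        Track { language: Some("eng".into()), default: false, hearing_impaired: true, forced: false, text_based: true },
        Track { language: Some("eng".into()), default: true, hearing_impaired: false, forced: false, text_based: true },
    ];
    // Highest score wins: here, the default non-SDH English track.
    let best = tracks.iter().max_by_key(|t| score(t)).unwrap();
    println!("picked default track? {}", best.default);
}
```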
Anytron can also be used as a library:
```rust
use anytron::config::Config;
use anytron::discovery::Scanner;
use anytron::subtitle::parse_file;
use std::path::Path;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Scan for episodes
    let scanner = Scanner::new(Path::new("./my_show"));
    let episodes = scanner.scan()?;

    // Parse subtitles
    for episode in &episodes {
        if let Some(sub_path) = &episode.subtitle_path {
            let entries = parse_file(sub_path)?;
            println!("Found {} subtitle entries", entries.len());
        }
    }
    Ok(())
}
```

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
This project is licensed under the MIT License - see the LICENSE file for details.