This repository contains an agent-assisted workflow for editing short-form videos (Reels/Shorts) from recorded footage.
The pipeline is designed to:
- transcribe source footage,
- analyze scenes, silence, and audio rhythm,
- generate an edit plan (EDL),
- render a final vertical video with captions, optional b-roll, and optional music.
Include source code, templates, docs, and small non-sensitive examples:

- `viral-shortform-editor/` (scripts, templates, docs)
- `sync_audio.py`
- `PLAN.md`
- `skills-lock.json`
- `examples/*.analysis.json`, `examples/*.edl.json`, `examples/*.transcript.json`, `examples/*.plan_context.md`
- `fonts/` only if licenses allow redistribution
Do not commit local-only or heavy/generated assets:

- raw footage (`input/`, `BROLL/`, personal videos)
- music files you do not have rights to redistribute (`MUSIC/`)
- render outputs (`output/`, generated MP4 files)
- local Python environments (`venv/`)
- local tool state (`.claude/`, `.agents/`, logs, caches)
- secrets (`.env`, API keys, tokens)
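A `.gitignore` covering the paths above might look like this sketch (adjust to your actual layout):

```
input/
BROLL/
MUSIC/
output/
venv/
.claude/
.agents/
.env
# drop this line if you commit short licensed demo clips
*.mp4
```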
If you want to provide sample media, use short, licensed demo clips only.
Requirements:

- macOS/Linux (also works on Windows with small path adjustments)
- Python 3.11+
- Node.js 22+
- ffmpeg and ffprobe in PATH
Python packages used by this repo:
- `faster-whisper`
- `scenedetect[opencv]`
- `librosa`
- `ffmpeg-python`
- `jinja2`
- Clone the repository.
- Create and activate a virtual environment.
- Install Python dependencies.
- Ensure Node.js and ffmpeg are installed.
- Install the HyperFrames skill once in the project directory.
Suggested commands:

```shell
python3 -m venv venv
source venv/bin/activate
pip install -U pip
pip install faster-whisper "scenedetect[opencv]" librosa ffmpeg-python jinja2

# in project root (installs skills in .claude/)
npx skills add heygen-com/hyperframes
npx hyperframes doctor
```

Key scripts and templates:

- `viral-shortform-editor/scripts/transcribe.py`: word-level transcription
- `viral-shortform-editor/scripts/scene_detect.py`: scene/silence/audio analysis
- `viral-shortform-editor/scripts/plan_edit.py`: builds planning context for the edit
- `viral-shortform-editor/scripts/render_composition.py`: generates the HyperFrames composition, lints it, and renders
- `viral-shortform-editor/templates/composition.html.j2`: composition template
- `viral-shortform-editor/references/style-guide.md`: editing and caption style reference
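To give a feel for what the audio analysis is doing: silence detection conceptually reduces to finding spans where amplitude stays below a threshold. A toy sketch in pure Python (not the actual `scene_detect.py` logic, which analyzes real audio with librosa):

```python
def silent_spans(samples, threshold=0.05, min_len=3):
    """Return (start, end) index spans where |amplitude| stays below threshold
    for at least min_len consecutive samples. Toy illustration only."""
    spans, start = [], None
    for i, s in enumerate(samples):
        if abs(s) < threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                spans.append((start, i))
            start = None
    # close a silent span that runs to the end of the signal
    if start is not None and len(samples) - start >= min_len:
        spans.append((start, len(samples)))
    return spans

# samples 0..3 are quiet, 4..6 are loud, 7..9 are quiet again
samples = [0.0, 0.01, 0.02, 0.0, 0.5, 0.6, 0.4, 0.0, 0.0, 0.01]
print(silent_spans(samples))  # [(0, 4), (7, 10)]
```

The real pipeline maps spans like these to timestamps so the edit plan can cut dead air.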
Prompt Claude to edit your video.
You can tell it to add cool motion graphics and effects in certain spots.
Claude will use the hyperframes skill and produce a FIRE vid for you.
Here's an example prompt:
```
Edit the video in @input/video.mp4 using the hyperframes skill and the viral short form editor skill
```

This assumes your source video is at `input/video.mp4`.
Or if you really want, you can use the workflow manually.
Assume your source video is at input/video.mp4.
- Transcribe:

  ```shell
  python viral-shortform-editor/scripts/transcribe.py input/video.mp4
  ```

- Analyze scenes/audio:

  ```shell
  python viral-shortform-editor/scripts/scene_detect.py input/video.mp4
  ```

- Build planning context:

  ```shell
  python viral-shortform-editor/scripts/plan_edit.py input/video.mp4 \
    --broll-dir BROLL \
    --music-dir MUSIC
  ```

  This writes `input/video.plan_context.md`.

- Create the EDL with your coding agent: ask it to read `input/video.plan_context.md` and write `input/video.edl.json`, then review any low-confidence cuts.

- Render:

  ```shell
  python viral-shortform-editor/scripts/render_composition.py input/video.edl.json
  ```

  Output is typically written as `input/video_edited.mp4`.
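For a sense of what the last two steps exchange, here is a hypothetical minimal EDL and the kind of per-clip confidence check the renderer's gate applies. The field names (`clips`, `start`, `end`, `confidence`) are assumptions for illustration, not the repo's actual schema:

```python
# Hypothetical minimal EDL; the real schema is whatever the repo's
# scripts read and write. Field names here are illustrative assumptions.
edl = {
    "source": "input/video.mp4",
    "clips": [
        {"start": 0.0, "end": 4.2, "confidence": 0.97},
        {"start": 6.8, "end": 12.5, "confidence": 0.78},  # would trip the gate
    ],
}

def low_confidence_clips(edl, threshold=0.85):
    """Return clips that a confidence gate like the renderer's would flag."""
    return [c for c in edl["clips"] if c["confidence"] < threshold]

print(low_confidence_clips(edl))
```

Anything this returns corresponds to the cuts you would fix in the EDL before rerunning, or deliberately wave through with `--approve-all`.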
The renderer stops if any clip's confidence is below the threshold (default 0.85). You can:

- fix cut timing/confidence in the EDL and rerun, or
- bypass the gate intentionally with:

```shell
python viral-shortform-editor/scripts/render_composition.py input/video.edl.json --approve-all
```

Before pushing:
- verify no raw/private media is staged,
- verify `.env` and credentials are excluded,
- verify generated render outputs are excluded,
- verify font/music licenses if including any assets,
- include at least one minimal JSON example for reproducibility.
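The first three checks can be partially automated. A sketch, assuming you feed it the output of `git diff --cached --name-only`; the blocked paths are the ones listed earlier:

```python
# Paths that should never be committed, per the do-not-commit list above.
BLOCKED_PREFIXES = ("input/", "BROLL/", "MUSIC/", "output/", "venv/",
                    ".claude/", ".agents/")
BLOCKED_NAMES = {".env"}

def disallowed(staged_paths):
    """Return staged paths that match the do-not-commit list."""
    return [
        p for p in staged_paths
        if p.startswith(BLOCKED_PREFIXES) or p.rsplit("/", 1)[-1] in BLOCKED_NAMES
    ]

staged = ["README.md", "input/raw.mp4", ".env", "examples/demo.edl.json"]
print(disallowed(staged))  # ['input/raw.mp4', '.env']
```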
Planned improvements:

- Add `requirements.txt` for one-command dependency install.
- Add a small `sample/` folder with tiny, licensed demo assets.
- Add CI checks (lint + a basic smoke test of the scripts).
This repository is licensed under the Apache License 2.0.
See LICENSE for the full text.
Dependency note:
- HyperFrames is Apache-2.0 licensed.
- If you only depend on HyperFrames, you are generally fine to publish this repo.
- If you copy/modify Apache-2.0 source from HyperFrames into this repo, keep required notices and attribution for those files.