AI Video Editor (Agentic Workflow)

This repository contains an agent-assisted workflow for editing short-form videos (Reels/Shorts) from recorded footage.

The pipeline is designed to:

  • transcribe source footage,
  • analyze scenes, silence, and audio rhythm,
  • generate an edit plan (EDL),
  • render a final vertical video with captions, optional b-roll, and optional music.

What to include in the public repo

Include source code, templates, docs, and small non-sensitive examples:

  • viral-shortform-editor/ (scripts, templates, docs)
  • sync_audio.py
  • PLAN.md
  • skills-lock.json
  • examples/*.analysis.json, examples/*.edl.json, examples/*.transcript.json, examples/*.plan_context.md
  • fonts/ only if licenses allow redistribution

What NOT to include

Do not commit local-only or heavy/generated assets:

  • raw footage (input/, BROLL/, personal videos)
  • music files you do not have rights to redistribute (MUSIC/)
  • render outputs (output/, generated MP4 files)
  • local Python environments (venv/)
  • local tool state (.claude/, .agents/, logs, caches)
  • secrets (.env, API keys, tokens)

If you want to provide sample media, use short, licensed demo clips only.
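A .gitignore matching this policy might look like the following sketch (directory names are taken from the lists above; adjust to your actual layout, and loosen `*.mp4` if you ship licensed demo clips):

```
# local media and render outputs
input/
BROLL/
MUSIC/
output/
*.mp4

# environments and local tool state
venv/
.claude/
.agents/
*.log

# secrets
.env
```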

Requirements

  • macOS/Linux (also works on Windows with small path adjustments)
  • Python 3.11+
  • Node.js 22+
  • ffmpeg and ffprobe in PATH
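A quick way to confirm the external tools are available is a PATH check with Python's standard library (the tool names below mirror the requirements list):

```python
import shutil

def missing_tools(tools):
    """Return the subset of tool names not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# Example: missing_tools(["ffmpeg", "ffprobe", "node"]) returns an empty list
# when everything is installed; any names returned still need installing.
```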

Python packages used by this repo:

  • faster-whisper
  • scenedetect[opencv]
  • librosa
  • ffmpeg-python
  • jinja2

Setup

  1. Clone the repository.
  2. Create and activate a virtual environment.
  3. Install Python dependencies.
  4. Ensure Node.js and ffmpeg are installed.
  5. Install the HyperFrames skill once in the project directory.

Suggested commands:

python3 -m venv venv
source venv/bin/activate
pip install -U pip
pip install faster-whisper "scenedetect[opencv]" librosa ffmpeg-python jinja2

# in project root (install skills in /.claude)
npx skills add heygen-com/hyperframes
npx hyperframes doctor

Project layout

  • viral-shortform-editor/scripts/transcribe.py: word-level transcription
  • viral-shortform-editor/scripts/scene_detect.py: scene/silence/audio analysis
  • viral-shortform-editor/scripts/plan_edit.py: builds planning context for the edit
  • viral-shortform-editor/scripts/render_composition.py: generates HyperFrames composition, lints, and renders
  • viral-shortform-editor/templates/composition.html.j2: composition template
  • viral-shortform-editor/references/style-guide.md: editing and caption style reference
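Under the layout above, the four script stages can be chained by a small driver. This sketch only builds the per-stage commands (script paths are assumed to match the layout; arguments beyond the video path are illustrative):

```python
from pathlib import Path

SCRIPTS = Path("viral-shortform-editor/scripts")

def pipeline_commands(video: str) -> list[list[str]]:
    """Build the per-stage commands for one source video, in run order."""
    stem = Path(video).with_suffix("")  # e.g. input/video.mp4 -> input/video
    return [
        ["python", str(SCRIPTS / "transcribe.py"), video],
        ["python", str(SCRIPTS / "scene_detect.py"), video],
        ["python", str(SCRIPTS / "plan_edit.py"), video],
        ["python", str(SCRIPTS / "render_composition.py"), f"{stem}.edl.json"],
    ]

# Each command can be run with subprocess.run(cmd, check=True). Note that the
# EDL between planning and rendering is written by your coding agent, not by
# these scripts.
```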

How To Use The Workflow (Recommended)

Prompt Claude to edit your video.

You can ask it to add motion graphics and effects at specific points.

Claude will use the HyperFrames skill and produce the finished edit for you.

Here's an example prompt:

Edit the video in @input/video.mp4 using the hyperframes skill and the viral short form editor skill

This assumes your source video is at input/video.mp4.

Alternatively, you can run each step of the workflow manually.

End-to-end workflow (Manual Use)

Assume your source video is at input/video.mp4.

  1. Transcribe:
python viral-shortform-editor/scripts/transcribe.py input/video.mp4
  2. Analyze scenes/audio:
python viral-shortform-editor/scripts/scene_detect.py input/video.mp4
  3. Build planning context:
python viral-shortform-editor/scripts/plan_edit.py input/video.mp4 \
  --broll-dir BROLL \
  --music-dir MUSIC

This writes input/video.plan_context.md.

  4. Create the EDL with your coding agent:
  • Ask your agent to read input/video.plan_context.md and write input/video.edl.json.
  • Review any low-confidence cuts.
  5. Render:
python viral-shortform-editor/scripts/render_composition.py input/video.edl.json

Output is typically written as input/video_edited.mp4.
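The exact EDL schema is defined by the scripts, but a minimal hand-written example (field names here are illustrative, not taken from the renderer) might look like:

```python
import json

# Hypothetical minimal EDL: a list of kept clips with source in/out points
# and a per-clip confidence score that the renderer's gate can check.
edl = {
    "source": "input/video.mp4",
    "clips": [
        {"start": 0.00, "end": 4.20, "confidence": 0.97},
        {"start": 6.50, "end": 11.10, "confidence": 0.88},
    ],
}

with open("video.edl.json", "w") as f:
    json.dump(edl, f, indent=2)
```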

Confidence gate

The renderer stops if any clip confidence is below threshold (default 0.85). You can:

  • fix cut timing/confidence in the EDL, then rerun, or
  • bypass the gate intentionally with:
python viral-shortform-editor/scripts/render_composition.py input/video.edl.json --approve-all
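The gate itself amounts to a simple filter over clip confidences. A sketch (EDL field names assumed, as above, not taken from the renderer):

```python
def low_confidence_clips(clips, threshold=0.85):
    """Return (index, clip) pairs whose confidence falls below the gate."""
    return [(i, c) for i, c in enumerate(clips)
            if c.get("confidence", 0.0) < threshold]

clips = [
    {"start": 0.0, "end": 4.2, "confidence": 0.97},
    {"start": 6.5, "end": 11.1, "confidence": 0.62},
]
flagged = low_confidence_clips(clips)
# The second clip falls below 0.85, so the renderer would stop for review
# unless the gate is bypassed with --approve-all.
```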

Publishing checklist for GitHub

Before pushing:

  • verify no raw/private media is staged,
  • verify .env and credentials are excluded,
  • verify generated render outputs are excluded,
  • verify font/music licenses if including any assets,
  • include at least one minimal JSON example for reproducibility.

Suggested next improvements

  • Add requirements.txt for one-command dependency install.
  • Add a small sample/ folder with tiny, licensed demo assets.
  • Add CI checks (lint + basic smoke test of scripts).

License

This repository is licensed under the Apache License 2.0.

See LICENSE for the full text.

Dependency note:

  • HyperFrames is Apache-2.0 licensed.
  • If you only depend on HyperFrames, you are generally fine to publish this repo.
  • If you copy/modify Apache-2.0 source from HyperFrames into this repo, keep required notices and attribution for those files.
