tayor/clipping-cli

Clipping CLI

A TypeScript CLI for turning long-form videos into short-form clip packages and a stitched final montage for TikTok, Instagram Reels, and YouTube Shorts.

It is published to the public npm registry as clipping-cli, runs via npx, and supports global installation with npm install -g.

What it uses

Purpose                          Provider / tool        Model / API
Transcript-driven clip planning  Cloudflare Workers AI  @cf/moonshotai/kimi-k2.6
Transcript reuse                 Sidecar subtitles      .vtt / .srt
Local transcription fallback     Python backend         faster-whisper or openai-whisper (default model: small)
Source acquisition               Local binary           yt-dlp
Rendering                        Local binary           ffmpeg / ffprobe
Burned subtitle styling          Local render step      TikTok-style ASS subtitles with active-word highlighting
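
Active-word highlighting in ASS subtitles is typically produced with karaoke timing tags. A minimal sketch of that idea, assuming per-word durations are available; this illustrates the general technique, not the CLI's actual renderer:

```typescript
// Minimal sketch: ASS karaoke tags (\k) take durations in centiseconds and
// advance the highlight one word at a time. Illustrative only.
interface TimedWord {
  text: string;
  duration: number; // seconds
}

function karaokeLine(words: TimedWord[]): string {
  return words
    .map((w) => `{\\k${Math.round(w.duration * 100)}}${w.text}`)
    .join(" ");
}
```

Each \k value is in centiseconds, so a 0.5 s word becomes {\k50} followed by its text.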

Workflow

  1. acquire source media
  2. reuse subtitles or transcribe locally
  3. send transcript + candidate windows to Kimi 2.6
  4. let Kimi choose clips and montage order
  5. render per-clip outputs
  6. stitch a final montage
  7. burn TikTok-style subtitles into clip and final outputs

Requirements

  • Node.js 20.18+
  • ffmpeg and ffprobe on your PATH
  • yt-dlp on your PATH for URL intake
  • Cloudflare Account ID and Workers AI API token for Kimi planning
  • Optional: uv plus the project .venv for local transcription when sidecar subtitles are unavailable

Install

NPX

npx clipping-cli init

Global install

npm install -g clipping-cli
clipping-cli init

Local development

npm install
uv venv
uv pip install -r requirements.txt
npm run build
npm run check

Python transcription venv

Sidecar .vtt / .srt subtitles are preferred when available. For the local ASR fallback, create a uv-managed virtual environment before invoking transcribe (or run --force-transcribe):

uv venv
uv pip install -r requirements.txt

The CLI resolves Python in this order:

  1. CLIPPER_PYTHON
  2. active VIRTUAL_ENV
  3. .venv/bin/python in the current working directory
  4. python3 from PATH
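
The order above can be sketched as a small pure function. The existence check is injected so the sketch stays self-contained; the CLI's internal implementation may differ:

```typescript
import * as path from "path";

interface Env {
  CLIPPER_PYTHON?: string;
  VIRTUAL_ENV?: string;
}

// Sketch of the documented resolution order, highest priority first.
function resolvePython(env: Env, cwd: string, exists: (p: string) => boolean): string {
  if (env.CLIPPER_PYTHON) return env.CLIPPER_PYTHON;                        // 1. explicit override
  if (env.VIRTUAL_ENV) return path.join(env.VIRTUAL_ENV, "bin", "python"); // 2. active venv
  const local = path.join(cwd, ".venv", "bin", "python");
  if (exists(local)) return local;                                          // 3. project .venv
  return "python3";                                                         // 4. PATH fallback
}
```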

Quick start

Initialize .env files:

clipping-cli init

Check your environment:

clipping-cli doctor

Run the full pipeline:

clipping-cli run "https://www.youtube.com/watch?v=dQw4w9WgXcQ" --auto-select 3 --emit-hardsub

This writes:

  • per-clip folders under work/<slug>/clips/
  • a stitched final video under work/<slug>/final/final.mp4
  • optional hard-sub final render under work/<slug>/final/final.hardsub.mp4

Commands

init

clipping-cli init [--force]

Creates:

  • .env.example
  • .env unless it already exists
  • requirements.txt unless it already exists

doctor

clipping-cli doctor

Checks:

  • ffmpeg / ffprobe
  • yt-dlp
  • configured Python executable / .venv
  • subtitle/ASS filter support in ffmpeg
  • faster-whisper / openai-whisper
  • Cloudflare account/token presence
  • configured Kimi model

acquire

clipping-cli acquire <source> [--work-dir <path>] [--subtitle-lang en]

transcribe

clipping-cli transcribe --work-dir <path> [options]

Options:

  • --subtitle <path>: explicit subtitle file
  • --engine <auto|faster-whisper|openai-whisper>
  • --model <name>: Whisper model name, default small
  • --force-transcribe
  • --word-timestamps

For local media, matching sidecar subtitles next to the source file are copied into the work tree automatically. Supported names include video.vtt, video.srt, video.en.vtt, and video.en.auto.vtt.
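
The matching rule can be sketched as a candidate-name generator. The helper name and exact search order are assumptions; only the name patterns come from the list above:

```typescript
// Generate sidecar subtitle filenames to look for next to a local source
// file, e.g. video.mp4 -> video.vtt, video.en.vtt, video.en.auto.vtt, ...
function sidecarCandidates(videoFile: string, lang: string): string[] {
  const base = videoFile.replace(/\.[^.]+$/, ""); // strip the media extension
  const names: string[] = [];
  for (const ext of ["vtt", "srt"]) {
    names.push(`${base}.${ext}`);
    names.push(`${base}.${lang}.${ext}`);
    names.push(`${base}.${lang}.auto.${ext}`);
  }
  return names;
}
```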

Outputs:

  • analysis/transcript.json
  • analysis/transcript.srt

analyze

clipping-cli analyze --work-dir <path> [options]

Kimi receives the transcript and deterministic candidate windows, then returns:

  • clip selection
  • titles
  • reasons
  • stitch order
  • montage title / summary

Outputs:

  • analysis/candidate_clips.json
  • analysis/selected_clips.json
  • analysis/stitch_plan.json
  • analysis/candidate-review.txt
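
As a reading aid, the returned fields might map onto shapes like the following. These interfaces and field names are illustrative assumptions, not the actual JSON schema of selected_clips.json or stitch_plan.json:

```typescript
// Hypothetical shapes for the planning result, derived from the field list
// above (selection, titles, reasons, stitch order, montage title / summary).
interface SelectedClip {
  start: number;  // seconds into the source (assumed field names)
  end: number;
  title: string;
  reason: string;
}

interface StitchPlan {
  order: number[]; // indices into the selected clips
  montageTitle: string;
  montageSummary: string;
}

const examplePlan: StitchPlan = {
  order: [2, 0, 1],
  montageTitle: "Example montage",
  montageSummary: "Illustrative only",
};
```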

package

clipping-cli package --work-dir <path> [--emit-hardsub]

Outputs:

  • per-clip clip.mp4
  • per-clip clip.hardsub.mp4 when enabled
  • per-clip clip.srt
  • per-clip clip.ass
  • stitched final/final.mp4
  • stitched final/final.hardsub.mp4 when enabled
  • stitched final/final.ass

run

clipping-cli run <source> [options]

Runs the full pipeline:

  • acquire
  • transcribe
  • Kimi analysis
  • render/package
  • stitched final montage

Environment variables

The npm package and binary are named clipping-cli; configuration variables keep the CLIPPER_* prefix for compatibility.

CLIPPER_WORKDIR=./work
CLOUDFLARE_ACCOUNT_ID=...
CLOUDFLARE_API_TOKEN=...
CLOUDFLARE_KIMI_MODEL=@cf/moonshotai/kimi-k2.6
CLOUDFLARE_KIMI_THINKING=false
CLIPPER_PYTHON=.venv/bin/python
CLIPPER_TRANSCRIPTION_ENGINE=auto
CLIPPER_TRANSCRIPTION_MODEL=small
CLIPPER_DEFAULT_PLATFORM=tiktok
CLIPPER_DEFAULT_MAX_CLIPS=5
CLIPPER_DEFAULT_MIN_SECONDS=25
CLIPPER_DEFAULT_MAX_SECONDS=60
CLIPPER_SUBTITLE_LANGUAGE=en
VIDEO_FPS=30
CAPTIONS_ENABLED=true
CAPTION_STYLE=tiktok
CAPTION_FONT_FACE=Arial
CAPTION_FONT_SIZE=72
CAPTION_FONT_COLOR=white
CAPTION_HIGHLIGHT_COLOR=green
CAPTION_STROKE_COLOR=black
CAPTION_STROKE_WIDTH=4.5
CAPTION_BOLD=true
CAPTION_SHADOW_DEPTH=0
CAPTION_POSITION=bottom_center
CAPTION_MAX_WORDS=6
CAPTION_MAX_CHARS=28
CAPTION_MAX_DURATION_SECONDS=4.2
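
As an illustration of how CAPTION_MAX_WORDS and CAPTION_MAX_CHARS could interact, here is a sketch of grouping transcript words into caption lines. This is an assumption about the approach, not the CLI's actual line splitter:

```typescript
// Group words into caption lines, starting a new line whenever adding the
// next word would exceed either the word cap or the character cap.
function chunkCaption(words: string[], maxWords: number, maxChars: number): string[] {
  const lines: string[] = [];
  let current: string[] = [];
  for (const word of words) {
    const candidate = [...current, word].join(" ");
    if (current.length > 0 && (current.length >= maxWords || candidate.length > maxChars)) {
      lines.push(current.join(" "));
      current = [word];
    } else {
      current.push(word);
    }
  }
  if (current.length > 0) lines.push(current.join(" "));
  return lines;
}
```

With the defaults above (6 words, 28 characters), whichever limit is hit first ends the caption line.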

Output layout

work/<slug>/
  source/
    source.json
    source.<ext>
  analysis/
    transcript.json
    transcript.srt
    candidate_clips.json
    selected_clips.json
    stitch_plan.json
    candidate-review.txt
    run-manifest.json
  clips/
    01-<clip-slug>/
      clip.mp4
      clip.hardsub.mp4
      clip.srt
      clip.ass
      metadata.json
      package.json
  final/
    concat.txt
    final.mp4
    final.hardsub.mp4
    final.ass
    final-manifest.json
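
A small helper sketch that reproduces the layout above for a given slug; purely illustrative, since the CLI builds these paths internally:

```typescript
import * as path from "path";

// Path to a per-clip folder: work/<slug>/clips/<NN>-<clip-slug>/
function clipDir(workRoot: string, slug: string, index: number, clipSlug: string): string {
  const prefix = String(index).padStart(2, "0"); // clips are numbered 01, 02, ...
  return path.join(workRoot, slug, "clips", `${prefix}-${clipSlug}`);
}

// Path to the stitched output: work/<slug>/final/final[.hardsub].mp4
function finalVideo(workRoot: string, slug: string, hardsub = false): string {
  const name = hardsub ? "final.hardsub.mp4" : "final.mp4";
  return path.join(workRoot, slug, "final", name);
}
```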

License

MIT.
