KurbanIntelligenceLab/Motion-Vector-Learning

Temporal Realism Evaluation of Generated Videos Using Compressed-Domain Motion Vectors

A deep learning framework for classifying videos using motion vectors, pixel data, or both. Designed to distinguish between real videos and videos generated by different AI models (SVD, Pika, CogVideo, T2VZ, VideoCrafter2, etc.).

Overview

This repository bundles two tightly coupled components:

  • A motion-vector extractor built on top of ffmpeg, producing dense MV tensors plus metadata.
  • A PyTorch training pipeline that learns to distinguish real and AI-generated videos using motion vectors, RGB frames, or both.

Use the extractor to populate motion-vector datasets, then fine-tune or evaluate classification baselines with the provided scripts.

*Figure: Motion Vector Direction Distributions*

Installation

Requirements

  • Python 3.8+ (extraction tools verified on 3.8–3.10)
  • ffmpeg 4.4+ in PATH
  • CUDA-capable GPU (recommended)

Setup

git clone <repository-url>
cd Motion-Vector-Learning
pip install -r requirements.txt
# optional
wandb login

Dataset Structure

The project expects datasets in the following structure:

Data/
├── videos/
│   ├── vript/          # Real videos (class 0)
│   ├── hdvg/           # Real videos (class 0)
│   ├── cogvideo/       # CogVideo generated (class 1)
│   ├── svd/            # Stable Video Diffusion (class 2)
│   ├── pika/           # Pika generated (class 3)
│   ├── t2vz/           # Text-to-Video-Zero (class 4)
│   └── vc2/            # VideoCrafter2 (class 5)
└── motion_vectors/
    ├── vript/
    ├── hdvg/
    ├── cogvideo/
    ├── svd/
    ├── pika/
    ├── t2vz/
    └── vc2/

More details about the dataset, including the downloads we used, can be found here.
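Given the layout above, a dataset index can be built by mapping each subfolder to its class id. The authoritative mapping lives in the JSON files under configs/, so the hard-coded labels in this sketch are illustrative only:

```python
from pathlib import Path

# Labels as shown in the tree above; the mapping actually used in
# training comes from configs/, so treat this dict as a sketch.
CLASS_LABELS = {
    "vript": 0, "hdvg": 0,                 # real videos
    "cogvideo": 1, "svd": 2, "pika": 3,
    "t2vz": 4, "vc2": 5,                   # generated videos
}

def index_dataset(root):
    """Yield (video_path, label) pairs for every .mp4 under <root>/videos/."""
    root = Path(root)
    for name, label in CLASS_LABELS.items():
        # Missing class folders simply yield nothing.
        for video in sorted((root / "videos" / name).glob("*.mp4")):
            yield video, label
```

The same loop works for `motion_vectors/` by swapping the subdirectory and the `*.npy` suffix.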

Data Formats

  • Videos: .mp4
  • Motion vectors: .npy arrays of shape [T, H, W, 3], with channels dx, dy, and sign
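A quick way to sanity-check an extracted array and derive the quantities the direction-distribution figure is built from (a minimal sketch: the file path is hypothetical, and a random tensor stands in for real data):

```python
import numpy as np

# In practice: mv = np.load("Data/motion_vectors/svd/clip0001.npy")
# (hypothetical path). A random tensor stands in for one clip here.
mv = np.random.randn(16, 64, 64, 3).astype(np.float32)

T, H, W, C = mv.shape                     # C == 3 -> dx, dy, sign channels
dx, dy, sign = mv[..., 0], mv[..., 1], mv[..., 2]
magnitude = np.hypot(dx, dy)              # per-position motion magnitude
angle = np.degrees(np.arctan2(dy, dx))    # motion direction in [-180, 180]
```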

Motion Vector Extraction

Sample data and scripts live in MotionVectorExtractor/.

ffmpeg -version              # verify dependency
python MotionVectorExtractor/extract_mv.py \
  --data_root MotionVectorExtractor/Data/ \
  --out_root MotionVectorExtractor/TestOut/ \
  --override --keepFrames

Command help: python MotionVectorExtractor/extract_mv.py --help

Outputs per video include a motion-vector tensor, resolution metadata, frame types, timestamps, and optional visualization frames.
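For scripted pipelines, the extraction command above can be wrapped in Python. This sketch assumes only the flags shown in the example (`--data_root`, `--out_root`, `--override`, `--keepFrames`) and mirrors the `ffmpeg -version` dependency check:

```python
import shutil
import subprocess

def build_extract_cmd(data_root, out_root, keep_frames=True):
    """Build the extractor invocation; only the flags shown above are assumed."""
    cmd = ["python", "MotionVectorExtractor/extract_mv.py",
           "--data_root", str(data_root),
           "--out_root", str(out_root),
           "--override"]
    if keep_frames:
        cmd.append("--keepFrames")
    return cmd

def extract_motion_vectors(data_root, out_root, keep_frames=True):
    # Same precondition as `ffmpeg -version`: the binary must be on PATH.
    if shutil.which("ffmpeg") is None:
        raise RuntimeError("ffmpeg 4.4+ must be on PATH")
    subprocess.run(build_extract_cmd(data_root, out_root, keep_frames),
                   check=True)
```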

System Architecture

Quick Start

Training

Train a model using motion vectors only:

python run.py \
    --root_dir /path/to/motion_vectors \
    --classes_config configs/multi.json \
    --data mv \
    --model resnet18 \
    --epochs 50 \
    --batch_size 8 \
    --lr 1e-4 \
    --frames 16 \
    --pretrained

Train with combined modalities (motion vectors + pixels):

python run.py \
    --root_dir /path/to/videos /path/to/motion_vectors \
    --classes_config configs/multi.json \
    --data combined \
    --merge_strategy mvaf \
    --model resnet18 \
    --epochs 50 \
    --batch_size 4 \
    --pretrained

Configuration

  • Class mapping files live in configs/ (binary, multi-class, and per-generator variants).
  • Training scripts expect motion-vector arrays under Data/motion_vectors/ and optional RGB videos under Data/videos/.
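The exact schema of the files in configs/ is defined by run.py. Purely as an illustration, a multi-class folder-to-index mapping consistent with the dataset tree could be expressed and loaded like this (the pairing of names to indices is an assumption, not the repository's actual file):

```python
import json

# Hypothetical contents of a multi-class config; check configs/multi.json
# for the real schema before relying on this shape.
config = json.loads("""{
  "vript": 0, "hdvg": 0,
  "cogvideo": 1, "svd": 2, "pika": 3, "t2vz": 4, "vc2": 5
}""")

num_classes = len(set(config.values()))  # 6 in the multi-class setup
```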

Minimal Training Commands

# motion-vector only
python run.py \
  --root_dir /path/to/motion_vectors \
  --classes_config configs/multi.json \
  --data mv \
  --model resnet18

# combined RGB + MV
python run.py \
  --root_dir /path/to/videos /path/to/motion_vectors \
  --classes_config configs/multi.json \
  --data combined \
  --merge_strategy mvaf

Convenience wrappers (Scripts/run_mv.sh, Scripts/run_vid.sh, Scripts/run_combined.sh) provide ready-to-run presets. SLURM launchers are under Scripts/slurm/.

Utilities

  • visualize_motion_patterns.py: summarize motion statistics and heatmaps.
  • optimize_density_thresholds.py: tune MVAF thresholds.
  • MotionVectorExtractor/: contains CLI tools, sample assets, and logs produced during MV extraction.
