This is a video generation API.
Updated Jul 17, 2024 - Python
Official Code for "Kinetic Typography Diffusion Model (ECCV 2024)"
dgenerate is a command-line tool for generating images and animation sequences using Stable Diffusion and related techniques.
High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance
[CVPR 2024] Intelligent Grimm - Open-ended Visual Storytelling via Latent Diffusion Models
ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation
[CVPR2024 Highlight] VBench - We Evaluate Video Generation
A python tool that uses GPT-4, FFmpeg, and OpenCV to automatically analyze videos, extract the most interesting sections, and crop them for an improved viewing experience.
[ECCV 2024] Be-Your-Outpainter https://arxiv.org/abs/2403.13745
VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance
OmniTokenizer: one model and one weight for image-video joint tokenization.
Code for FreeTraj, a tuning-free method for trajectory-controllable video generation
Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation
Code for Paper "UniAnimate: Taming Unified Video Diffusion Models for Consistent Human Image Animation".
A program that allows you to create short videos.
[ICLR24] Official implementation of the paper “MagicDrive: Street View Generation with Diverse 3D Geometry Control”
A one-stop library to standardize the inference and evaluation of all the conditional video generation models.
[ECCV 2024] EDTalk - Official PyTorch Implementation
Fine-Grained Open Domain Image Animation with Motion Guidance