Materials for human action generation
Diffusion models | VQVAE
- Diffusion Model: Detailed Explanation of the Theory and Complete PyTorch Code (Lecture 54) [Bilibili]
- Awesome-Diffusion-Models [Github]
- Understanding Diffusion Models: A Unified Perspective [Paper]
- Awesome AI image synthesis [Github]
- Human Motion Capture [Github]
- Diffusion Models: A Comprehensive Survey of Methods and Applications [Paper]
DDPM: Denoising Diffusion Probabilistic Models [Paper]
DDIM: Denoising Diffusion Implicit Models (ICLR2021) [Paper]
Diffusion Models Beat GANs on Image Synthesis [Paper]
Improved Denoising Diffusion Probabilistic Models (ICML2021) [Paper] [Code]
VQDiffusion: Vector Quantized Diffusion Model for Text-to-Image Synthesis (CVPR2022) [Code] [Paper] [Video]
Addresses the slow sampling of diffusion models, which need thousands of iterations to produce a final result; the proposed improvement reduces the image size used during inference by diffusing in a VQVAE latent space.
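The speedup argument above boils down to running the diffusion process on a small latent grid instead of full-resolution pixels. A minimal sketch (an assumption for illustration, not the VQ-Diffusion implementation) of the standard Gaussian closed-form forward step q(x_t | x_0), applied to a hypothetical low-resolution VQVAE latent:

```python
import numpy as np

def q_sample(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a Gaussian DDPM."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]  # cumulative product \bar{alpha}_t
    noise = rng.standard_normal(x0.shape)
    # x_t = sqrt(alpha_bar) * x_0 + sqrt(1 - alpha_bar) * eps
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)      # the linear schedule from DDPM
latent = rng.standard_normal((32, 32, 4))  # hypothetical VQVAE latent, far
                                           # smaller than a 256x256x3 image
x_t = q_sample(latent, t=500, betas=betas, rng=rng)
```

Every denoising network evaluation then operates on the 32x32 latent rather than the full image, which is where the inference savings come from.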
Tackling the Generative Learning Trilemma with Denoising Diffusion GANs (ICLR2022) [Paper] [Code] [Project]
Diffusion Models for Video Prediction and Infilling [Paper]
MCVD: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation [Code] [Paper]
Conditions on past and/or future video frames to predict the target frames.
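The masked-conditioning idea can be sketched as follows (a simplified illustration under my own assumptions, not the MCVD code): a boolean mask marks which frames are given as conditioning, and noise is only injected into the frames to be generated.

```python
import numpy as np

def mask_frames(video, known, rng):
    """video: (T, H, W) array; known: boolean (T,), True = conditioning frame.
    Returns the video with unknown frames replaced by Gaussian noise."""
    noise = rng.standard_normal(video.shape)
    m = known[:, None, None].astype(video.dtype)  # broadcast mask over H, W
    # Keep conditioning frames clean; start unknown frames from pure noise.
    return m * video + (1.0 - m) * noise

rng = np.random.default_rng(1)
video = np.ones((8, 16, 16))  # toy clip of 8 frames
known = np.array([True, True, False, False, False, False, True, True])
x = mask_frames(video, known, rng)
```

By choosing which frames are marked known (past only, future only, or both ends), the same model covers prediction, generation, and interpolation.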
Stochastic Trajectory Prediction via Motion Indeterminacy Diffusion (CVPR2022) [Code] [Paper]
Predicts trajectories with a diffusion model.
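Applying diffusion to trajectories means denoising a small set of 2D waypoints instead of an image. A hedged sketch (not the MID implementation; `eps_model` stands in for the learned noise-prediction network and is a dummy here) of one reverse DDPM step over waypoints:

```python
import numpy as np

def p_step(x_t, t, betas, eps_model, rng):
    """One ancestral sampling step x_t -> x_{t-1} for a Gaussian DDPM."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    eps = eps_model(x_t, t)  # predicted noise (dummy network below)
    # Posterior mean: (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
    if t == 0:
        return mean  # final step is deterministic
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

rng = np.random.default_rng(2)
betas = np.linspace(1e-4, 0.02, 100)
traj = rng.standard_normal((12, 2))  # 12 waypoints in (x, y), pure noise
traj = p_step(traj, t=50, betas=betas,
              eps_model=lambda x, t: np.zeros_like(x), rng=rng)
```

Iterating this step from t = T-1 down to 0, with a real network in place of the dummy, turns noise into a plausible trajectory; sampling several times yields the diverse futures the paper targets.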
- BABEL: Bodies, Action and Behavior with English Labels [Project] [Paper] [Code]
- KIT Motion-Language Dataset [Project]
- HumanML3D: Generating Diverse and Natural 3D Human Motions from Text (CVPR2022) [Code] [Paper]
Generating Diverse and Natural 3D Human Motions from Text (CVPR2022) [Code] [Paper]
TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts (ECCV2022) [Code] [Paper]
TEMOS: Generating diverse human motions from textual descriptions (ECCV 2022 (Oral)) [Code] [Paper]
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model [Paper] [Project]
FLAME: Free-form Language-based Motion Synthesis & Editing [Paper]
TEACH: Temporal Action Composition for 3D Humans [Paper] [Project]
- AIST++: AI Choreographer: Music Conditioned 3D Dance Generation with AIST++ [Paper] [Project] [Code]
- BRACE: BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis [Paper] [Code]
- Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning (ICLR2021) [Dataset]
- ChoreoMaster: Choreography-Oriented Music-Driven Dance Synthesis (SIGGRAPH2021) [Dataset]
A Brand New Dance Partner: Music-Conditioned Pluralistic Dancing Controlled by Multiple Dance Genres (CVPR2022) [Code] [Paper]
Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory (CVPR2022) [Code] [Paper] [Video]
Collaborative Neural Rendering using Anime Character Sheets [Paper] [Code]
Diverse Dance Synthesis via Keyframes with Transformer Controllers [Paper] [Code]
ChoreoNet: Towards Music to Dance Synthesis with Choreographic Action Unit [Paper] [Dataset]
ChoreoMaster: Choreography-Oriented Music-Driven Dance Synthesis (SIGGRAPH2021) [Paper] [Github] [Dataset] [Project]
Music-driven Dance Regeneration with Controllable Key Pose Constraints [Paper]
DanceFormer: Music Conditioned 3D Dance Generation with Parametric Motion Transformer (AAAI2022) [Paper]
Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning (ICLR2021) [Paper] [Github] [Dataset] [Slides]
Dancing to Music (NeurIPS2019) [Paper] [Github]
Based on the survey in Yupei's repo.