
STAG4D: Spatial-Temporal Anchored Generative 4D Gaussians

1Nanjing University 2CASIA 3Fudan University
*equal contribution +corresponding author

⚙️ Installation

pip install -r requirements.txt

git clone --recursive https://github.com/slothfulxtx/diff-gaussian-rasterization.git
pip install ./diff-gaussian-rasterization

pip install ./simple-knn
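
For a fresh environment, the full setup order looks roughly like the following (a sketch only; it assumes a CUDA-enabled PyTorch is already installed and that simple-knn ships inside the repository checkout):

# clone the repository and install the Python dependencies
git clone https://github.com/zeng-yifei/STAG4D.git
cd STAG4D
pip install -r requirements.txt

# build the differentiable Gaussian rasterizer
git clone --recursive https://github.com/slothfulxtx/diff-gaussian-rasterization.git
pip install ./diff-gaussian-rasterization

# build the KNN extension (assumed to be bundled in the repo checkout)
pip install ./simple-knn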

Video-to-4D

To generate the examples shown on the project page, download the dataset from Google Drive, place it in the dataset folder, and run:

python main.py --config configs/stag4d.yaml path=dataset/minions save_path=minions

# use gui=True to turn on the visualizer (recommended)
python main.py --config configs/stag4d.yaml path=dataset/minions save_path=minions gui=True

To generate the spatial-temporally consistent data from scratch, place your RGBA data in the following layout:

├── dataset
│   ├── your_data
│   │   ├── 0_rgba.png
│   │   ├── 1_rgba.png
│   │   ├── 2_rgba.png
│   │   ├── ...
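
If your footage is not matted yet, a small helper along these lines can produce the layout above (a sketch only; prepare_rgba.py, the folder names, and the use of rembg for matting are illustrative assumptions, not part of this repo):

# prepare_rgba.py -- hypothetical helper, not part of this repository.
# Mats a folder of RGB frames and writes them as 0_rgba.png, 1_rgba.png, ...
import glob
import os

from PIL import Image
from rembg import remove  # off-the-shelf background removal (assumed choice)

src_dir = "raw_frames"           # input RGB frames, sorted by filename
dst_dir = "dataset/your_data"    # output folder in the layout shown above
os.makedirs(dst_dir, exist_ok=True)

for i, fname in enumerate(sorted(glob.glob(os.path.join(src_dir, "*.png")))):
    rgba = remove(Image.open(fname))                   # RGBA image, background removed
    rgba.save(os.path.join(dst_dir, f"{i}_rgba.png"))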

and then run

python scripts/gen_mv.py --path dataset/your_data --pipeline_path xxx/guidance/zero123pp

python main.py --config configs/stag4d.yaml path=data_path save_path=saving_path gui=True

To visualize the result, replace main.py with visualize.py; the output will be saved under the valid/xxx path, e.g.:

python visualize.py --config configs/stag4d.yaml path=dataset/minions save_path=minions

Text-to-4D

For text-to-4D generation, we recommend using SDXL and SVD to generate a reasonable video. Then, after matting the video, use the commands above to produce the 4D result. (This pipeline consists of several independent parts and is fairly complex, so we may release the fully integrated workflow later if possible.)
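
As a rough illustration, the image and video generation steps can be done with the diffusers library (a sketch under assumptions: the model IDs, prompt, and parameters are illustrative, and the matting step described above is still required afterwards):

import torch
from diffusers import StableDiffusionXLPipeline, StableVideoDiffusionPipeline
from diffusers.utils import export_to_video

# 1) text -> image with SDXL
sdxl = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
image = sdxl("a cartoon dinosaur, full body, centered, plain background").images[0]
image = image.resize((1024, 576))  # SVD's expected input resolution

# 2) image -> video with SVD
svd = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16
).to("cuda")
frames = svd(image, decode_chunk_size=4).frames[0]
export_to_video(frames, "generated.mp4", fps=7)

# the resulting frames still need to be matted into N_rgba.png before running main.py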

Citation

If you find our work useful for your research, please consider citing our paper as well as Consistent4D:

@article{zeng2024stag4d,
      title={STAG4D: Spatial-Temporal Anchored Generative 4D Gaussians}, 
      author={Yifei Zeng and Yanqin Jiang and Siyu Zhu and Yuanxun Lu and Youtian Lin and Hao Zhu and Weiming Hu and Xun Cao and Yao Yao},
      year={2024},
      eprint={2403.14939},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@article{jiang2023consistent4d,
      title={Consistent4D: Consistent 360{\deg} Dynamic Object Generation from Monocular Video}, 
      author={Yanqin Jiang and Li Zhang and Jin Gao and Weimin Hu and Yao Yao},
      year={2023},
      eprint={2311.02848},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgment

This repo is built on DreamGaussian and Zero123plus. Thanks to all the authors for their great work.
