
DyNCA: Real-Time Dynamic Texture Synthesis Using Neural Cellular Automata (CVPR 2023)

[arXiv] [Project Website]

This is the official implementation of DyNCA, a framework for real-time and controllable dynamic texture synthesis. Our model learns to synthesize dynamic texture videos such that:

  • The frames of the video resemble a given target appearance
  • The succession of the frames induces the target motion, which can be specified either as a video or as a motion vector field

Run in Google Colab

DyNCA can learn the target motion either from a video or from a motion vector field. The table below lists the corresponding notebook for each training mode.

| Target Motion | Colab Notebook | Jupyter Notebook |
| --- | --- | --- |
| VectorField | Open In Colab | `vector_field_motion.ipynb` |
| Video | Open In Colab | `video_motion.ipynb` |
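If you prefer not to use Colab, the notebooks can also be opened locally. The commands below are a minimal sketch, not official instructions (the Run Locally section is still a TODO); they assume `git` and Jupyter are already installed, and the repository URL is inferred from the repository name.

```shell
# Clone the repository (URL assumed from the repo name Julien-Sahli/DyNCA-CLIP)
git clone https://github.com/Julien-Sahli/DyNCA-CLIP.git
cd DyNCA-CLIP

# Open one of the training notebooks, e.g. the vector-field motion mode
jupyter notebook vector_field_motion.ipynb
```

The same pattern applies to `video_motion.ipynb` for video-guided training.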

Run Locally

TODO

Installing Requirements

TODO

Running the training scripts

TODO

Visualizing with Streamlit

Citation

If you make use of our work, please cite our paper:

@InProceedings{pajouheshgar2022dynca,
  title     = {DyNCA: Real-Time Dynamic Texture Synthesis Using Neural Cellular Automata},
  author    = {Pajouheshgar, Ehsan and Xu, Yitao and Zhang, Tong and S{\"u}sstrunk, Sabine},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2023},
}
