This is the official implementation of DyNCA, a framework for real-time and controllable dynamic texture synthesis. Our model learns to synthesize dynamic texture videos such that:
- The frames of the video resemble a given target appearance
- The succession of the frames reproduces the motion of a given target dynamic
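DyNCA is built on neural cellular automata (NCA), in which every pixel (cell) repeatedly perceives its neighborhood through fixed convolution filters and updates its state with a small learned network. The snippet below is a minimal, conceptual PyTorch sketch of such an update step, not the actual DyNCA architecture; the class name `MinimalNCA`, the channel count, and the filter choices are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MinimalNCA(nn.Module):
    """Bare-bones neural cellular automaton update (conceptual sketch, not DyNCA itself)."""

    def __init__(self, n_channels=12, hidden=96):
        super().__init__()
        # Fixed perception filters: identity, Sobel-x, Sobel-y, Laplacian.
        ident = torch.tensor([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0
        sobel_y = sobel_x.t()
        lap = torch.tensor([[1., 2., 1.], [2., -12., 2.], [1., 2., 1.]]) / 16.0
        kernels = torch.stack([ident, sobel_x, sobel_y, lap])  # (4, 3, 3)
        # One copy of each filter per state channel (depthwise convolution).
        self.register_buffer(
            "filters", kernels.repeat(n_channels, 1, 1).unsqueeze(1)
        )  # (4 * n_channels, 1, 3, 3)
        self.n_channels = n_channels
        # Per-cell update rule: a small MLP implemented with 1x1 convolutions.
        self.update = nn.Sequential(
            nn.Conv2d(4 * n_channels, hidden, 1),
            nn.ReLU(),
            nn.Conv2d(hidden, n_channels, 1, bias=False),
        )
        nn.init.zeros_(self.update[-1].weight)  # start as the identity update

    def forward(self, state, step_size=1.0):
        # state: (batch, n_channels, height, width)
        y = F.pad(state, (1, 1, 1, 1), mode="circular")  # toroidal boundary
        perception = F.conv2d(y, self.filters, groups=self.n_channels)
        return state + step_size * self.update(perception)


if __name__ == "__main__":
    nca = MinimalNCA()
    state = torch.zeros(1, 12, 64, 64)
    for _ in range(32):  # unroll the cellular automaton for a few steps
        state = nca(state)
    rgb = torch.sigmoid(state[:, :3])  # first channels can be read out as an image
    print(rgb.shape)  # torch.Size([1, 3, 64, 64])
```

In training, the unrolled frames would be compared against the target appearance and the target motion; repeatedly applying the learned update at inference time then produces the video in real time.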
DyNCA can learn the target motion either from a video or from a motion vector field. The table below lists the corresponding notebooks for the different training modes of DyNCA.
| Target Motion | Colab Notebook | Jupyter Notebook |
| --- | --- | --- |
| VectorField | vector_field_motion.ipynb | |
| Video | video_motion.ipynb | |
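To make the two target-motion modes concrete, the snippet below shows, under illustrative assumptions (resolution, tensor layout, and placeholder data), how a target could be specified either as a dense motion vector field or as a sequence of video frames. It is a conceptual sketch, not the notebooks' actual data-loading code.

```python
import torch

H, W = 128, 128

# Mode 1: target motion as a vector field, i.e. a 2D motion direction per pixel.
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
)
uniform_flow = torch.stack([torch.ones(H, W), torch.zeros(H, W)])  # rightward flow, (2, H, W)
circular_flow = torch.stack([-ys, xs])                              # rotation about the center, (2, H, W)

# Mode 2: target motion given implicitly by a video. During training, the motion between
# consecutive synthesized frames is compared against the motion between consecutive
# target frames (e.g. through an optic-flow based loss).
video = torch.rand(16, 3, H, W)                  # placeholder for 16 loaded RGB target frames
frame_pairs = list(zip(video[:-1], video[1:]))   # (previous frame, next frame) pairs
```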
If you make use of our work, please cite our paper:
```bibtex
@InProceedings{pajouheshgar2022dynca,
    title     = {DyNCA: Real-Time Dynamic Texture Synthesis Using Neural Cellular Automata},
    author    = {Pajouheshgar, Ehsan and Xu, Yitao and Zhang, Tong and S{\"u}sstrunk, Sabine},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2023},
}
```