Yingying Deng, Fan Tang, Weiming Dong, Haibin Huang, Chongyang Ma, Changsheng Xu
Overall structure of MCCNet.
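For readers skimming the figure, here is a minimal, hypothetical PyTorch sketch of how the pieces fit together: a VGG encoder extracts content and style features, the MCC module fuses them, and a decoder reconstructs the stylized frame. The class and argument names are placeholders, not the actual modules defined in this repository.

```python
# Hypothetical sketch of the overall pipeline suggested by the figure.
# Module names are placeholders, not the classes used in this repo.
import torch
import torch.nn as nn

class StylizationPipeline(nn.Module):
    def __init__(self, encoder: nn.Module, mcc_module: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder        # pretrained VGG feature extractor (frozen)
        self.mcc_module = mcc_module  # multi-channel correlation fusion
        self.decoder = decoder        # mirrors the encoder to reconstruct an image

    def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        f_c = self.encoder(content)        # content features
        f_s = self.encoder(style)          # style features
        f_cs = self.mcc_module(f_c, f_s)   # fuse features via channel correlation
        return self.decoder(f_cs)          # stylized output frame
```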
- Python 3.6
- PyTorch 1.4.0
- PIL, numpy, scipy
- tqdm
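Optionally, a quick import check (assuming Pillow provides the PIL package) can confirm the environment before running the scripts below:

```python
# Optional sanity check for the dependencies listed above; versions are only
# printed for reference, newer releases than those listed may also work.
import torch, PIL, numpy, scipy, tqdm

for name, mod in [("torch", torch), ("PIL", PIL), ("numpy", numpy),
                  ("scipy", scipy), ("tqdm", tqdm)]:
    print(name, getattr(mod, "__version__", "unknown"))
```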
Pretrained models: vgg-model, decoder, and MCC_module (see above).
Please download them and put them into the folder ./experiments/.
python test_video.py --content_dir input/content/ --style_dir input/style/ --output out
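For orientation, the per-frame inference performed by a video test script is roughly the following; the helper functions, the pipeline object, and the alphabetical frame ordering are illustrative assumptions rather than the actual contents of test_video.py:

```python
# Rough illustration of per-frame video stylization; `pipeline` is assumed to be
# an already-loaded model such as the StylizationPipeline sketched above.
import os
import numpy as np
import torch
from PIL import Image

def to_tensor(img: Image.Image) -> torch.Tensor:
    arr = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
    return torch.from_numpy(arr).permute(2, 0, 1).unsqueeze(0)  # 1x3xHxW

def to_image(t: torch.Tensor) -> Image.Image:
    arr = t.squeeze(0).permute(1, 2, 0).clamp(0, 1).mul(255).byte().cpu().numpy()
    return Image.fromarray(arr)

@torch.no_grad()
def stylize_frames(pipeline, content_dir, style_path, output_dir):
    os.makedirs(output_dir, exist_ok=True)
    style = to_tensor(Image.open(style_path))
    for name in sorted(os.listdir(content_dir)):   # process frames in order
        frame = to_tensor(Image.open(os.path.join(content_dir, name)))
        out = pipeline(frame, style)
        to_image(out).save(os.path.join(output_dir, name))
```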
The training set is WikiArt, collected from the WikiArt website.
The testing set is COCO2014.
python train.py --style_dir ../../datasets/Images --content_dir ../../datasets/train2014 --save_dir models/ --batch_size 4
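As context for the command above, one optimization step typically combines a content loss and a style loss computed on VGG features (the paper adds further terms, e.g. for temporal coherence); the sketch below is only an assumption-laden illustration, with placeholder loss weights and hypothetical pipeline/vgg_features helpers.

```python
# Minimal sketch of one training step: optimize the stylization network with
# content + style losses on VGG features. Loss weights, feature layers, and the
# `pipeline`/`vgg_features` helpers are illustrative assumptions.
import torch
import torch.nn.functional as F

def mean_std(feat: torch.Tensor, eps: float = 1e-5):
    # per-channel statistics commonly used for style losses
    var = feat.var(dim=(2, 3), keepdim=True) + eps
    return feat.mean(dim=(2, 3), keepdim=True), var.sqrt()

def train_step(pipeline, vgg_features, optimizer, content, style,
               content_weight=1.0, style_weight=10.0):
    stylized = pipeline(content, style)
    f_out = vgg_features(stylized)   # list of feature maps from several VGG layers
    f_c = vgg_features(content)
    f_s = vgg_features(style)

    content_loss = F.mse_loss(f_out[-1], f_c[-1])
    style_loss = 0.0
    for fo, fs in zip(f_out, f_s):
        mo, so = mean_std(fo)
        ms, ss = mean_std(fs)
        style_loss = style_loss + F.mse_loss(mo, ms) + F.mse_loss(so, ss)

    loss = content_weight * content_loss + style_weight * style_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```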
If you use our work in your research, please cite it using the following BibTeX entries. Thank you! Paper link: pdf (coming soon).
@inproceedings{deng:2020:arbitrary,
  title={Arbitrary Video Style Transfer via Multi-Channel Correlation},
  author={Deng, Yingying and Tang, Fan and Dong, Weiming and Huang, Haibin and Ma, Chongyang and Xu, Changsheng},
  booktitle={AAAI},
  year={2021}
}
@article{10008203,
  author={Kong, Xiaoyu and Deng, Yingying and Tang, Fan and Dong, Weiming and Ma, Chongyang and Chen, Yongyong and He, Zhenyu and Xu, Changsheng},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  title={Exploring the Temporal Consistency of Arbitrary Style Transfer: A Channelwise Perspective},
  year={2023},
  volume={},
  number={},
  pages={1-15},
  doi={10.1109/TNNLS.2022.3230084}
}