
Arbitrary Video Style Transfer via Multi-Channel Correlation

Yingying Deng, Fan Tang, Weiming Dong, Haibin Huang, Chongyang Ma, Changsheng Xu

Results

Visual comparisons of video style transfer results. The first row shows the stylized video frames. The second row shows heat maps that visualize the differences between adjacent video frames.

Framework

Overall structure of MCCNet.

Experiment

Requirements

  • Python 3.6
  • PyTorch 1.4.0
  • Pillow (PIL), NumPy, SciPy
  • tqdm
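
One possible way to install the dependencies above with pip (the PyTorch package is named torch on PyPI; versions other than PyTorch's are not pinned by the authors):

pip install torch==1.4.0 Pillow numpy scipy tqdm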

Testing

Pretrained models: vgg-model, decoder, MCC_module (see above).
Please download them and place them in the folder ./experiments/

python test_video.py --content_dir input/content/ --style_dir input/style/ --output out
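
If the checkpoints are not in ./experiments/, test_video.py will fail to load them. A quick sanity check such as the following can help; the checkpoint filenames below are assumptions, so match them to the files you actually downloaded:

# Sketch: check that the pretrained weights are in ./experiments/ before running
# test_video.py. The filenames here are assumptions, not the repo's actual names;
# adjust them to the files you downloaded.
from pathlib import Path

expected = ["vgg_normalised.pth", "decoder_iter_160000.pth", "mcc_module_iter_160000.pth"]
exp_dir = Path("./experiments")
missing = [name for name in expected if not (exp_dir / name).exists()]
if missing:
    print("Missing checkpoints in ./experiments/:", ", ".join(missing))
else:
    print("All pretrained weights found.")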

Training

The training set is WikiArt, collected from WIKIART.

The content set is COCO2014.

python train.py --style_dir ../../datasets/Images --content_dir ../../datasets/train2014 --save_dir models/ --batch_size 4
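
For reference, the command above points --content_dir and --style_dir at plain folders of images. The following is a minimal sketch of that kind of flat-folder dataset, not the repo's actual loader; it assumes torchvision is available and that random 256x256 crops are used, as is common in this line of work.

# Sketch (not the repo's actual loader): a minimal flat-folder image dataset of the
# kind train.py is expected to read from --content_dir and --style_dir.
from pathlib import Path
from PIL import Image
import torch.utils.data as data
from torchvision import transforms

class FlatFolderDataset(data.Dataset):
    def __init__(self, root, transform):
        exts = {".jpg", ".jpeg", ".png"}
        self.paths = sorted(p for p in Path(root).iterdir() if p.suffix.lower() in exts)
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        img = Image.open(self.paths[index]).convert("RGB")
        return self.transform(img)

train_tf = transforms.Compose([
    transforms.Resize(512),       # resize the shorter side
    transforms.RandomCrop(256),   # random 256x256 patch
    transforms.ToTensor(),
])
# content_set = FlatFolderDataset("../../datasets/train2014", train_tf)
# style_set   = FlatFolderDataset("../../datasets/Images", train_tf)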

Reference

If you use our work in your research, please cite it using the following BibTeX entries. Thank you ^.^ Paper link: [pdf] (coming soon)

@inproceedings{deng:2020:arbitrary,
  title={Arbitrary Video Style Transfer via Multi-Channel Correlation},
  author={Deng, Yingying and Tang, Fan and Dong, Weiming and Huang, Haibin and Ma, Chongyang and Xu, Changsheng},
  booktitle={AAAI},
  year={2021}
}
@ARTICLE{10008203,
  author={Kong, Xiaoyu and Deng, Yingying and Tang, Fan and Dong, Weiming and Ma, Chongyang and Chen, Yongyong and He, Zhenyu and Xu, Changsheng},
  journal={IEEE Transactions on Neural Networks and Learning Systems}, 
  title={Exploring the Temporal Consistency of Arbitrary Style Transfer: A Channelwise Perspective}, 
  year={2023},
  volume={},
  number={},
  pages={1-15},
  doi={10.1109/TNNLS.2022.3230084}}
