This project is based on https://github.com/sysu-imsl/EdgeGAN. We changed the training data by using sketches generated by PhotoSketch instead of edge maps. Our training data can be found on Google Drive, and the report of our results can be found on GitHub team-06.
- Download the pretrained model (trained with 14 classes) from Google Drive, and run:
mkdir -p outputs/edgegan
cd outputs/edgegan
cp <checkpoints download path> .
unzip checkpoints.zip
cd ../..
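After unzipping, the checkpoint files should sit under `outputs/edgegan/`. A small sanity check before moving on (the helper name and the non-empty-directory assumption are ours, not part of the original scripts):

```shell
# check_ckpt: report whether a checkpoint directory exists and is non-empty.
# (Hypothetical helper; the expected path follows the unzip steps above.)
check_ckpt() {
    if [ -d "$1" ] && [ -n "$(ls -A "$1" 2>/dev/null)" ]; then
        echo "found"
    else
        echo "missing"
    fi
}

check_ckpt outputs/edgegan   # should print "found" after a successful unzip
```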
- Generate images with models:
python -m edgegan.test --name=edgegan --dataroot=<root of dataset> --dataset=<dataset> --gpu=<gpuid> #(model trained with multi-classes)
python -m edgegan.test --name=<model_name> --dataroot=<root of dataset> --dataset=<dataset> --nomulticlasses --gpu=<gpuid> #(model trained with single class)
- By default, the outputs will be located at outputs/edgegan/test_output/
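When testing several single-class models, a small loop saves retyping the command. A sketch (the class names, dataroot, and `edgegan_<class>` model-name pattern are assumptions; `DRYRUN=1` only prints each command so you can inspect it first):

```shell
# Print (DRYRUN=1) or execute (DRYRUN=0) each generation command in turn.
DRYRUN=1
run() {
    if [ "${DRYRUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi
}

for cls in cat dog horse; do           # hypothetical class list
    run python -m edgegan.test --name="edgegan_${cls}" \
        --dataroot=data --dataset="${cls}" --nomulticlasses --gpu=0
done
```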
- Train new models (this takes about fifteen hours on a single Nvidia RTX 2080 Ti card):
python -m edgegan.train --name=<new_name> --dataroot=<root of dataset> --dataset=<dataset_name> --gpu=<gpuid> #(with multi-classes)
python -m edgegan.train --name=<new_name> --dataroot=<root of dataset> --dataset=<dataset_name> --nomulticlasses --gpu=<gpuid> #(with single class)
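Since a run takes many hours, it is worth launching training in the background with output redirected to a log file. A sketch (the run name and log location are assumptions; the actual training command is commented out so the snippet is safe to paste as-is):

```shell
# Prepare a per-run log directory, then launch training in the background.
name="my_run"                      # hypothetical run name
logdir="outputs/${name}"
mkdir -p "$logdir"

# nohup python -m edgegan.train --name="$name" --dataroot=data \
#     --dataset=my_dataset --gpu=0 > "$logdir/train.log" 2>&1 &

echo "training output will be written to $logdir/train.log"
```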
If you use this code for your research, please cite the following paper:
@inproceedings{gao2020sketchycoco,
  title={SketchyCOCO: Image Generation From Freehand Scene Sketches},
  author={Gao, Chengying and Liu, Qi and Xu, Qi and Wang, Limin and Liu, Jianzhuang and Zou, Changqing},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5174--5183},
  year={2020}
}