############################################ ############################################
Small revision for running this code on Windows.
xh-liu's original code targets Linux with 16 GPUs, but torch.distributed does not currently support Windows, so I edited some options to make it run:
- Options related to torch.distributed were deleted.
- Some SPADE options such as "gpu_ids" and "save_latest_freq" were revived.
Below is my training command.
python train_spade.py --name coco_test --dataroot D:\Sanghun\SPADE\datasets\coco_stuff --batchSize 2 --ngpus_per_node 2 --gpu_ids 0,1
############################################ ############################################
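The revived "gpu_ids" option takes a comma-separated string, as in the training command above. A minimal sketch of how such a SPADE-style option can be parsed (the helper name is illustrative, not the repo's actual code):

```python
def parse_gpu_ids(gpu_ids_str):
    """Parse a comma-separated --gpu_ids string (e.g. "0,1") into a list of ints.

    Illustrative helper mirroring SPADE-style option handling: "-1" (or an
    empty string) is treated as CPU-only and yields an empty list.
    """
    ids = [int(tok) for tok in gpu_ids_str.split(",") if tok.strip() != ""]
    # Drop negative ids so "-1" (CPU mode) produces no GPU entries.
    return [i for i in ids if i >= 0]
```

With `--gpu_ids 0,1` this yields `[0, 1]`, which can then be passed to e.g. `torch.nn.DataParallel` in place of the deleted distributed setup.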
Xihui Liu, Guojun Yin, Jing Shao, Xiaogang Wang and Hongsheng Li.
Published in NeurIPS 2019.
Clone this repo.
git clone https://github.com/xh-liu/CC-FPSE.git
cd CC-FPSE/
This code requires PyTorch 1.1+ and Python 3+. Please install the dependencies with
pip install -r requirements.txt
The results reported in the paper were trained on 16 TITAN X GPUs.
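To sanity-check the PyTorch 1.1+ requirement before training, a small version-comparison sketch (this helper is not part of the repo; it simply compares a dotted version string such as `torch.__version__` against a minimum):

```python
def meets_min_version(version_str, minimum=(1, 1)):
    """Check a dotted version string against a minimum version tuple.

    Strips local build suffixes like "+cu118" before comparing, so
    "2.1.0+cu118" is handled the same as "2.1.0".
    """
    core = version_str.split("+")[0]
    parts = tuple(int(p) for p in core.split(".") if p.isdigit())
    # Compare only as many components as the minimum specifies.
    return parts[:len(minimum)] >= minimum
```

Usage: `meets_min_version(torch.__version__)` returns True on any PyTorch 1.1 or newer.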
- Follow the dataset preparation process in SPADE.
- Download the pretrained models from the Google Drive Folder, and extract them to 'checkpoints/'.
- Generate images using the pretrained models with test_coco.sh, test_ade.sh, or test_cityscapes.sh.
- The output images are stored at ./results/[type]_pretrained/ by default. You can view them using the autogenerated HTML file in that directory.
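To collect the generated images programmatically instead of browsing the HTML file, a short sketch (the directory layout follows the default results path above; the helper itself is hypothetical):

```python
from pathlib import Path

def list_result_images(results_root, model_type):
    """List generated images under <results_root>/<model_type>_pretrained/.

    Assumes the default results layout, e.g. ./results/coco_pretrained/;
    returns image paths sorted for stable ordering.
    """
    out_dir = Path(results_root) / f"{model_type}_pretrained"
    return sorted(p for p in out_dir.rglob("*")
                  if p.suffix.lower() in {".png", ".jpg"})
```

For example, `list_result_images("./results", "coco")` returns the PNG/JPG outputs of the COCO-Stuff pretrained model.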
New models can be trained with train.sh, which gives an example of training on a single machine.
If you use this code for your research, please cite our paper.
@inproceedings{liu2019learning,
title={Learning to Predict Layout-to-image Conditional Convolutions for Semantic Image Synthesis},
author={Liu, Xihui and Yin, Guojun and Shao, Jing and Wang, Xiaogang and Li, Hongsheng},
booktitle={Advances in Neural Information Processing Systems},
year={2019}
}
This code borrows heavily from SPADE.