Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation (CVPR 2019 Oral)

Figure: SelectionGAN framework for cross-view image translation.

Figure: Multi-channel attention selection module.

Project page | Paper

Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation.
Hao Tang*, Dan Xu*, Nicu Sebe, Yanzhi Wang, Jason J. Corso and Yan Yan. (* Equal Contribution.)
In CVPR 2019 (Oral).
The repository offers the implementation of our paper in PyTorch.


Copyright (C) 2019 University of Trento.

All rights reserved. Licensed under the CC BY-NC-SA 4.0 license (Attribution-NonCommercial-ShareAlike 4.0 International).

The code is released for academic research use only. For commercial use, please contact the authors.


Clone this repo.

git clone https://github.com/Ha0Tang/SelectionGAN
cd SelectionGAN/

This code requires PyTorch 0.4.1 and Python 3.6+. Please install dependencies by running

pip install -r requirements.txt (for pip users)

or the installation script in ./scripts/ (for Conda users)

To reproduce the results reported in the paper, you would need an NVIDIA GeForce GTX 1080 Ti GPU with 11GB memory.

Dataset Preparation

For Dayton, CVUSA or Ego2Top, the datasets must be downloaded beforehand. Please download them from the respective webpages. In addition, we include a few sample images in this code repo.

Preparing Dayton Dataset. The dataset can be downloaded here. Ground-truth semantic maps are not available for this dataset, so we adopt RefineNet trained on the Cityscapes dataset to generate semantic maps and use them as training data in our experiments. Please cite their papers if you use this dataset. Train/test splits for the Dayton dataset can be downloaded from here.

Preparing CVUSA Dataset. The dataset can be downloaded here, which is from the page. After unzipping the dataset, prepare the training and testing data as discussed in our paper. We also convert semantic maps to color ones using this script. Since there are no semantic maps for the aerial images in this dataset, we use black images as aerial semantic maps for placeholder purposes.

Preparing Ego2Top Dataset. The dataset can be downloaded here, which is from this paper. We further adopt this tool to generate the semantic maps for training. The training and testing splits can be downloaded here.

Preparing New Dataset. Each training sample in the dataset will contain {Ig,Ia,Sg,Sa}, where Ig=ground image, Ia=aerial image, Sg=semantic map for the ground image, and Sa=semantic map for the aerial image. Of course, you can also use SelectionGAN for other generative tasks.
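Following the aligned-pair convention of Pix2pix-style datasets, one way to store such a sample is as a single image with the four views concatenated side by side. A minimal sketch of splitting such a sample (the side-by-side layout and the helper name are illustrative assumptions, not the repo's actual loader):

```python
import numpy as np

def split_sample(combined, h, w):
    """Split a horizontally concatenated sample [Ig | Ia | Sg | Sa]
    of shape (h, 4*w, 3) into its four components."""
    assert combined.shape == (h, 4 * w, 3)
    i_g = combined[:, 0 * w:1 * w]  # ground image
    i_a = combined[:, 1 * w:2 * w]  # aerial image
    s_g = combined[:, 2 * w:3 * w]  # semantic map, ground
    s_a = combined[:, 3 * w:4 * w]  # semantic map, aerial
    return i_g, i_a, s_g, s_a

# Example: a dummy 64x64 sample with four flat-colored panels
sample = np.concatenate(
    [np.full((64, 64, 3), v, np.uint8) for v in (10, 20, 30, 40)], axis=1)
i_g, i_a, s_g, s_a = split_sample(sample, 64, 64)
```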

Generating Images Using Pretrained Model

Once the dataset is ready, the result images can be generated using pretrained models.

  1. Download the tar of the pretrained models from the Google Drive Folder or Baidu Drive Folder, save it in 'checkpoints/', and run
cd checkpoints
tar xvf checkpoints.tar.gz
cd ../
  2. Generate images using the pretrained model.
python test.py --dataroot [path_to_dataset] --name [type]_pretrained --model selectiongan --which_model_netG unet_256 --which_direction AtoB --dataset_mode aligned --norm batch --gpu_ids 0 --batchSize [BS] --loadSize [LS] --fineSize [FS] --no_flip --eval

[path_to_dataset] is the path to the dataset. The dataset can be one of dayton, cvusa, and ego2top. [type]_pretrained is the directory name of the checkpoint file downloaded in Step 1, which should be one of dayton_a2g_64_pretrained, dayton_g2a_64_pretrained, dayton_a2g_256_pretrained, dayton_g2a_256_pretrained, cvusa_pretrained, and ego2top_pretrained. If you are running in CPU mode, change --gpu_ids 0 to --gpu_ids -1. For [BS, LS, FS],

  • dayton_a2g_64_pretrained: [16,72,64]
  • dayton_g2a_64_pretrained: [16,72,64]
  • dayton_a2g_256_pretrained: [4,286,256]
  • dayton_g2a_256_pretrained: [4,286,256]
  • cvusa_pretrained: [4,286,256]
  • ego2top_pretrained: [8,286,256]
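The per-checkpoint settings above can be captured in a small lookup helper (a hypothetical convenience, not part of the repo):

```python
# [batchSize, loadSize, fineSize] per pretrained checkpoint
SETTINGS = {
    "dayton_a2g_64_pretrained": (16, 72, 64),
    "dayton_g2a_64_pretrained": (16, 72, 64),
    "dayton_a2g_256_pretrained": (4, 286, 256),
    "dayton_g2a_256_pretrained": (4, 286, 256),
    "cvusa_pretrained": (4, 286, 256),
    "ego2top_pretrained": (8, 286, 256),
}

def size_flags(name):
    """Build the size-related command-line flags for a checkpoint."""
    bs, ls, fs = SETTINGS[name]
    return f"--batchSize {bs} --loadSize {ls} --fineSize {fs}"
```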

Note that testing requires a large amount of disk space, because the model writes 10 intermediate image results and 10 attention maps to disk. If you don't have enough space, append --saveDisk to the command line.
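The 10 intermediate results and 10 attention maps come from the multi-channel attention selection step, in which the attention maps are softmax-normalized across channels and used to blend the intermediate images into the final output. A NumPy sketch of that blending (shapes and names are illustrative assumptions, not the repo's code):

```python
import numpy as np

def attention_select(intermediates, attention_logits):
    """Blend N intermediate images using N attention maps.

    intermediates:    (N, H, W, 3) candidate images
    attention_logits: (N, H, W) unnormalized attention scores
    """
    # Softmax over the N channels, so the weights at each pixel sum to 1
    a = np.exp(attention_logits - attention_logits.max(axis=0, keepdims=True))
    a = a / a.sum(axis=0, keepdims=True)
    # Weighted sum of the candidates: out[h, w] = sum_n a[n, h, w] * I_n[h, w]
    return (a[..., None] * intermediates).sum(axis=0)

rng = np.random.default_rng(0)
inter = rng.random((10, 8, 8, 3))   # 10 intermediate images
logits = rng.random((10, 8, 8))     # 10 attention maps
out = attention_select(inter, logits)
```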

  3. The output images are stored in ./results/[type]_pretrained/ by default. You can view them using the auto-generated HTML file in that directory.

Training New Models

New models can be trained with the following commands.

  1. Prepare dataset.

  2. Train.

# To train on the dayton dataset at 64x64 resolution,

python train.py --dataroot [path_to_dayton_dataset] --name [experiment_name] --model selectiongan --which_model_netG unet_256 --which_direction AtoB --dataset_mode aligned --norm batch --gpu_ids 0 --batchSize 16 --niter 50 --niter_decay 50 --loadSize 72 --fineSize 64 --no_flip --lambda_L1 100 --lambda_L1_seg 1 --display_winsize 64 --display_id 0
# To train on the datasets at 256x256 resolution,

python train.py --dataroot [path_to_dataset] --name [experiment_name] --model selectiongan --which_model_netG unet_256 --which_direction AtoB --dataset_mode aligned --norm batch --gpu_ids 0 --batchSize [BS] --loadSize [LS] --fineSize [FS] --no_flip --display_id 0 --lambda_L1 100 --lambda_L1_seg 1
  • For dayton dataset, [BS,LS,FS]=[4,286,256], append --niter 20 --niter_decay 15.
  • For cvusa dataset, [BS,LS,FS]=[4,286,256], append --niter 15 --niter_decay 15.
  • For ego2top dataset, [BS,LS,FS]=[8,286,256], append --niter 5 --niter_decay 5.
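As in the Pix2pix codebase this repo builds on, --niter and --niter_decay typically mean: keep the initial learning rate for niter epochs, then decay it linearly to zero over the following niter_decay epochs. A sketch of that schedule (illustrative; the repo's exact rule may differ by an off-by-one):

```python
def lambda_rule(epoch, niter, niter_decay):
    """Multiplicative LR factor: 1.0 for the first `niter` epochs,
    then linear decay to 0 over the next `niter_decay` epochs."""
    return 1.0 - max(0, epoch - niter) / float(niter_decay)

# e.g. --niter 20 --niter_decay 15 (dayton, 256x256)
factors = [lambda_rule(e, 20, 15) for e in (0, 20, 27, 35)]
```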

There are many options you can specify. Please use python train.py --help. The specified options are printed to the console. To specify which GPUs to use, set export CUDA_VISIBLE_DEVICES=[GPU_ID]. Training takes about one week with the default --batchSize on one NVIDIA GeForce GTX 1080 Ti GPU, so we suggest using a larger --batchSize, though performance with a larger --batchSize has not been tested.

To view training results and loss plots on a local computer, set --display_id to a non-zero value, run python -m visdom.server in a new terminal, and click the URL http://localhost:8097. On a remote server, replace localhost with your server's name.


  3. Test. Testing is similar to testing pretrained models.

python test.py --dataroot [path_to_dataset] --name [type]_pretrained --model selectiongan --which_model_netG unet_256 --which_direction AtoB --dataset_mode aligned --norm batch --gpu_ids 0 --batchSize [BS] --loadSize [LS] --fineSize [FS] --no_flip --eval

Use --how_many to specify the maximum number of images to generate. By default, it loads the latest checkpoint. It can be changed using --which_epoch.

Code Structure

  • train.py, test.py: the entry points for training and testing.
  • models/: creates the networks and computes the losses.
  • models/networks/: defines the architecture of all models for SelectionGAN.
  • options/: creates option lists using the argparse package. More individual options are dynamically added in other files as well; python train.py --help prints the full list.
  • data/: defines the class for loading images and semantic maps.

Evaluation Code

We use several metrics to evaluate the quality of the generated images.

  • Inception Score: IS, requires Python 2.7
  • Top-k prediction accuracy: Acc, requires Python 2.7
  • KL score: KL, requires Python 2.7
  • Structural Similarity: SSIM, requires Lua
  • Peak Signal-to-Noise Ratio: PSNR, requires Lua
  • Sharpness Difference: SD, requires Lua
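The evaluation scripts are external, but PSNR, for instance, follows the standard definition 10·log10(MAX²/MSE). A reference sketch in Python (NumPy, assuming 8-bit images; the Lua script linked above may differ in detail):

```python
import numpy as np

def psnr(img1, img2, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two uint8 images."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Worst case for 8-bit images: all-black vs. all-white gives 0 dB
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 255, dtype=np.uint8)
```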

We also provide the image IDs used in our paper here for further qualitative comparison.


If you use this code for your research, please cite our paper.

@inproceedings{tang2019multi,
  title={Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation},
  author={Tang, Hao and Xu, Dan and Sebe, Nicu and Wang, Yanzhi and Corso, Jason J. and Yan, Yan},
  booktitle={CVPR},
  year={2019}
}

This source code borrows heavily from Pix2pix. We thank the authors of X-Fork & X-Seq for providing the evaluation codes. This research was partially supported by National Institute of Standards and Technology Grant 60NANB17D191 (YY, JC), Army Research Office W911NF-15-1-0354 (JC), and a gift donation from Cisco Inc (YY).


If you have any questions, comments, or bug reports, feel free to open a GitHub issue, submit a pull request, or e-mail the author Hao Tang.
