Add Pretrained models.
Add pretrained models.

Add configs.

Restructure and adapt configs.

Add paths to aws.

Fix bug in checkpointer.

Add epoch_idx.

Work on pretrained models.

Work on pretrained models.

Use amazon file.
LMescheder committed Jul 9, 2019
1 parent be89c37 commit de65a61
Showing 27 changed files with 796 additions and 211 deletions.
5 changes: 5 additions & 0 deletions .gitignore
@@ -0,0 +1,5 @@
output
data
*_lmdb
__pycache__
*.pyc
26 changes: 24 additions & 2 deletions README.md
@@ -22,7 +22,7 @@ python train.py PATH_TO_CONFIG

To compute the inception score for your model and generate samples, use
```
python test.py PATH_TO_CONIFG
python test.py PATH_TO_CONFIG
```

Finally, you can create nice latent space interpolations using
@@ -34,8 +34,30 @@ or
python interpolate_class.py PATH_TO_CONFIG
```

# Pretrained models
We also provide several pretrained models.

You can use the models for sampling by entering
```
python test.py PATH_TO_CONFIG
```
where `PATH_TO_CONFIG` is one of the config files
```
configs/pretrained/celebA_pretrained.yaml
configs/pretrained/celebAHQ_pretrained.yaml
configs/pretrained/imagenet_pretrained.yaml
configs/pretrained/lsun_bedroom_pretrained.yaml
configs/pretrained/lsun_bridge_pretrained.yaml
configs/pretrained/lsun_church_pretrained.yaml
configs/pretrained/lsun_tower_pretrained.yaml
```
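For example, to sample from the pretrained LSUN bedroom model, you would run
```
python test.py configs/pretrained/lsun_bedroom_pretrained.yaml
```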
Our script will automatically download the model checkpoints and generate samples.
You can find the outputs in the `output/pretrained` folders.
Similarly, you can use the scripts `interpolate.py` and `interpolate_class.py` to generate interpolations for the pretrained models.

Please note that the `*_pretrained.yaml` config files are intended for generation only, not for training new models: if you use one of them for training, a model will be trained from scratch, but at inference time our code will still load the pretrained checkpoint.

# Notes
* For the results presented in the paper, we did not use a moving average over the weights. However, using a moving average helps to reduce noise and we therefore recommend using it. Indeed, we found that using a moving average leads to much better inception scores on Imagenet.
* Batch normalization is currently *not* supported when using an exponential running average, as the running average is only computed over the parameters of the model and not over its other buffers; a minimal sketch of such an average is shown below.
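
The following is a minimal PyTorch-style sketch of such a parameter average, for illustration only (the function and variable names are not necessarily those used in this repository):
```
import copy
import torch
import torch.nn as nn

def update_average(model_avg, model, beta=0.999):
    # Exponential moving average over parameters only; buffers such as
    # batch-norm running statistics are not averaged, which is why batch
    # norm does not mix well with this scheme.
    with torch.no_grad():
        for p_avg, p in zip(model_avg.parameters(), model.parameters()):
            p_avg.copy_(beta * p_avg + (1. - beta) * p)

# Example usage with a toy module:
generator = nn.Linear(256, 128)
generator_avg = copy.deepcopy(generator)
update_average(generator_avg, generator, beta=0.999)
```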

# Results
27 changes: 0 additions & 27 deletions configs/celebAHQ.yaml
@@ -2,10 +2,7 @@ data:
type: npy
train_dir: data/celebA-HQ
test_dir: data/celebA-HQ
lsun_categories_train: [bedroom_train]
lsun_categories_test: [bedroom_test]
img_size: 1024
nlabels: 1
generator:
name: resnet
kwargs:
@@ -23,35 +20,11 @@ z_dist:
dim: 256
training:
out_dir: output/celebAHQ
gan_type: standard
reg_type: real
reg_param: 10.
batch_size: 24
nworkers: 16
take_model_average: true
model_average_beta: 0.999
model_average_reinit: false
monitoring: tensorboard
sample_every: 1000
sample_nlabels: 20
inception_every: -1
save_every: 900
backup_every: 100000
restart_every: -1
optimizer: rmsprop
lr_g: 0.0001
lr_d: 0.0001
lr_anneal: 1.
lr_anneal_every: 150000
d_steps: 1
equalize_lr: false
test:
batch_size: 4
sample_size: 6
sample_nrow: 3
use_model_average: true
compute_inception: false
conditional_samples: false
interpolations:
nzs: 10
nsubsteps: 75
53 changes: 53 additions & 0 deletions configs/default.yaml
@@ -0,0 +1,53 @@
data:
  type: lsun
  train_dir: data/LSUN
  test_dir: data/LSUN
  lsun_categories_train: [bedroom_train]
  lsun_categories_test: [bedroom_test]
  img_size: 256
  nlabels: 1
generator:
  name: resnet
  kwargs:
discriminator:
  name: resnet
  kwargs:
z_dist:
  type: gauss
  dim: 256
training:
  out_dir: output/default
  gan_type: standard
  reg_type: real
  reg_param: 10.
  batch_size: 64
  nworkers: 16
  take_model_average: true
  model_average_beta: 0.999
  model_average_reinit: false
  monitoring: tensorboard
  sample_every: 1000
  sample_nlabels: 20
  inception_every: -1
  save_every: 900
  backup_every: 100000
  restart_every: -1
  optimizer: rmsprop
  lr_g: 0.0001
  lr_d: 0.0001
  lr_anneal: 1.
  lr_anneal_every: 150000
  d_steps: 1
  equalize_lr: false
  model_file: model.pt
test:
  batch_size: 32
  sample_size: 64
  sample_nrow: 8
  use_model_average: true
  compute_inception: false
  conditional_samples: false
  model_file: model.pt
interpolations:
  nzs: 10
  nsubsteps: 75
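
The dataset-specific configs in this commit drop many keys that now appear in `configs/default.yaml`, which suggests they are merged on top of these defaults. A minimal sketch of such a loader, assuming PyYAML is available; `deep_update` and `load_config` are illustrative names, not necessarily the repository's actual API:
```
import yaml

def deep_update(base, overrides):
    # Recursively merge `overrides` into `base`; values from `overrides` win.
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_update(base[key], value)
        else:
            base[key] = value
    return base

def load_config(path, default_path='configs/default.yaml'):
    # Start from the defaults, then apply the dataset-specific file on top.
    with open(default_path) as f:
        config = yaml.safe_load(f)
    with open(path) as f:
        config = deep_update(config, yaml.safe_load(f) or {})
    return config

# Example: config = load_config('configs/lsun_bedroom.yaml')
```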
25 changes: 2 additions & 23 deletions configs/imagenet.yaml
@@ -2,8 +2,6 @@ data:
type: image
train_dir: data/Imagenet
test_dir: data/Imagenet
lsun_categories_train: bedroom_train
lsun_categories_test: bedroom_test
img_size: 128
nlabels: 1000
generator:
@@ -24,32 +22,13 @@ z_dist:
training:
out_dir: output/imagenet
gan_type: standard
reg_type: real
reg_param: 10.
batch_size: 128
nworkers: 32
take_model_average: true
model_average_beta: 0.999
model_average_reinit: false
monitoring: tensorboard
sample_every: 1000
sample_nlabels: 20
inception_every: 10000
save_every: 900
backup_every: 100000
restart_every: -1
optimizer: adam
lr_g: 0.0001
lr_d: 0.0001
lr_anneal: 1.
lr_anneal_every: 150000
d_steps: 1
equalize_lr: false
batch_size: 128
test:
batch_size: 32
sample_size: 32
sample_size: 64
sample_nrow: 8
use_model_average: true
compute_inception: true
conditional_samples: true
interpolations:
30 changes: 2 additions & 28 deletions configs/lsun_bedroom.yaml
@@ -5,7 +5,6 @@ data:
lsun_categories_train: [bedroom_train]
lsun_categories_test: [bedroom_test]
img_size: 256
nlabels: 1
generator:
name: resnet
kwargs:
@@ -23,35 +22,10 @@ z_dist:
dim: 256
training:
out_dir: output/lsun_bedroom
gan_type: standard
reg_type: real
reg_param: 10.
batch_size: 64
nworkers: 32
take_model_average: true
model_average_beta: 0.999
model_average_reinit: false
monitoring: tensorboard
sample_every: 1000
sample_nlabels: 20
inception_every: -1
save_every: 900
backup_every: 100000
restart_every: -1
optimizer: rmsprop
lr_g: 0.0001
lr_d: 0.0001
lr_anneal: 1.
lr_anneal_every: 150000
d_steps: 1
equalize_lr: false
test:
batch_size: 32
sample_size: 15
sample_nrow: 5
use_model_average: true
compute_inception: false
conditional_samples: false
sample_size: 64
sample_nrow: 8
interpolations:
nzs: 10
nsubsteps: 75
30 changes: 2 additions & 28 deletions configs/lsun_bridge.yaml
@@ -5,7 +5,6 @@ data:
lsun_categories_train: [bridge_train]
lsun_categories_test: [bridge_train]
img_size: 256
nlabels: 1
generator:
name: resnet
kwargs:
@@ -23,35 +22,10 @@ z_dist:
dim: 256
training:
out_dir: output/lsun_bridge
gan_type: standard
reg_type: real
reg_param: 10.
batch_size: 64
nworkers: 32
take_model_average: true
model_average_beta: 0.999
model_average_reinit: false
monitoring: tensorboard
sample_every: 1000
sample_nlabels: 20
inception_every: -1
save_every: 900
backup_every: 100000
restart_every: -1
optimizer: rmsprop
lr_g: 0.0001
lr_d: 0.0001
lr_anneal: 1.
lr_anneal_every: 150000
d_steps: 1
equalize_lr: false
test:
batch_size: 32
sample_size: 15
sample_nrow: 5
use_model_average: true
compute_inception: false
conditional_samples: false
sample_size: 64
sample_nrow: 8
interpolations:
nzs: 10
nsubsteps: 75
30 changes: 2 additions & 28 deletions configs/lsun_church.yaml
@@ -5,7 +5,6 @@ data:
lsun_categories_train: [church_outdoor_train]
lsun_categories_test: [church_outdoor_test]
img_size: 256
nlabels: 1
generator:
name: resnet
kwargs:
@@ -23,35 +22,10 @@ z_dist:
dim: 256
training:
out_dir: output/lsun_church
gan_type: standard
reg_type: real
reg_param: 10.
batch_size: 64
nworkers: 32
take_model_average: true
model_average_beta: 0.999
model_average_reinit: false
monitoring: tensorboard
sample_every: 1000
sample_nlabels: 20
inception_every: -1
save_every: 900
backup_every: 100000
restart_every: -1
optimizer: rmsprop
lr_g: 0.0001
lr_d: 0.0001
lr_anneal: 1.
lr_anneal_every: 150000
d_steps: 1
equalize_lr: false
test:
batch_size: 32
sample_size: 15
sample_nrow: 5
use_model_average: true
compute_inception: false
conditional_samples: false
sample_size: 64
sample_nrow: 8
interpolations:
nzs: 10
nsubsteps: 75
30 changes: 2 additions & 28 deletions configs/lsun_tower.yaml
@@ -5,7 +5,6 @@ data:
lsun_categories_train: [tower_train]
lsun_categories_test: [tower_test]
img_size: 256
nlabels: 1
generator:
name: resnet
kwargs:
@@ -23,35 +22,10 @@ z_dist:
dim: 256
training:
out_dir: output/lsun_tower
gan_type: standard
reg_type: real
reg_param: 10.
batch_size: 64
nworkers: 32
take_model_average: true
model_average_beta: 0.999
model_average_reinit: false
monitoring: tensorboard
sample_every: 1000
sample_nlabels: 20
inception_every: -1
save_every: 900
backup_every: 100000
restart_every: -1
optimizer: rmsprop
lr_g: 0.0001
lr_d: 0.0001
lr_anneal: 1.
lr_anneal_every: 150000
d_steps: 1
equalize_lr: false
test:
batch_size: 32
sample_size: 15
sample_nrow: 5
use_model_average: true
compute_inception: false
conditional_samples: false
sample_size: 64
sample_nrow: 8
interpolations:
nzs: 10
nsubsteps: 75
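
Every config ends with the same `interpolations` block (`nzs: 10`, `nsubsteps: 75`). As a rough illustration of what these settings could control in `interpolate.py` — assuming `nzs` anchor latent codes with `nsubsteps` interpolation steps between consecutive anchors, which is an assumption, not something confirmed by this diff:
```
import torch

def interpolation_codes(z_dim=256, nzs=10, nsubsteps=75):
    # Draw nzs anchor codes from the Gaussian z distribution (dim 256 in the
    # configs) and linearly interpolate nsubsteps steps between neighbours.
    anchors = torch.randn(nzs, z_dim)
    codes = []
    for z0, z1 in zip(anchors[:-1], anchors[1:]):
        for t in torch.linspace(0., 1., nsubsteps):
            codes.append((1. - t) * z0 + t * z1)
    return torch.stack(codes)

# zs = interpolation_codes()  # (nzs - 1) * nsubsteps latent codes
```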
