Update converter
ajbrock committed Mar 22, 2019
1 parent 19bf57b commit 7b65e82
Showing 12 changed files with 799 additions and 647 deletions.
6 changes: 6 additions & 0 deletions BigGAN.py
@@ -18,6 +18,12 @@
# block at both resolution 32x32 and 64x64. Just '64' will apply at 64x64.
def G_arch(ch=64, attention='64', ksize='333333', dilation='111111'):
  arch = {}
  arch[512] = {'in_channels' :  [ch * item for item in [16, 16, 8, 8, 4, 2, 1]],
               'out_channels' : [ch * item for item in [16,  8, 8, 4, 2, 1, 1]],
               'upsample' : [True] * 7,
               'resolution' : [8, 16, 32, 64, 128, 256, 512],
               'attention' : {2**i: (2**i in [int(item) for item in attention.split('_')])
                              for i in range(3, 10)}}
  arch[256] = {'in_channels' :  [ch * item for item in [16, 16, 8, 8, 4, 2]],
               'out_channels' : [ch * item for item in [16,  8, 8, 4, 2, 1]],
               'upsample' : [True] * 6,
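For reference, here is a small sketch of how an entry like the new arch[512] is consumed (it assumes the full G_arch from BigGAN.py; ch=96 and the print format are illustrative choices, not part of this commit):

```python
# Build the architecture table and pull out the new 512x512 entry.
arch = G_arch(ch=96, attention='64')[512]

# Walk the blocks: channel widths, output resolution, and whether the
# self-attention block is enabled at that resolution.
for block_idx, res in enumerate(arch['resolution']):
    print('block %d: %d -> %d channels, %dx%d output, attention=%s'
          % (block_idx, arch['in_channels'][block_idx],
             arch['out_channels'][block_idx], res, res, arch['attention'][res]))
```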
6 changes: 3 additions & 3 deletions README.md
@@ -12,7 +12,7 @@ This code is by Andy Brock and Alex Andonian.
You will need:

- [PyTorch](https://PyTorch.org/), version 1.0.1
- - tqdm, scipy, and h5py
+ - tqdm, numpy, scipy, and h5py
- The ImageNet training set

First, you may optionally prepare a pre-processed HDF5 version of your target dataset for faster I/O. Following this (or not), you'll need the Inception moments used to calculate FID. These can both be done by modifying and running
@@ -92,11 +92,11 @@ but it looks like this particular model got a winning ticket. Regardless, we pro…

## A Note On The Design Of This Repo
This code is designed from the ground up to serve as an extensible, hackable base for further research code.
- I've put a lot of thought into making sure the abstractions are the *right* thickness for how I do research--not so thick as to be impenetrable, but not so thin as to be useless.
+ We've put a lot of thought into making sure the abstractions are the *right* thickness for research--not so thick as to be impenetrable, but not so thin as to be useless.
The key idea is that if you want to experiment with a SOTA setup and make some modification (try out your own new loss function, architecture, self-attention block, etc.), you should be able to do so easily just by dropping your code into one or two places, without having to worry about the rest of the codebase.
Things like the use of self.which_conv and functools.partial in the BigGAN.py model definition were put together with this in mind, as was the design of the Spectral Norm class inheritance.
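To make that concrete, here is a simplified sketch of the self.which_conv pattern (not the actual GBlock from BigGAN.py, just the shape of the idea):

```python
import functools
import torch.nn as nn
import torch.nn.functional as F

class TinyBlock(nn.Module):
    """Toy block written against an injectable conv constructor."""
    def __init__(self, in_channels, out_channels,
                 which_conv=functools.partial(nn.Conv2d, kernel_size=3, padding=1)):
        super(TinyBlock, self).__init__()
        # The block never names a concrete conv class; whatever callable is passed
        # as which_conv (plain Conv2d, an SN-wrapped conv, your own layer) is used
        # everywhere a conv is needed.
        self.conv1 = which_conv(in_channels, out_channels)
        self.conv2 = which_conv(out_channels, out_channels)

    def forward(self, x):
        return self.conv2(F.relu(self.conv1(F.relu(x))))

# Swapping in a different conv is a one-line change at construction time, e.g.:
# block = TinyBlock(64, 64, which_conv=functools.partial(MySNConv2d, kernel_size=3, padding=1))
```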

- With that said, this is a somewhat large codebase for a single project. While I tried to be thorough with the comments, if there's something you think could be more clear, better written, or better refactored, please feel free to raise an issue or a pull request.
+ With that said, this is a somewhat large codebase for a single project. While we tried to be thorough with the comments, if there's something you think could be more clear, better written, or better refactored, please feel free to raise an issue or a pull request.

## Feature Requests
Want to work on or improve this code? There are a couple things this repo would benefit from, but which don't yet work.
14 changes: 14 additions & 0 deletions TFHub/README.md
@@ -0,0 +1,14 @@
# BigGAN-PyTorch TFHub converter
This dir contains scripts for taking the [pre-trained generator weights from TFHub](https://tfhub.dev/s?q=biggan) and porting them to BigGAN-PyTorch.

In addition to the base libraries for BigGAN-PyTorch, to run this code you will need:

- TensorFlow
- TFHub
- parse

Note that this code is presently only set up to run the ported models without truncation--you'll need to accumulate standing stats at each truncation level yourself if you wish to use truncation.
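If you do want truncation, the rough recipe is to put the generator's BatchNorm layers in train mode and run a number of forward passes with noise drawn at the desired truncation level, so the running statistics reflect that distribution. A minimal sketch follows (the G(z, G.shared(y)) call follows the usual BigGAN-PyTorch convention, but the function itself is an illustration, not a utility shipped with this converter, and the default z_dim/batch sizes are just example values):

```python
import torch
from scipy.stats import truncnorm

def accumulate_standing_stats(G, truncation=0.5, num_accumulations=100,
                              batch_size=16, z_dim=120, n_classes=1000, device='cuda'):
    """Rough sketch: refresh BatchNorm running stats at a given truncation level."""
    G.train()  # BN layers only update running stats in train mode
    with torch.no_grad():
        for _ in range(num_accumulations):
            # Truncated-normal z scaled by the truncation value, as in the TFHub demo.
            z = truncation * truncnorm.rvs(-2, 2, size=(batch_size, z_dim))
            z = torch.from_numpy(z).float().to(device)
            y = torch.randint(0, n_classes, (batch_size,), device=device)
            G(z, G.shared(y))  # assumes the BigGAN-PyTorch Generator call convention
    G.eval()
```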

To port the 128x128 model from TFHub, produce a pretrained-weights .pth file, and generate samples, run

python converter.py -r 128 --generate_samples
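Once converter.py has written the weights file, loading it back into the PyTorch Generator looks roughly like this (the filename and constructor arguments below are assumptions to be matched to your converter run, not values fixed by this commit):

```python
import torch
import BigGAN

# Illustrative config for the 128x128 TFHub model; adjust to match your converter run.
# The weights filename below is also illustrative.
G = BigGAN.Generator(G_ch=96, dim_z=120, resolution=128, G_attn='64',
                     n_classes=1000, G_shared=True, shared_dim=128, hier=True)
G.load_state_dict(torch.load('G_ema.pth', map_location='cpu'))
G.eval()
```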
