This repository provides a Torch implementation of Wasserstein GAN, as described by Arjovsky et al. in their paper Wasserstein GAN.
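For intuition, the core idea of the paper can be sketched in a few lines: the critic maximizes `mean(f(real)) - mean(f(fake))` and its weights are clipped to a small interval after each update. The NumPy snippet below is a minimal illustration of that update for a linear critic, not the repository's Torch code; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear "critic" f(x) = w.x + b, for illustration only.
w = rng.normal(size=4)
b = 0.0

def critic(x, w, b):
    return x @ w + b

real = rng.normal(loc=1.0, size=(8, 4))   # samples from the data distribution
fake = rng.normal(loc=-1.0, size=(8, 4))  # samples produced by the generator

# Wasserstein critic objective: maximize E[f(real)] - E[f(fake)]
loss = critic(real, w, b).mean() - critic(fake, w, b).mean()

# One gradient-ascent step on the critic, then weight clipping
# (clip constant c = 0.01, as in the paper's algorithm).
grad_w = real.mean(axis=0) - fake.mean(axis=0)
w = w + 5e-5 * grad_w
w = np.clip(w, -0.01, 0.01)
```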
You will need `cudnn` to train the network on a GPU. Training on CPU is supported but not recommended (very slow).
Please refer to the official Torch website to install Torch.
- Choose a dataset and create a folder with its name (e.g. `mkdir celebA; cd celebA`). Inside this folder, create another folder (`images` for example) containing your images.
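The expected layout can be created like this (folder names are examples, following the celebA case above):

```shell
# Create the dataset folder and an images/ subfolder inside it.
mkdir -p celebA/images
# Then copy your training images into it, e.g.:
# cp /path/to/*.jpg celebA/images/
```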
Note: You can download the celebA dataset on the celebA web page. Extract the images and run

```
DATA_ROOT=celebA th data/crop_celebA.lua
```
- Train the Wasserstein model:

```
DATA_ROOT=<dataset_folder> name=<whatever_name_you_want> th main.lua
```
The networks are saved into the `checkpoints/` directory with the name you gave.
- Generate images:

```
net=<path_to_generator_network> name=<name_to_save_images> th generate.lua
```

Example:

```
net=checkpoints/generator.t7 name=myimages display=2929 th generate.lua
```
The generated images are saved in
Display images in a browser
If you want, install the `display` package (`luarocks install display`) and run

```
th -ldisplay.start <PORT_NUMBER> 0.0.0.0
```

to launch a server on the port you chose. You can access it in your browser with the URL http://localhost:PORT_NUMBER.
To display images while training or generating, add the variable `display=<PORT_NUMBER>` to the list of options.
In your command-line instructions you can specify several parameters (for example the display port number); here are some of them:
- `noise` indicates the prior distribution from which the samples are generated (e.g. `normal`)
- `batchSize` is the size of the batch used for training, or the number of images to generate
- `name` is the name you want to use to save your networks or the generated images
- `gpu` specifies whether the computations are done on the GPU. Set it to 0 to use the CPU (not recommended, too slow) and to n to use the nth GPU you have (1 is the default value)
- `lr` is the learning rate
- `loadSize` is the size to which the images are rescaled (0 means no rescaling)
- `niter` is the number of epochs for training
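Several of these options can be combined in a single training command. The snippet below only builds and prints such a command; the option values are illustrative examples, not this repository's defaults (`lr=0.00005` is the value used in the WGAN paper's experiments):

```shell
# Illustrative only: assemble a training command with several options set.
cmd="DATA_ROOT=celebA name=celebA_wgan gpu=1 lr=0.00005 niter=25 display=2929 th main.lua"
echo "$cmd"
```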