Paper: Feedback Networks, CVPR 2017.
Amir R. Zamir*, Te-Lin Wu*, Lin Sun, William B. Shen, Bertram E. Shi, Jitendra Malik, Silvio Savarese.
Feedback Networks training in Torch
- Install Torch on a machine with CUDA GPU
- Install cuDNN v4 or v5 and the Torch cuDNN bindings
- Install rnn, the Element-Research RNN library for Torch
- Download the CIFAR-10/100 dataset (binary version) and place it under the gen/ folder
If you already have Torch installed, make sure your packages (e.g. nn, cunn, cudnn) are up to date.
The training scripts come with several options, which can be listed with:
th main.lua --help
To run training, see the example run.sh. The options are explained below:
- -seqLength: number of feedback iterations
- -sequenceOut: true for feedback inference, false for recurrent inference
- -nGPU: number of GPUs
- -depth: network depth (20 to bypass)
- -batchSize: batch size
- -dataset: the dataset to train on, e.g. cifar100
- -nEpochs: number of epochs to train
- -netType: the model under the models/ directory
- -save: checkpoints directory in which to save the model
- -resume: checkpoints directory from which to restore the model
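For example, a training invocation could look like the following. The flag values here are illustrative placeholders, not the settings from the paper; tune them for your setup.

```shell
#!/bin/bash
# Illustrative run.sh: train a feedback model on CIFAR-100.
# All values below are placeholders -- adjust them for your experiment.
th main.lua \
  -seqLength 4 \
  -sequenceOut true \
  -nGPU 1 \
  -batchSize 16 \
  -dataset cifar100 \
  -nEpochs 150 \
  -netType feedback_48 \
  -save checkpoints/
```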
To run testing, turn on the -testOnly flag and point -resume at the directory where the checkpoints are saved, as follows:
-testOnly 'true' -resume [checkpoints directory to restore the model]
Using your own criterion
You can write your own criterion, store it under the lib/ directory, and require it in models/init.lua. Add a corresponding option in opts.lua to use it while running a script, for example:
cmd:option('-coarsefine', 'false', 'If using this criterion or not')
opt.coarsefine = opt.coarsefine ~= 'false'
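The first line registers the flag with Torch's CmdLine parser; the second converts the string value to a boolean after parsing. The flag can then select your criterion in models/init.lua. A minimal sketch, assuming a hypothetical custom criterion named CoarseFineCriterion saved under lib/:

```lua
-- models/init.lua (excerpt, illustrative): choose the criterion from the flag.
-- 'CoarseFineCriterion' is a hypothetical name for a criterion you wrote
-- yourself and stored as lib/CoarseFineCriterion.lua.
local nn = require 'nn'
require 'lib/CoarseFineCriterion'

local function createCriterion(opt)
   if opt.coarsefine then
      return nn.CoarseFineCriterion()   -- your custom criterion
   else
      return nn.CrossEntropyCriterion() -- standard classification loss
   end
end
```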
In the bash script, add the corresponding flag (e.g. -coarsefine 'true').
Writing your own model
You can develop your own model and store it under models/; see models/feedback_48.lua for an example of ours. Modify the code below the following lines within the code block, and set -netType in your bash script or command to the name of the model you develop:
elseif opt.dataset == 'cifar100' then -- Model type specifies number of layers for CIFAR-100 model
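A minimal sketch of what a custom model file could look like, assuming the model files under models/ follow the common Torch convention of returning a createModel(opt) function (the file name, layer choices, and sizes below are all illustrative):

```lua
-- models/mymodel.lua: a hypothetical custom model; select it with -netType mymodel.
local nn = require 'nn'

local function createModel(opt)
   local model = nn.Sequential()
   if opt.dataset == 'cifar100' then
      -- Model type specifies number of layers for CIFAR-100 model
      model:add(nn.SpatialConvolution(3, 16, 3, 3, 1, 1, 1, 1)) -- 3x32x32 -> 16x32x32
      model:add(nn.SpatialBatchNormalization(16))
      model:add(nn.ReLU(true))
      model:add(nn.SpatialAveragePooling(32, 32))               -- 16x32x32 -> 16x1x1
      model:add(nn.View(16))
      model:add(nn.Linear(16, 100))                             -- 100 CIFAR-100 classes
   end
   return model
end

return createModel
```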