Pixel mean subtraction #22
Yes. We compute the average of the R, G, and B values separately, using all pixels of images 1~800. Please refer to our code: NTIRE2017/code/tools/stat_DIV2K.lua
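The per-channel statistic described above can be sketched as follows. This is a minimal Python/NumPy illustration, not the repo's Lua code (`stat_DIV2K.lua`); the function name `channel_means` and the toy arrays are made up for the example. Sums and pixel counts are accumulated per image so that images of different sizes can be mixed.

```python
import numpy as np

def channel_means(images):
    """Average each of the R, G, B channels over every pixel of every image.

    `images` is a list of H x W x 3 arrays (sizes may differ, so we
    accumulate channel sums and pixel counts instead of stacking).
    """
    sums = np.zeros(3, dtype=np.float64)
    count = 0
    for img in images:
        sums += img.reshape(-1, 3).sum(axis=0)
        count += img.shape[0] * img.shape[1]
    return sums / count

# Two toy "images" with constant colors: the result is the
# pixel-weighted average, not the average of the two image means.
a = np.full((2, 2, 3), 10.0)   # 4 pixels of (10, 10, 10)
b = np.full((2, 4, 3), 40.0)   # 8 pixels of (40, 40, 40)
print(channel_means([a, b]))   # → [30. 30. 30.]
```

The resulting three numbers are what gets subtracted from the input at the start of the network and added back at the end.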
I'd like to take advantage of this thread to ask an additional question. When a model in the multiscale setup is swapped during test/evaluation, the method does the following:
Are the fourth and fifth layers of the sequential container the branched upscaling layer and the convolution layers in the ConcatTable? And what exactly are the inputs of the ParallelTable at the starting branch (the deblurring layer)? I can't quite grasp it.
Hello. Actually, we used many tricks for MDSR, and they made our code a bit messy.

First, the internal structure of the model with length 5 (#model == 5, multiscale.lua) looks like below:

(1): Mean subtraction (same for all scales)

The length-6 model (#model == 6, multiscale_unknown.lua) additionally contains a deblur module (held in a ParallelTable) between (2) and (3).

Here, we used ConcatTable and ParallelTable only to store the multiple modules that should be swapped between the different scales, so inputs to those containers are not necessary. During training and test time, we select the appropriate scale-specific modules from those containers (as you can see in get(index)) and build a sequential model corresponding to that scale. (I think the ParallelTable for the mean-addition module is redundant because it is not scale-specific. Sorry for the confusion.)

Also, there may have been a mistake when we re-arranged our pre-trained modules, so the current MDSR link (length 5) is slightly different from the model in our paper (length 6; the performance is not that different). If you need the length-6 MDSR that contains the deblur module, please use this link to get it: https://drive.google.com/open?id=0B_riD-4WK4WwOE9PV0RfN3M2QXM

Thank you.
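The swap described above can be sketched in plain Python. This is not the repo's Lua/Torch code: the dictionary of `upsamplers` stands in for the ConcatTable of scale-specific modules, `build_for_scale` stands in for the get(index)-based selection, and the lambda "layers" are toy placeholders for real network stages.

```python
# Toy stand-ins for the Torch containers: scale-specific modules are
# stored side by side (like a ConcatTable) and one is picked per scale.
shared_head = lambda x: x - 0.5            # e.g. mean subtraction (shared)
upsamplers = {2: lambda x: x * 2,          # hypothetical x2 upsampler
              3: lambda x: x * 3,          # hypothetical x3 upsampler
              4: lambda x: x * 4}          # hypothetical x4 upsampler
shared_tail = lambda x: x + 0.5            # e.g. mean addition (shared)

def build_for_scale(scale):
    """Assemble a plain sequential pipeline for one scale by selecting
    the matching module from the multi-scale container."""
    stages = [shared_head, upsamplers[scale], shared_tail]
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

model_x2 = build_for_scale(2)
print(model_x2(1.5))   # (1.5 - 0.5) * 2 + 0.5 = 2.5
```

The point is that the container itself is never forwarded through; it only stores the candidates, and the model actually executed at any time is an ordinary sequential chain for a single scale.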
Thanks for the quick and detailed response :) Another doubt: do the parameters given to train (training.sh) and the default parameters (opts.lua) force bicubic degradation for the low-resolution images even when the multiscale_unknown model is selected? I ask because I can't see any modification of the 'degrade' parameter in the multiscale_unknown training, and it's used when the dataset paths are prepared.
In the NTIRE2017 challenge, we used different models for multiscale-bicubic and multiscale-unknown. (See http://personal.ie.cuhk.edu.hk/~ccloy/files/cvprw_2017_ntire.pdf for more details.) However, the multiscale-unknown model is more general because it contains a scale-specific pre-processing (deblur) module. Therefore, we now also use multiscale_unknown for bicubic downsampling.
Ok, thanks! :)
Hi,
I was wondering how you computed the dataset mean that you subtract/add at the start/end of training. Did you sum the red/green/blue components over all pixels in all images?