Output size discriminator #49
This is because the (default) discriminator is a "PatchGAN" (Section 2.2.2 in the paper). This discriminator slides across the generated image, convolutionally, trying to classify whether each overlapping 70x70 patch is real or fake. This results in a 30x30 grid of classifier outputs, each corresponding to a different patch in the generated image.
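For concreteness, here is a minimal PyTorch sketch of that default 70x70 discriminator (the repo itself is Torch/Lua, and the BatchNorm layers are omitted here for brevity), showing how a 256x256 input produces a 30x30 grid of patch scores:

```python
import torch
import torch.nn as nn

def block(in_ch, out_ch, stride):
    # 4x4 conv + LeakyReLU, as in the paper's C64-C128-C256-C512 discriminator
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=stride, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
    )

# Note: the conditional discriminator in pix2pix actually sees the input and
# output images concatenated (6 channels); 3 channels are used here for simplicity.
patch_gan = nn.Sequential(
    block(3, 64, stride=2),     # 256 -> 128
    block(64, 128, stride=2),   # 128 -> 64
    block(128, 256, stride=2),  # 64  -> 32
    block(256, 512, stride=1),  # 32  -> 31
    nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1),  # 31 -> 30
)

x = torch.randn(1, 3, 256, 256)
print(patch_gan(x).shape)  # torch.Size([1, 1, 30, 30]): one score per 70x70 patch
```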
@phillipi If I want to classify whether each overlapping smaller patch, e.g. 16x16, is real or fake, what size grid of classifier outputs should this produce? Is there a formula or any hints you can provide?
You can use this script to determine the receptive field (e.g., 16x16) of a given architecture. I'm not sure what the formula would be for calculating the output grid size given a desired receptive field. For a given architecture, you can always check the output size by running the discriminator on an image and inspecting the size of the result.
@phillipi Thank you for your quick response. I am not familiar with MATLAB. Would you please be more specific about how I can get the output with the script you provided? Thank you so much.
That script gives you the receptive field of a neuron. The equation to compute the input receptive field size from a given output receptive field size is (for a single convolutional layer): `input_size = (output_size - 1) * stride + ksize`. You can apply this recursively to compute the receptive field sizes across multiple layers.
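As a sketch of that recursion (the `(ksize, stride)` pairs below are the default 70x70 PatchGAN's five 4x4 convolutions, with strides 2, 2, 2, 1, 1):

```python
def receptive_field(layers, r_out=1):
    """Walk backwards through (ksize, stride) pairs, ordered input -> output,
    applying r_in = (r_out - 1) * stride + ksize at each layer."""
    r = r_out
    for ksize, stride in reversed(layers):
        r = (r - 1) * stride + ksize
    return r

# One output neuron of the default PatchGAN sees a 70x70 input patch:
print(receptive_field([(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]))  # -> 70
```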
@phillipi Thank you for your kind explanation. I am sorry that I didn't define my question well. The problem is that when I use the 70x70 PatchGAN as the discriminator in the code, the output is in some cases not as sharp as described in the paper. So I wonder whether the default 70x70 patch size is too large for the discriminator to classify real versus fake. If I want to enhance sharpness, should I enlarge or decrease the patch size? Or do you have any other suggestions?
@kenshinzh what's your input and output? To produce good colorization results, one needs to map the input `L` channel to the output `ab` color channels.
Actually, my experiments run in the B2A direction, which is not color correction; I just took the B/W case for illustration. All the input images are 256x256, as in the demo.
In the colorizations in the paper, we concatenate the predicted `ab` channels with the input `L` channel to form the final image. I don't think there is a simple relationship between discriminator patch size and sharpness. In practice 70x70 usually works pretty well for me, but you could try a few variants to see what works in your application.
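A small NumPy/scikit-image sketch of that assembly step (the function name and array shapes here are illustrative, not the repo's actual code):

```python
import numpy as np
from skimage.color import lab2rgb

def assemble_colorization(L, ab):
    """L: (H, W) lightness in [0, 100]; ab: (H, W, 2) predicted color channels."""
    lab = np.concatenate([L[..., None], ab], axis=-1)  # (H, W, 3) Lab image
    return lab2rgb(lab)                                # (H, W, 3) RGB in [0, 1]
```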
Yep that's right!
…On Sun, Dec 24, 2017 at 5:47 PM, Kv Manohar ***@***.***> wrote:
@phillipi Just wanted to confirm my understanding of PatchGAN. Please point out if there is a mistake. Say the generated image has dimensions HxWx3 and we have a 70x70 PatchGAN discriminator. To determine whether the image is real or fake, a 70x70 patch from the generated image is taken and passed through the discriminator to produce a single scalar. By extracting each such 70x70 patch convolutionally from the generated image and averaging the resulting scalars, I get the final probability of whether the image is real or fake?
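To make the confirmed setup concrete, here is a hedged PyTorch sketch (the repo itself uses Torch/Lua) of how the 30x30 grid of patch scores is reduced to a single training signal; the random tensor stands in for the discriminator output:

```python
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()  # mean-reduces over all 30*30 patch scores

patch_logits = torch.randn(1, 1, 30, 30)  # stand-in for the discriminator output
loss_real = criterion(patch_logits, torch.ones_like(patch_logits))   # target: every patch real
loss_fake = criterion(patch_logits, torch.zeros_like(patch_logits))  # target: every patch fake
```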
Hi,
Why are you using an output size of 1x30x30 for the discriminator, and not just 1x1x1?
Thanks!