Commit to enable true fully convolutional application of network #684

merged 1 commit into tensorflow:master on Apr 10, 2017



warmspringwinds commented Nov 24, 2016


The original implementation of the VGG models has the fully connected layers implemented as convolutions.

This gives predictable results when the network is used for classification.

It is also possible to apply the network in a fully convolutional manner, as described in the paper
"Fully Convolutional Networks for Semantic Segmentation". This part also works, except for one small detail:
the padding for the fc layers that are implemented as convolutions is specified as 'VALID'.
This gives a downsampled output of (input / 32) - 6. To apply the network as described
in the paper, the output should be (input / 32). This can be achieved with 'SAME' padding,
which makes it possible to upsample the output by 32 and get predictions of the same size as the input image.
Here are examples where I have applied the VGG network in a fully convolutional manner,
producing a prediction map for the 1000 ImageNet classes:

[Image: prediction map with VALID padding]

[Image: prediction map with SAME padding]

You can see that in the second picture the image was only downsampled by 32, while in the first one
we also lost 6 pixels.

I suggest adding one more argument to the model definitions so users can switch between these two
padding types depending on what they want to do.
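To make the size arithmetic above concrete, here is a minimal sketch in plain Python (no TensorFlow; the function and argument names are illustrative, not part of the library) of how the two padding choices change the logits grid when fc6 is implemented as a stride-1 7x7 convolution after five 2x2 max-pools:

```python
def vgg_logits_grid(input_size, fc_padding="VALID"):
    """Spatial size of the fully convolutional VGG logits.

    Assumes five 2x2 max-pools (downsample by 32) followed by
    fc6 implemented as a stride-1 7x7 convolution.
    """
    size = input_size // 32  # after the five pooling layers
    if fc_padding == "VALID":
        return size - 7 + 1  # a 7x7 VALID conv loses 6 pixels
    elif fc_padding == "SAME":
        return size          # SAME padding preserves the grid
    raise ValueError("fc_padding must be 'VALID' or 'SAME'")

# For a 512x512 input: 512 / 32 = 16
print(vgg_logits_grid(512, "VALID"))  # 10, i.e. (input / 32) - 6
print(vgg_logits_grid(512, "SAME"))   # 16, i.e. input / 32
```

With 'SAME' padding a 512x512 input yields a 16x16 map that upsamples by 32 back to 512x512; with 'VALID' you get 10x10, i.e. a 320x320 prediction that no longer matches the input.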




warmspringwinds commented Jan 19, 2017

Hey, @sguada @nathansilberman.

Could you guys give me some comments, please?



newnold commented Feb 28, 2017

Hi! I'm a beginner with Python and TensorFlow. When I ran your program with these changes, however, I ran into the following problem:

Caused by op 'SoftmaxCrossEntropyWithLogits', defined at:
File "", line 165, in
File "C:\Users\liu45\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\ops\", line 1449, in softmax_cross_entropy_with_logits
precise_logits, labels, name=name)
File "C:\Users\liu45\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\ops\", line 2265, in _softmax_cross_entropy_with_logits
features=features, labels=labels, name=name)
File "C:\Users\liu45\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\framework\", line 759, in apply_op
File "C:\Users\liu45\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\framework\", line 2240, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Users\liu45\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\framework\", line 1128, in __init__
self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[168960,2] labels_size=[183000,2]
[[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_2, Reshape_3)]]

I'm afraid I need your help. Thank you!
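For what it's worth, the two sizes in the error are consistent with an input whose dimensions are not multiples of 32. This is only a guess from the numbers, but 183000 = 500 x 366 label pixels, while 168960 = 480 x 352, which is each dimension floored to a multiple of 32. A hedged sketch of that arithmetic (the 500x366 input is a hypothetical that happens to reproduce both numbers):

```python
# Hypothetical reconstruction of the size mismatch in the error message.
# Assumption: a 500x366 input image, downsampled by 32 and upsampled back.
h, w = 500, 366

labels_pixels = h * w                              # one label per input pixel
logits_pixels = ((h // 32) * 32) * ((w // 32) * 32)  # grid floored to a
                                                     # multiple of 32
print(labels_pixels, logits_pixels)  # 183000 168960
```

If this reading is right, then besides the padding change you would also need to crop or pad the input (or the labels) to a multiple of 32 before the two sizes line up.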




ahundt commented Mar 12, 2017

@sguada @nathansilberman Could this pull request be reviewed?




ahundt commented Apr 5, 2017

@jhseu or @nealwu could you review then merge #684? I noticed you merged other slim pull requests and #684 is both small and very useful, but has been outstanding for 5 months.




nealwu commented Apr 10, 2017

Sure, looks good to me.

@nealwu nealwu merged commit 9681f3f into tensorflow:master Apr 10, 2017

1 check passed

cla/google All necessary CLAs are signed