According to my understanding of the paper, for the two input images to support cross-input neighborhood differences, the first two convolution layers must be tied, i.e. their trained filters must be shared across both inputs. Could you confirm whether the tied convolution is implemented correctly? From my reading, your code uses entirely different filters for the two images.
@LDHo
I think we can replace

```python
conv2_1 = tf.layers.conv2d(images2, 20, [5, 5], activation=tf.nn.relu,
                           kernel_regularizer=tf.contrib.layers.l2_regularizer(weight_decay),
                           name='conv2_1')
```

with

```python
conv2_1 = tf.layers.conv2d(images2, 20, [5, 5], activation=tf.nn.relu,
                           kernel_regularizer=tf.contrib.layers.l2_regularizer(weight_decay),
                           name='conv1_1', reuse=True)
```

so that the layer reuses the variables already created under the name `conv1_1` for the first image.
The other conv layers can share their weights and biases in the same way.
In fact, we could try both approaches and compare the results to see which works better.
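To make the point concrete outside of TensorFlow, here is a minimal NumPy sketch (not the repo's actual code; `conv2d_valid` and all variable names are illustrative) showing why tied filters matter: when the same kernel is applied to both inputs, the two feature maps live in the same feature space, which is what makes an element-wise cross-input difference meaningful.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
kernel = rng.standard_normal((5, 5))   # one shared ("tied") filter
img1 = rng.standard_normal((12, 12))   # toy stand-ins for the two input images
img2 = rng.standard_normal((12, 12))

# Tied convolution: the SAME kernel is applied to both inputs
# (this is what reuse=True achieves in the TF 1.x code above) ...
f1 = conv2d_valid(img1, kernel)
f2 = conv2d_valid(img2, kernel)

# ... so an element-wise cross-input difference compares like with like.
diff = f1 - f2
```

With untied layers, `f1` and `f2` would come from unrelated filters, and `diff` would mix incomparable responses; with tied filters, identical inputs give a zero difference map, as the paper's cross-input neighborhood difference layer assumes.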