About construct_gt_score_maps #59

Open
lzw19951010 opened this issue Sep 10, 2018 · 2 comments

lzw19951010 commented Sep 10, 2018

Hi, bilylee, thank you for your great work!

I've noticed a slight difference between your implementation and Luca's, and I'm not sure whether it is reasonable. It would be great if you could enlighten me on this.

Luca's implementation:

            if dist_from_origin <= rPos
                logloss_label(i,j) = +1;
            else
                if dist_from_origin <= rNeg
                    logloss_label(i,j) = 0;
                else
                    logloss_label(i,j) = -1;
                end
            end

Yours:

y = tf.cast(tf.range(0, ho), dtype=tf.float32) - get_center(ho)
x = tf.cast(tf.range(0, wo), dtype=tf.float32) - get_center(wo)
[Y, X] = tf.meshgrid(y, x)
dist_to_center = tf.abs(X) + tf.abs(Y)  # Block metric
Z = tf.where(dist_to_center <= rPos,
             tf.ones_like(X),
             tf.where(dist_to_center < rNeg,
                      0.5 * tf.ones_like(X),
                      tf.zeros_like(X)))

Both use balanced weights, but with your default settings you only generate labels of 1/0.5/0, while Luca's implementation generates 1/0/-1 depending on the radii. I'm not sure whether this is different by design or whether I'm simply mistaken.
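
For concreteness, here is a minimal NumPy sketch of the two labeling schemes side by side; the map size and the radii (rPos = 2, rNeg = 4) are made up for illustration, and the map center is taken to be (size - 1) / 2:

import numpy as np

ho = wo = 7          # hypothetical score map size
rPos, rNeg = 2, 4    # hypothetical radii, in score map pixels

y = np.arange(ho) - (ho - 1) / 2.0
x = np.arange(wo) - (wo - 1) / 2.0
X, Y = np.meshgrid(x, y)
dist = np.abs(X) + np.abs(Y)  # block metric, as in both implementations

# bilylee-style labels for sigmoid cross-entropy: 1 / 0.5 / 0
z_sigmoid = np.where(dist <= rPos, 1.0, np.where(dist < rNeg, 0.5, 0.0))

# Luca-style labels for the logistic loss: +1 / 0 / -1
z_logistic = np.where(dist <= rPos, 1.0, np.where(dist <= rNeg, 0.0, -1.0))

print(z_sigmoid)
print(z_logistic)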


bilylee (Owner) commented Dec 11, 2018

Hi,

I use tf.nn.sigmoid_cross_entropy_with_logits to compute the loss, while the original MatConvNet version uses something like tf.nn.softmax_cross_entropy_with_logits.

For tf.nn.sigmoid_cross_entropy_with_logits, the label is the probability that the position is the target: 1 for target, 0 for background, and 0.5 for uncertain. For tf.nn.softmax_cross_entropy_with_logits, the label is the category the position belongs to: +1 for target, -1 for background, and 0 for uncertain.
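
For the definite labels the two losses coincide exactly: the sigmoid cross-entropy with a {0, 1} label equals the logistic loss log(1 + exp(-y * v)) with the matching {-1, +1} label, while the 0.5 label simply acts as an "uncertain" target. A quick numerical check (a minimal TensorFlow 2 sketch with made-up logits):

import numpy as np
import tensorflow as tf

v = tf.constant([-2.0, -0.5, 0.0, 0.7, 3.0])  # hypothetical logits
y01 = tf.constant([0.0, 1.0, 0.0, 1.0, 1.0])  # sigmoid-style labels in {0, 1}
ypm = 2.0 * y01 - 1.0                         # same labels mapped to {-1, +1}

sig_ce = tf.nn.sigmoid_cross_entropy_with_logits(labels=y01, logits=v)
logistic = tf.math.log(1.0 + tf.exp(-ypm * v))  # logistic loss on {-1, +1} labels

print(np.allclose(sig_ce.numpy(), logistic.numpy()))  # True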

Please convince yourself these two implementations are identical.

#16 also discussed this issue in Chinese.


lzw19951010 commented Dec 12, 2018

Cool, I did not take the loss function into account. It seems to me that the two implementations optimize the score map towards similar targets. However, I still cannot convince myself that they are exactly identical, because in softmax cross-entropy the scores of different classes are not independent, while under the sigmoid they are. A solid conclusion can probably only be derived from their backpropagation formulas.
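
One way to probe this without a full derivation is to compare gradients numerically. Here is a rough sketch (TensorFlow 2, made-up values) showing that the sigmoid loss has the per-position gradient sigmoid(v) - y, independent of every other position, while a softmax over the whole map couples all positions:

import tensorflow as tf

v = tf.Variable([[1.0, -0.5], [0.2, 2.0]])   # hypothetical 2x2 score map
y = tf.constant([[1.0, 0.0], [0.5, 0.0]])    # sigmoid-style labels

with tf.GradientTape() as tape:
    loss = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=v))
g_sig = tape.gradient(loss, v)
print(g_sig)                 # equals sigmoid(v) - y, elementwise
print(tf.sigmoid(v) - y)

with tf.GradientTape() as tape:
    p = tf.nn.softmax(tf.reshape(v, [-1]))   # softmax couples every position
    loss = -tf.reduce_sum(tf.reshape(y, [-1]) * tf.math.log(p))
g_soft = tape.gradient(loss, v)
print(g_soft)                # each entry depends on all of v, not just its own logit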

Thank you for your code and your patience!
