Why do the loss functions have a dot_loss? #11
Comments
It is a method to avoid numerical error in exp. When the dot product of the hash codes is too large, exp(dot product) may overflow. Thus, we use limit theory to compute an estimate of that term; this estimate is the dot_loss.
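The overflow and the limit estimate described above can be illustrated with a small sketch (illustrative only, not the repository's code; it assumes the loss contains a log(1 + exp(x)) term, as in the paper's likelihood):

```python
import math

def softplus_naive(x):
    # log(1 + exp(x)): overflows for large x, because exp(x) does
    return math.log(1.0 + math.exp(x))

def softplus_stable(x, threshold=15.0):
    # For x above the threshold, exp(x) dominates the 1, so
    # log(1 + exp(x)) ~ x; this limit is what dot_loss uses.
    if x > threshold:
        return x                            # dot_loss branch: limit estimate
    return math.log(1.0 + math.exp(x))      # exp_loss branch: exact value

# exp(800) overflows a double, but the limit estimate is fine
try:
    softplus_naive(800.0)
except OverflowError:
    print("naive version overflowed")
print(softplus_stable(800.0))   # 800.0
print(softplus_stable(1.0))     # log(1 + e), about 1.3133
```

At x = 15 the two branches already agree to within log(1 + exp(-15)), roughly 3e-7, so the switch introduces negligible error.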
YoumingDeng <notifications@github.com> wrote on Tue, Jul 3, 2018, 10:27 AM:
… I have read the paper; the loss is estimated by weighted maximum likelihood, but I can't understand the additional dot_loss in the implementation.
In the paper:
[image: image]
<https://user-images.githubusercontent.com/19166992/42195448-7813989a-7eab-11e8-829a-8f277c9671e2.png>
Corresponding to it is exp_loss, but in the code the total loss = exp_loss + dot_loss.
Thank you
@caozhangjie There is also one question about the loss function implemented on the PyTorch platform. Recently, I found that the "l_threshold" parameter does not take effect at all with the default parameter settings in the training script. Line 10 in 591f134
I think the variable "mask_dot" will be a matrix of all zeros, because "sigmoid_param" is set to "10./config["hash_bit"]" and "l_threshold" is set to 15. The maximum value of "dot_product" will then be smaller than 10, and thus "mask_dot" will be all zeros; in this way, dot_loss will never be used. Can you explain the function of "l_threshold"? And why is it set to 15? Thanks in advance.
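The bound claimed in this comment can be checked numerically. The sketch below uses a hypothetical 48-bit code and assumes the codes come from tanh and therefore lie in [-1, 1], which is what the argument requires:

```python
import numpy as np

hash_bit = 48
sigmoid_param = 10.0 / hash_bit   # the default from the training script
l_threshold = 15.0

# With tanh activations every coordinate is in [-1, 1], so the raw
# inner product of two codes is at most hash_bit; take the extreme case.
u, v = np.ones(hash_bit), np.ones(hash_bit)
dot_product = sigmoid_param * np.dot(u, v)   # about 10.0, the maximum possible

mask_dot = dot_product > l_threshold
print(dot_product)   # approximately 10.0, below l_threshold
print(mask_dot)      # False: the dot_loss branch is never selected
```

Since 10 < 15, mask_dot is indeed always false under these defaults, which is exactly the commenter's point.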
It is a threshold for the dot product. When the dot product is too large, the calculation in Caffe produces a numerical error (infinity), and the PyTorch code was implemented by imitating the Caffe code. Thus, we use a threshold: when the dot product is larger than the threshold, we estimate the loss by its limit.
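A hedged sketch of the masked split this reply describes, written in NumPy rather than copied from the repository's actual loss.py (the names mask_dot, exp_loss, and dot_loss mirror the ones discussed in this thread):

```python
import numpy as np

def masked_softplus(x, l_threshold=15.0):
    # exp_loss: exact log(1 + exp(x)) where x is small enough to be safe;
    # dot_loss: the limit estimate x where exp(x) could overflow.
    x = np.asarray(x, dtype=np.float64)
    mask_dot = x > l_threshold
    safe_x = np.where(mask_dot, 0.0, x)   # keep exp() off the large entries
    exp_loss = np.where(mask_dot, 0.0, np.log1p(np.exp(safe_x)))
    dot_loss = np.where(mask_dot, x, 0.0)
    return exp_loss + dot_loss            # total loss = exp_loss + dot_loss

vals = np.array([-5.0, 1.0, 14.0, 800.0])
print(masked_softplus(vals))              # finite everywhere, no overflow
```

Only one of the two masked terms is nonzero per entry, so their sum is the stable softplus evaluated elementwise.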
Wu Xiaodong <notifications@github.com> wrote on Fri, Aug 24, 2018, 00:00:
… @caozhangjie <https://github.com/caozhangjie> There is also one question about the loss function implemented on the PyTorch platform.
I am a little confused about the function of the parameter "l_threshold".
I used to think that "l_threshold" decides whether an image pair is penalized: for example, if the distance between a similar (dissimilar) image pair is smaller (larger) than a threshold value, then its loss is not computed.
But recently, I found that the "l_threshold" parameter does not take effect at all with the default parameter settings in the training script:
https://github.com/thuml/HashNet/blob/591f1342c9f1f8b9d0f04a7219cdfab38f6355f7/pytorch/src/loss.py#L10
I think the variable "mask_dot" will be a matrix of all zeros, because "sigmoid_param" is set to "10./config["hash_bit"]" and "l_threshold" is set to 15. The maximum value of "dot_product" will then be smaller than 10, and thus "mask_dot" will be all zeros; in this way, dot_loss will never be used.
Can you explain the function of "l_threshold"? And why is it set to 15?
Thanks in advance.
Thank you very much. I got that.