Signed square root #4
Comments
It is used to prevent sqrt(0). This kind of trick is also used in other bilinear CNN implementations.
I understand that we are adding 1e-5 to prevent sqrt(0), but in the paper the authors use the signed square root. The TensorFlow implementation at https://github.com/abhaydoke09/Bilinear-CNN-TensorFlow/blob/master/core/bcnn_finetuning.py also takes the signed square root.
Please refer to issue #1 for why sqrt rather than signed sqrt is used in this implementation.
Thanks, that makes a lot of sense.
Hi Hao,
First of all, thanks for the excellent implementation. I have used the code here as a reference for my own implementations.
In the original paper (http://vis-www.cs.umass.edu/bcnn/docs/bcnn_iccv15.pdf) the authors have used signed square root operation. Something like:
X = torch.mul(torch.sign(X), torch.sqrt(torch.abs(X) + 1e-5))
instead of the normal square root you have used
X = torch.sqrt(X + 1e-5)
Was there a particular reason for using this?
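For reference, here is a minimal sketch contrasting the two normalizations being discussed. The function names `signed_sqrt` and `plain_sqrt` are my own labels for illustration, not names from this repository; the key behavioral difference is that the signed variant preserves the sign of negative entries, while the plain variant assumes its input is non-negative.

```python
import torch

def signed_sqrt(x, eps=1e-5):
    # Signed square root from the B-CNN paper: keeps the sign of each
    # entry while compressing its magnitude. Note torch.sign(0.) == 0,
    # so zeros map to zero despite the eps inside the sqrt.
    return torch.sign(x) * torch.sqrt(torch.abs(x) + eps)

def plain_sqrt(x, eps=1e-5):
    # Variant used in this implementation: valid when the bilinear
    # features are non-negative; eps avoids sqrt(0) (zero gradient issues).
    return torch.sqrt(x + eps)

x = torch.tensor([4.0, 0.0, -4.0])
print(signed_sqrt(x))  # negative entries keep their sign
print(plain_sqrt(torch.tensor([4.0, 0.0])))  # non-negative inputs only
```

On non-negative inputs the two agree (up to the eps handling at exactly zero), which is why the choice only matters when the pooled bilinear features can be negative.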