For Mutual Learning, is the N×N distance matrix computed from local distances? #22
Hi, do you know what the "zero gradient" in Eq. (3) of the paper means? In your code in train_ml.py, line 519, Global Distance Mutual Loss (L2 Loss),
I can't understand `g_dist_mat` and `TVT(g_dist_mat_list[j]).detach()`.
Hi, yes,
Could you explain it again in Chinese? What is the zero gradient?
My understanding of "zero gradient" is that gradient propagation stops at that point, so here it can be implemented with the Variable's `detach` method.
Does this mean the mutual loss does not backpropagate, and only the metric loss does?
@Phoebe-star Hi, I'm the first author of the paper. "Zero grad" means treating that variable as a constant. The original paper used Megvii's framework; PyTorch doesn't have that operator, but it can be implemented with `detach`.
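To illustrate the point above, here is a minimal sketch (not the repository's actual code; the tensor names are made up for illustration) of a mutual L2 loss where the peer model's distance matrix is detached, so it behaves as a constant and receives no gradient:

```python
import torch

# Hypothetical N x N distance matrices from two peer models.
g_dist_mat_a = torch.rand(4, 4, requires_grad=True)  # current model's distances
g_dist_mat_b = torch.rand(4, 4, requires_grad=True)  # peer model's distances

# "Zero grad": detach() treats the peer's matrix as a constant, so
# backpropagation only flows into the current model's matrix.
mutual_loss = ((g_dist_mat_a - g_dist_mat_b.detach()) ** 2).mean()
mutual_loss.backward()

print(g_dist_mat_a.grad is not None)  # True: gradient reaches the current model
print(g_dist_mat_b.grad is None)      # True: the detached peer gets no gradient
```

Each model computes such a loss against the other's detached matrix, so both still learn, but neither loss pushes gradients into its peer.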
Looking at your code, it seems you use local distances, while the original paper uses global ones... I'm a bit confused.