
ask reason2 #19

Closed
tingtingzhezhe opened this issue Apr 1, 2019 · 3 comments

Comments

@tingtingzhezhe

Hello! I'm the student who asked a question earlier; I'm very grateful for your answer!
This time I'd like to ask: relative to the implementation in the original Baidu paper, what are your improvements? In other words, what is different from the original? Why can it achieve even better results than the original? What is the theoretical basis for this?

@Walleclipse
Owner

First of all, I can't claim better results than the original paper, because the dataset I used is completely different from the one used in the paper. The original authors might well achieve better results on this dataset.
Similarity: the model and the triplet loss are the same (the model may differ in minor details).
Difference: the method of computing hard negatives is different. The original paper uses a large number of GPUs to pick the best samples from a huge candidate set, whereas I save the historical embedding information and select the best samples from that history each time. This greatly reduces resource consumption, but it may lower accuracy (I'm not sure about this).
For details, please see issue #4 and issue #11.
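
For readers who want a concrete picture of the history-based mining described above, here is a minimal Python sketch of the idea, assuming cosine similarity on L2-normalized embeddings. The `EmbeddingHistory` class and its method names are hypothetical illustrations of the approach, not the actual API of this repository.

```python
import numpy as np

class EmbeddingHistory:
    """Hypothetical cache of recently computed embeddings, keyed by speaker ID.

    Instead of re-embedding a large candidate set on many GPUs, keep the
    embeddings produced during recent training steps and mine hard negatives
    from this history.
    """

    def __init__(self, max_per_speaker=100):
        self.max_per_speaker = max_per_speaker
        self.history = {}  # speaker_id -> list of L2-normalized embeddings

    def update(self, speaker_id, embedding):
        # Store the L2-normalized embedding; drop the oldest entry when full.
        emb = embedding / (np.linalg.norm(embedding) + 1e-12)
        bucket = self.history.setdefault(speaker_id, [])
        bucket.append(emb)
        if len(bucket) > self.max_per_speaker:
            bucket.pop(0)

    def hardest_negative(self, anchor_speaker, anchor_embedding):
        # The hardest negative is the cached embedding from a *different*
        # speaker with the highest cosine similarity to the anchor.
        anchor = anchor_embedding / (np.linalg.norm(anchor_embedding) + 1e-12)
        best_sim, best_emb = -np.inf, None
        for speaker_id, embs in self.history.items():
            if speaker_id == anchor_speaker:
                continue
            sims = np.stack(embs) @ anchor  # cosine similarity of unit vectors
            idx = int(np.argmax(sims))
            if sims[idx] > best_sim:
                best_sim, best_emb = sims[idx], embs[idx]
        return best_emb, best_sim
```

One consequence of this design is that cached embeddings go stale as the network's weights change, so negatives mined from older entries are only approximately "hard"; this is the possible accuracy trade-off mentioned above.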

@tingtingzhezhe
Author

Thank you very much!! I also got good results after experimenting, and I wanted to find the theoretical basis for them. Thanks for your help!

@Hard-working-bee

Why does mine show that it keeps training and never terminates?
