
hard-negative mining #11

Closed
mangushev opened this issue Feb 25, 2019 · 2 comments

@mangushev

Hi,
I saw this in an article about FaceNet:
https://blog.csdn.net/baidu_27643275/article/details/79222206
They select all positives, but from the negatives that satisfy the criterion they pick one at random instead of taking only the hardest negatives.
This feels like a more representative way of picking samples than using only the hardest ones.
Any views on this?
Thanks!
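
For reference, a minimal NumPy sketch of the selection described above (function name, margin value, and distance arrays are illustrative, not taken from FaceNet or this repo): all positives are kept, and the negative is drawn at random from those satisfying the semi-hard criterion rather than always taking the single hardest one.

```python
import numpy as np

def pick_semi_hard_negatives(d_ap, d_an, alpha=0.2, rng=None):
    """For each anchor-positive pair, pick one negative at random from the
    negatives satisfying d_ap < d_an < d_ap + alpha (semi-hard criterion),
    instead of always taking the hardest negative.

    d_ap : (N,)   anchor-positive distances
    d_an : (N, M) anchor-negative distances (M candidate negatives per anchor)
    """
    rng = rng or np.random.default_rng()
    chosen = []
    for i in range(len(d_ap)):
        # negatives farther than the positive but still inside the margin
        mask = (d_an[i] > d_ap[i]) & (d_an[i] < d_ap[i] + alpha)
        candidates = np.flatnonzero(mask)
        if candidates.size == 0:
            # fall back to a random negative if none satisfy the criterion
            candidates = np.arange(d_an.shape[1])
        chosen.append(rng.choice(candidates))
    return np.asarray(chosen)
```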

@Walleclipse
Owner

Hi,
I think that in the early stage of training it is crucial to select negative samples randomly, because:

  1. Random negative samples feed the model more variety, which helps generalization.
  2. It is very hard for an untrained model to learn the "hard cases" directly in the early steps.

When the performance of the model stops improving, we consider selective sampling, because:
by that point almost all of the data has been learned effectively except for a few hard samples. For most samples, the anchor-positive similarity (sap) is greater than the anchor-negative similarity (san), so the loss = max(san - sap + alpha, 0) ≈ 0.
The model can no longer be trained effectively on these, so we choose only the samples for which loss > 0.
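
A minimal sketch of that selection rule (the similarity arrays, margin value, and function name are illustrative, not taken from this repo): keep only the triplets whose loss is still non-zero.

```python
import numpy as np

def select_hard_triplets(sap, san, alpha=0.2):
    """Return indices of triplets that still produce a non-zero loss.

    sap : (N,) anchor-positive similarities
    san : (N,) anchor-negative similarities
    With loss = max(san - sap + alpha, 0), triplets where sap already exceeds
    san by more than alpha contribute ~0 and are dropped.
    """
    loss = np.maximum(san - sap + alpha, 0.0)
    return np.flatnonzero(loss > 0)

# usage idea: start training on randomly sampled negatives; once the loss
# plateaus, feed only the triplets returned by this filter.
```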

@mangushev
Author

Thanks! That clarifies it.
