Hi,
I saw this in an article about FaceNet: https://blog.csdn.net/baidu_27643275/article/details/79222206
They select all the positives, but from the negatives that satisfy the criterion, they pick one at random from that set instead of only the hardest negative.
This feels like a more representative way of picking samples than always taking the hardest one; a rough sketch of what I mean is below.
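(Just a sketch in numpy; it assumes a precomputed pairwise distance matrix for the mini-batch, and the function and variable names are mine, not from the FaceNet code or the article.)

```python
import numpy as np

def select_triplets(dists, labels, alpha=0.2, rng=np.random.default_rng()):
    """Keep all anchor-positive pairs; pick a random qualifying negative.

    dists:  (N, N) array of pairwise embedding distances in a mini-batch
    labels: (N,) array of identity labels
    """
    triplets = []
    n = len(labels)
    for a in range(n):
        for p in range(n):
            if p == a or labels[p] != labels[a]:
                continue  # every anchor-positive pair is used
            neg_mask = labels != labels[a]
            # negatives that still violate the margin: d(a, n) < d(a, p) + alpha
            candidates = np.where(neg_mask & (dists[a] < dists[a, p] + alpha))[0]
            if len(candidates) == 0:
                continue
            # picked at random from the qualifying set, NOT argmin (hardest)
            neg = rng.choice(candidates)
            triplets.append((a, p, neg))
    return triplets
```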
Any views on this?
Thanks!
Hi,
I think that, in the early stage of training, it is crucial to select negative samples randomly: random negatives feed the model a greater variety of samples, which helps generalization, and it is very hard for an untrained model to learn the "hard cases" directly at that early stage.
When the performance of the model stops improving, we consider switching to hard-sample selection. By then almost all of the data has been learned effectively except for a few hard examples: for most samples the anchor-positive similarity (s_ap) is already greater than the anchor-negative similarity (s_an), so the loss = max(0, s_an - s_ap + alpha) ≈ 0.
Those samples can no longer train the model effectively, so we should choose the samples that still give loss > 0.
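Roughly, the switch could look like this (just a sketch in numpy, similarity-based as above; the names, the alpha value, and the hard_only flag are placeholders I made up, not code from FaceNet or the article):

```python
import numpy as np

def pick_triplets(sims, labels, alpha=0.2, hard_only=False,
                  rng=np.random.default_rng()):
    """Sketch of the two stages: random negatives early, loss > 0 only later.

    sims:   (N, N) array of pairwise cosine similarities in a mini-batch
    labels: (N,) array of identity labels
    """
    triplets = []
    n = len(labels)
    for a in range(n):
        for p in range(n):
            if p == a or labels[p] != labels[a]:
                continue  # keep every anchor-positive pair
            negs = np.where(labels != labels[a])[0]
            if len(negs) == 0:
                continue
            if not hard_only:
                # early training: any negative at random, for more variety
                neg = rng.choice(negs)
            else:
                # later: loss = max(0, s_an - s_ap + alpha); keep only
                # negatives that still violate the margin (loss > 0)
                losses = np.maximum(0.0, sims[a, negs] - sims[a, p] + alpha)
                active = negs[losses > 0]
                if len(active) == 0:
                    continue  # this pair is already learned, skip it
                neg = rng.choice(active)
            triplets.append((a, p, neg))
    return triplets
```

The hard_only flag is just a way to flip between the two stages once training plateaus.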