About triplet loss convergence #6

Closed
demianzhang opened this issue Apr 28, 2018 · 2 comments
Labels
theory Question on the theory, not on the code.

Comments

@demianzhang

Thanks for your nice work on tensorflow-triplet-loss; both the theory and the implementation are clearly explained. tensorflow-triplet-loss uses the batch all strategy by default to train on MNIST, and I find that the loss value oscillates near the margin of 0.5.
I run into a problem when I use triplet loss to train on another dataset, which contains 3000 clusters grouped by content similarity. Each cluster has several videos, and each video is represented by a 512-dimensional feature vector with values in [0, 1]. The Euclidean distances between two original feature vectors are between 1 and 10.
I use 3 fully connected layers to learn the embedding. In each input batch, the anchor and positive come from the same cluster and the negative comes from a different cluster. However, the triplet loss collapses to the value of the margin after a few steps, and the embeddings of the anchor, positive, and negative become identical. I have tried disabling hard mining, changing the learning rate, etc., but cannot solve it. Could you help me? Many thanks.

@omoindrot added the theory label on Apr 30, 2018
@omoindrot
Owner

omoindrot commented Apr 30, 2018

With the batch all strategy, since we only average the loss over the semi-hard and hard triplets (those with a strictly positive loss), it's totally normal that the loss doesn't decrease.
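As a minimal NumPy sketch (illustrative only, not the repo's actual TensorFlow implementation), the averaging looks roughly like this: easy triplets drop out of the mean, so the remaining active triplets keep the reported value near the margin even as training improves.

```python
import numpy as np

def batch_all_average(pairwise_dist, labels, margin=0.5):
    """Illustrative sketch: average the triplet loss only over the triplets
    whose loss is strictly positive (the semi-hard and hard ones)."""
    n = len(labels)
    losses = []
    for a in range(n):
        for p in range(n):
            for neg in range(n):
                # keep only valid (anchor, positive, negative) triplets
                if a == p or labels[a] != labels[p] or labels[a] == labels[neg]:
                    continue
                loss = pairwise_dist[a, p] - pairwise_dist[a, neg] + margin
                if loss > 0:  # easy triplets are dropped from the average
                    losses.append(loss)
    return np.mean(losses) if losses else 0.0
```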

However, if the loss gets stuck at exactly the margin (0.5), it indicates that all the embeddings have collapsed into a single point. One solution is to reduce the learning rate until training no longer collapses.
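It is easy to see why: with collapsed embeddings every pairwise distance is zero, so each triplet contributes exactly the margin. A tiny check:

```python
import numpy as np

margin = 0.5
embeddings = np.tile([0.3, 0.7], (64, 1))             # 64 identical (collapsed) embeddings
d_ap = np.linalg.norm(embeddings[0] - embeddings[1])  # anchor-positive distance: 0.0
d_an = np.linalg.norm(embeddings[0] - embeddings[2])  # anchor-negative distance: 0.0
print(max(d_ap - d_an + margin, 0.0))                 # 0.5 -> every triplet loss equals the margin
```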

In your case I'm not sure what is happening; maybe try decreasing the learning rate even further and see if you can get out of this collapsing behavior (with the batch all strategy)?

Also, my code currently only works because MNIST has only 10 classes. The model receives batches of random images; with 64 images in a batch and only 10 classes, there are plenty of valid triplets to work with. In your case, however, you have 3000 classes, so there is only a small probability of getting informative triplets in a batch of size 64.

The solution is to build your own batches (for instance 16 different classes with 4 images each, for a total batch size of 64) and feed them to the model; a minimal sampler sketch is shown below.
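As a rough illustration (the helper name and the `labels_to_indices` mapping are hypothetical, not part of this repo), such a balanced "P classes × K images" sampler could look like this:

```python
import numpy as np

def sample_balanced_batch(labels_to_indices, num_classes=16, images_per_class=4, rng=None):
    """Hypothetical helper: draw P=16 classes and K=4 images per class,
    giving a batch of P*K = 64 indices in which every anchor has at least
    K-1 positives and (P-1)*K negatives."""
    rng = rng or np.random.default_rng()
    chosen_classes = rng.choice(list(labels_to_indices), size=num_classes, replace=False)
    batch = []
    for c in chosen_classes:
        idx = labels_to_indices[c]
        # sample with replacement if a class has fewer than K examples
        replace = len(idx) < images_per_class
        batch.extend(rng.choice(idx, size=images_per_class, replace=replace))
    return np.array(batch)
```

Feeding these indices (and their matching labels) to the model, instead of drawing 64 images uniformly at random, guarantees that every image in the batch has positives to pair with.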

Edit: adding issue to track progress on this in #7

@demianzhang
Author

Thanks for your reply.
