About triplet loss convergence #6
Thanks for your nice work on the TensorFlow triplet loss; I can clearly understand the theory and the implementation. tensorflow-triplet-loss uses the batch-all strategy by default to train on MNIST, and I find that the loss value oscillates near the margin 0.5.
With the batch-all strategy, since we only average the loss over the semi-hard and hard triplets (the ones with a strictly positive loss), it's totally normal that the loss doesn't decrease.
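As a rough sketch (not the repo's exact code), the batch-all averaging over only the positive-loss triplets can be written like this:

```python
import numpy as np

def batch_all_triplet_loss(embeddings, labels, margin=0.5):
    """Simplified NumPy sketch of the batch-all strategy: build every valid
    (anchor, positive, negative) triplet, then average the loss over only
    the triplets whose loss is strictly positive."""
    # Pairwise squared Euclidean distances between all embeddings.
    dots = embeddings @ embeddings.T
    sq = np.diag(dots)
    dist = np.maximum(sq[:, None] - 2.0 * dots + sq[None, :], 0.0)

    losses = []
    n = len(labels)
    for a in range(n):
        for p in range(n):
            for neg in range(n):
                # Valid triplet: distinct anchor/positive with the same label,
                # and a negative from a different class.
                if a != p and labels[a] == labels[p] and labels[a] != labels[neg]:
                    losses.append(max(dist[a, p] - dist[a, neg] + margin, 0.0))

    positive = [l for l in losses if l > 0.0]
    # Averaging over only the "active" triplets is what keeps the reported
    # loss hovering near the margin even while training progresses.
    return sum(positive) / max(len(positive), 1)
```

Note that if all embeddings collapse to a single point, every distance is 0 and this function returns exactly the margin, which matches the stuck-at-0.5 behavior you observed.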
However, if the loss gets stuck at exactly the margin (0.5 here), it usually means the embeddings have collapsed to a single point: all pairwise distances are then 0, so every triplet's loss equals the margin.
In your case I'm not sure exactly what happens; maybe try decreasing the learning rate even further and see if you can get out of this collapsing behavior (with the batch-all strategy)?
Also, my code currently only works well because MNIST has only 10 classes. The model receives batches of random images; with 64 images per batch and 10 classes, there are plenty of triplets to work with. In your case, however, you have 3,000 classes, so there is only a small probability of getting informative triplets in a batch of size 64.
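To make that concrete, here is a back-of-the-envelope estimate (assuming class labels are drawn uniformly at random) of how many same-class anchor-positive pairs a random batch contains:

```python
def expected_positive_pairs(batch_size, num_classes):
    """With num_classes classes sampled uniformly and batch_size images,
    the expected number of same-class (anchor, positive) pairs is roughly
    batch_size * (batch_size - 1) / 2 * (1 / num_classes)."""
    return batch_size * (batch_size - 1) / 2 / num_classes

print(expected_positive_pairs(64, 10))    # MNIST: ~200 pairs per batch
print(expected_positive_pairs(64, 3000))  # ~0.67 pairs per batch on average
```

With fewer than one positive pair expected per batch, most batches produce no valid triplets at all.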
The solution is to create your own batches (for instance 16 different classes with 4 images each for a total batch size of 64) and feed them to the model.
Edit: adding issue to track progress on this in #7