
Triplet loss #3663

Open · wants to merge 2 commits into master
Conversation

eli-osherovich

This PR implements the triplet loss layer.
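In its common formulation (this implementation may differ in details such as the distance metric or normalization), the triplet loss over an anchor a, a positive p, and a negative n with margin \alpha is

L(a, p, n) = \max(0, \|f(a) - f(p)\|_2^2 - \|f(a) - f(n)\|_2^2 + \alpha)

where f is the embedding computed by the network: the loss pushes the positive to be closer to the anchor than the negative by at least the margin.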

Eli Osherovich added 2 commits February 11, 2016 09:07
Signed-off-by: Eli Osherovich <oeli@amazon.com>
Signed-off-by: Eli Osherovich <oeli@amazon.com>
@cbalint13
Contributor

@shaibagon
Member

I see there is very little documentation in the code on how this loss works. Anyone who wishes to understand it better might find this SO thread useful.

@eli-osherovich, thank you for implementing this loss.

@eli-osherovich
Author

@shelhamer Is there any interest in this PR? If so, I can add documentation and examples.

@afshindn

@eli-osherovich Are you planning on adding the examples and documentation?

@BeSlower

Could you add an example and more documentation? Your code is very helpful to me, but it is a bit hard to work through everything.

@jibweb

jibweb commented Oct 18, 2016

@eli-osherovich Nice code. One question, though (maybe I'm missing something obvious): why does line 65 of triplet_loss_layer.cpp read

const Dtype scale = top[0]->cpu_diff()[0] / bottom[0]->num();

If I understood correctly, top[0]->cpu_diff()[0] would be the loss value, so why use it rather than just 2, as in the derivation?

@kwin-wang

kwin-wang commented Nov 29, 2016

@jibweb top[0]->cpu_diff()[0] is equal to the loss_weight of the loss layer in the training prototxt file; the default value is 1. This is useful for multi-task learning.
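For illustration, here is how loss_weight appears in a prototxt, sketched with the standard SoftmaxWithLoss layer and made-up blob names rather than this PR's layer:

layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8"
  bottom: "label"
  top: "loss"
  loss_weight: 0.5  # Net::Backward seeds top[0]->cpu_diff()[0] with this value
}

With loss_weight: 0.5, top[0]->cpu_diff()[0] is 0.5 during the backward pass, so this loss's gradients are halved relative to other losses in the net; omitting the field leaves the default of 1.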

@PeterouZh

Hi, I want to know why I can't find the triplet_loss_layer files in Caffe's code now. I'd be very grateful if anyone can help me.

@shaibagon
Member

@pengzhou93 You cannot find the files because this PR has not been merged yet. If you want to use it (at your own risk), you can pull this branch.
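For anyone unsure how to do that: GitHub exposes every pull request as a ref, so something like the following should work (assuming origin points at the main Caffe repo; the local branch name triplet-loss is arbitrary):

git fetch origin pull/3663/head:triplet-loss
git checkout triplet-loss

Then rebuild Caffe as usual.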

@soulslicer

How does this differ from #5019?

This one looks more extensive.
