add double-margin contrastive loss layer #4476
Open
Migrate and reimplement the double-margin contrastive loss from https://bitbucket.org/jonbakerfish/caffe. For consistency (see #2308), this implementation differs slightly from the above repo. The bitbucket version implements the loss in the following form:
L = y * max(d^2 - margin1, 0) + (1-y) * max(margin2 - d^2, 0)
whereas this PR implements:
L = y * max(d - margin1, 0)^2 + (1-y) * max(margin2 - d, 0)^2
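To make the difference between the two forms concrete, here is a minimal sketch of both losses for a single pair; the function names and the margin values (0.5 and 1.5) are illustrative, not taken from this PR:

```python
def loss_bitbucket(d, y, m1=0.5, m2=1.5):
    # Bitbucket form: margins are applied to the squared distance d^2.
    # L = y * max(d^2 - m1, 0) + (1-y) * max(m2 - d^2, 0)
    return y * max(d**2 - m1, 0.0) + (1 - y) * max(m2 - d**2, 0.0)

def loss_pr(d, y, m1=0.5, m2=1.5):
    # This PR's form: margins are applied to the distance d, then squared.
    # L = y * max(d - m1, 0)^2 + (1-y) * max(m2 - d, 0)^2
    return y * max(d - m1, 0.0) ** 2 + (1 - y) * max(m2 - d, 0.0) ** 2
```

For a similar pair (y=1) at distance d=1.0 with these margins, the bitbucket form gives max(1.0 - 0.5, 0) = 0.5 while this PR's form gives max(1.0 - 0.5, 0)^2 = 0.25, so the two layers are not interchangeable even with identical margin settings.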
This loss has been used in the following works:
[1] Lin et al. DeepHash: Getting Regularization, Depth and Fine-Tuning Right. CoRR, 2015.
[2] Sadeghi et al. VISALOGY: Answering Visual Analogy Questions. In NIPS, 2015.
[3] Cao et al. Quartet-net Learning for Visual Instance Retrieval. In MM, 2016.