doubt regarding the final loss #29

Open
ayush1997 opened this issue Feb 14, 2019 · 2 comments
@ayush1997

Hi,
Thank you for the code. I have a question regarding the affinity function used in the reformulated loss (eq. 11). From what I understand, after the triplets are computed by the organize_samples() function, they are fed to criterion_triplet to get the loss.
In TripletEmbedding.lua, are the affinities (A(x_i, x_j) and A(x_i, x_k) in eq. (11)) represented by delta_pos and delta_net?

I am a bit confused because I could not see the actual affinity function (computed in agg_clustering.c, as described in the Graph Degree Linkage paper [68]) being used when calculating the loss.
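
To make my question concrete, this is roughly how I picture the criterion (a plain-Lua sketch of a FaceNet-style margin triplet loss on embedding distances, not the repo's actual TripletEmbedding.lua; only the names delta_pos and delta_net come from the code, everything else is my guess):

```lua
-- Hedged sketch: a margin triplet loss on CNN embedding distances.
local function squaredDistance(a, b)
  local d = 0
  for i = 1, #a do
    d = d + (a[i] - b[i]) ^ 2
  end
  return d
end

local function tripletLoss(anchor, positive, negative, margin)
  local delta_pos = squaredDistance(anchor, positive)  -- anchor vs. same-cluster sample
  local delta_net = squaredDistance(anchor, negative)  -- anchor vs. different-cluster sample
  return math.max(0, delta_pos - delta_net + margin)   -- hinge on the distance gap
end

-- toy usage: anchor close to the positive, far from the negative -> zero loss
print(tripletLoss({0, 0}, {0.1, 0.1}, {1, 1}, 0.2))
```

If that is what happens, I do not see where the affinity A(x_i, x_j) enters.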

Thanks.

@JK654 commented Mar 6, 2020

I am confused too. I searched the source code and found that the criterion function "TripletEmbeddingCriterion" computes the loss from distances between the representations learned by the CNN, not from the affinity between two clusters. Why is "TripletEmbeddingCriterion" closer to the loss defined in FaceNet (which the paper references) than to the formula proposed in the paper? The loss proposed in the paper uses the affinity between clusters, but the source code seems to use the affinity only for agglomerative clustering and for updating labels, not for computing the triplet loss.
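
To spell out what I mean, the criterion looks to me like the standard FaceNet triplet loss on embedding distances (my transcription of the general FaceNet form, not a quote of the code or of eq. (11)):

```latex
% FaceNet-style triplet loss on CNN embeddings f(\cdot) with margin \alpha
\[
  L = \sum_{i} \max\!\Big( 0,\;
        \lVert f(x_i^a) - f(x_i^p) \rVert_2^2
      - \lVert f(x_i^a) - f(x_i^n) \rVert_2^2
      + \alpha \Big)
\]
```

whereas eq. (11) is written in terms of the affinities A(x_i, x_j) and A(x_i, x_k), which is why I expected the affinity computed in agg_clustering.c to appear in the criterion.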

@JK654 commented Mar 6, 2020

In general, the loss used to optimize a DNN is >= 0, and the goal of optimization can be expressed mathematically as driving the loss toward zero.
But the triplet loss defined in the paper (equation 7 or 11) is always <= 0, so what is the target value of the loss when optimizing the CNN? Is it zero? If it is zero, why add the minus sign?
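
To make the sign issue concrete in my own notation (a simplification of how I read eq. (11), not the paper's exact formula): if the per-triplet term has the form

```latex
% my own simplified reading, with A(\cdot,\cdot) the affinity from eq. (11)
\[
  \ell(x_i, x_j, x_k) = -\big( A(x_i, x_j) - A(x_i, x_k) \big)
\]
```

then for a well-separated triplet A(x_i, x_j) > A(x_i, x_k), so the term is negative and only becomes more negative as the affinity gap grows; it never bottoms out at zero. Is minimizing it (i.e. maximizing the affinity gap) the whole point, with no fixed target value?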
Hoping someone can comment on or answer my question, thanks a lot.
