You implement

```python
loss = self.alpha * (1-y) * distance**2 + \
       self.beta * y * (torch.max(torch.zeros_like(distance), self.margin - distance)**2)
```

as your contrastive loss. However, in your dataset split and preprocessing script you label (genuine, genuine) pairs as 1 and (genuine, forged) pairs as 0. With those labels, when y = 0 (a dissimilar pair) the loss reduces to `alpha * distance**2`, so minimizing it pulls the pair together, when the two samples should instead be pushed as far apart as possible.
A correct implementation would be:

```python
loss = self.alpha * y * distance**2 + \
       self.beta * (1-y) * (torch.max(torch.zeros_like(distance), self.margin - distance)**2)
```
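For illustration, here is a minimal self-contained sketch of the corrected loss as a module, assuming y = 1 for (genuine, genuine) pairs and y = 0 for (genuine, forged) pairs, as in the preprocessing script. The class name and the `alpha`, `beta`, and `margin` parameters are taken from the snippet above; defaults of 1.0 are placeholders, not the repo's actual values.

```python
import torch


class ContrastiveLoss(torch.nn.Module):
    """Contrastive loss with y = 1 for similar pairs and y = 0 for
    dissimilar pairs. alpha/beta/margin defaults are illustrative."""

    def __init__(self, alpha=1.0, beta=1.0, margin=1.0):
        super().__init__()
        self.alpha = alpha
        self.beta = beta
        self.margin = margin

    def forward(self, distance, y):
        # Similar pairs (y = 1): penalize any distance, pulling them together.
        similar_term = self.alpha * y * distance ** 2
        # Dissimilar pairs (y = 0): penalize only distances below the margin,
        # pushing them apart. clamp(..., min=0) is equivalent to the
        # torch.max(torch.zeros_like(...), ...) in the original snippet.
        dissimilar_term = self.beta * (1 - y) * torch.clamp(self.margin - distance, min=0) ** 2
        return (similar_term + dissimilar_term).mean()
```

With this labeling a genuine pair at distance 0 and a forged pair beyond the margin both contribute zero loss, which is the intended behavior.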
As I read the paper, the loss function in the current source code matches formula (1) in the paper. However, the author says "y is a binary indicator function denoting whether the two samples belong to the same class or not," which is ambiguous about which case gets y = 1.