
loss function problem #22

Open
tommying opened this issue May 13, 2023 · 1 comment


tommying commented May 13, 2023

Hi there!

The paper states that "we calculate their vector-wise cosine similarity loss along the channel axis and obtain a 2-D anomaly map M^k (H_k × W_k)", but the code uses
`loss += torch.mean(1 - cos_loss(a[item].view(a[item].shape[0], -1), b[item].view(b[item].shape[0], -1)))`.
In other words, the loss is computed after each feature map has been flattened into a single vector per sample, rather than per spatial location.
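If I read the repository code correctly, this is roughly what that loss computes (a minimal self-contained sketch, assuming `a` and `b` are lists of teacher/student feature maps of shape [B, C, H, W]; the function name is mine, just for illustration):

```python
import torch

def flattened_cosine_loss(a, b):
    # Repo-style loss: each [B, C, H, W] feature map is flattened to
    # [B, C*H*W], so a single cosine similarity is computed per sample
    # over the whole flattened map, not per spatial location.
    cos = torch.nn.CosineSimilarity(dim=1)
    loss = 0.0
    for item in range(len(a)):
        af = a[item].view(a[item].shape[0], -1)  # [B, C*H*W]
        bf = b[item].view(b[item].shape[0], -1)  # [B, C*H*W]
        loss += torch.mean(1 - cos(af, bf))      # one scalar per feature scale
    return loss
```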

Shouldn't the loss instead follow Eq. 1 in the paper and first compute the 2-D anomaly map M^k along the channel axis, like this?

`sim_map = 1 - F.cosine_similarity(a[item], b[item])`
`loss += sim_map.view(sim_map.shape[0], -1).mean(-1).mean()`

I think computing the loss this way is consistent with the paper.
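For clarity, here is that per-pixel variant written out as a self-contained function (again assuming `a` and `b` are lists of [B, C, H, W] feature maps; the name is only illustrative):

```python
import torch
import torch.nn.functional as F

def pixelwise_cosine_loss(a, b):
    # Per-pixel variant: cosine similarity is taken along the channel axis
    # (dim=1), which yields a 2-D map of shape [B, H, W] for each feature
    # scale; the map is then averaged over spatial positions and the batch.
    loss = 0.0
    for item in range(len(a)):
        sim_map = 1 - F.cosine_similarity(a[item], b[item], dim=1)  # [B, H, W]
        loss += sim_map.view(sim_map.shape[0], -1).mean(-1).mean()
    return loss
```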

However, training the model with this loss leads to a drop in accuracy: for example, the carpet category only reaches an image-level AUC of 92.3%. To rule out slow convergence, I trained for 1000 epochs and still got the same result. (The loss function is the only change to the code.)

So the loss formulation given in the paper cannot reproduce the performance reported in the paper?
I'm very confused about why this happens.

Looking forward to your reply!

@ashesofdream

Same question. Revisiting Reverse Distillation's code uses the same loss function, so I'm puzzled too.
