Why is the loss function not the same as the paper describes? #14

Open
AlexFEIII opened this issue Mar 19, 2022 · 1 comment

Comments

@AlexFEIII

Hello, I'm a student learning AI.
I downloaded your code, but I have a question about it.
The paper says, "The loss function in this work is simply the summation of two terms, the classification loss (cross-entropy) and the regularization term (which is the L2 norm of the weights in the last two fully-connected layers)."
In main.py, I only see the classification loss. Is there a particular reason for this?

Finally, thank you for your code contributions.

@hnhoangdz

The L2 norm term is part of the loss function; it is a regularization technique. In this paper, the authors wanted to apply the L2 penalty to only the last fully-connected layers.
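Something like the sketch below is what the paper describes: cross-entropy plus an L2 penalty restricted to the last two fully-connected layers. This is not the repo's actual code, and it assumes PyTorch; the model, the layer names `fc1`/`fc2`, and the strength `lambda_l2` are illustrative placeholders, not identifiers from this project.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(32, 64)  # stand-in for the backbone
        self.fc1 = nn.Linear(64, 64)       # last two FC layers, which
        self.fc2 = nn.Linear(64, 10)       # receive the L2 penalty

    def forward(self, x):
        x = torch.relu(self.features(x))
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = Net()
criterion = nn.CrossEntropyLoss()
lambda_l2 = 1e-4  # hypothetical regularization strength

def loss_fn(outputs, targets):
    ce = criterion(outputs, targets)
    # L2 norm of the weights in the last two FC layers only
    # (biases and earlier layers are deliberately excluded)
    l2 = sum(layer.weight.pow(2).sum() for layer in (model.fc1, model.fc2))
    return ce + lambda_l2 * l2

x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
loss = loss_fn(model(x), y)
loss.backward()
```

A roughly equivalent and more idiomatic option in PyTorch is to give only those layers' parameters a nonzero `weight_decay` through optimizer parameter groups; writing the penalty into the loss, as above, just makes it explicit which weights are regularized.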
