why do you use cross entropy loss? #17

Closed
EthanGuan opened this issue Sep 30, 2017 · 2 comments

Comments

@EthanGuan

I noticed that in the paper and the original repo, an L2 loss function is used. In your implementation:

import torch.nn as nn

# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda()

cross entropy loss is used instead.

Does it really work?
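
For comparison, the paper's L2 objective would look roughly like this in PyTorch (a minimal sketch; the heatmap_pred and heatmap_gt names and shapes are hypothetical stand-ins, not taken from this repo):

import torch
import torch.nn as nn

# L2 (MSE) loss over predicted vs. ground-truth heatmaps, as in the paper
criterion = nn.MSELoss().cuda()  # assumes a CUDA device, like the snippet above

# hypothetical tensors standing in for network output and targets
heatmap_pred = torch.randn(8, 19, 46, 46).cuda()
heatmap_gt = torch.randn(8, 19, 46, 46).cuda()

loss = criterion(heatmap_pred, heatmap_gt)

Unlike nn.CrossEntropyLoss, which expects class indices as targets, nn.MSELoss regresses the predicted heatmaps directly against continuous-valued targets.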

@tensorboy
Owner

I haven't successfully implemented the training part yet, but the model conversion is correct. ;-D

Best,
Wangpeng

@tensorboy
Owner

Try train.py
