
The annealing optimization strategy for A-Softmax loss #21

Closed
taey16 opened this issue Mar 19, 2018 · 2 comments

Comments

taey16 commented Mar 19, 2018

Thanks for your nice repo.
I'm trying out your code.
My question is about the annealing optimization strategy for the A-Softmax loss described in the paper, which introduces a lambda. Here, your implementation is:

self.lamb = max(self.LambdaMin, self.LambdaMax/(1+0.1*self.it))
output = cos_theta * 1.0
output[index] -= cos_theta[index]*(1.0+0)/(1+self.lamb)
output[index] += phi_theta[index]*(1.0+0)/(1+self.lamb)
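
As a side note, the first line anneals self.lamb from self.LambdaMax down toward a floor of self.LambdaMin as the iteration counter self.it grows; a standalone sketch of that schedule, using illustrative values that are assumptions rather than the repo's actual defaults:

LambdaMin, LambdaMax = 5.0, 1500.0   # assumed values, for illustration only
for it in (0, 10, 100, 1000, 10000):
    lamb = max(LambdaMin, LambdaMax / (1 + 0.1 * it))
    print(it, lamb)                  # lamb decays from 1500.0 toward the floor of 5.0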

But I think the cos term should be scaled by a factor of lambda, like this:

output = cos_theta * self.lamb
output[index] -= cos_theta[index]*(self.lamb)/(1+self.lamb)
output[index] += phi_theta[index]*(1.0)/(1+self.lamb)

Please give me your thoughts.
Thanks

vzvzx commented Mar 19, 2018

The code is right; please double-check. @taey16
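
For anyone reading along: expanding the snippet for the target-class entries gives output[index] = cos_theta[index] * lamb/(1+lamb) + phi_theta[index]/(1+lamb), which is exactly the paper's annealed form (lambda * cos(theta) + psi(theta)) / (1 + lambda). A minimal numerical sketch of that equivalence (the values are made up; the tensor names follow the snippet above):

import torch

lamb = 4.0                        # made-up annealing value
cos_theta = torch.tensor([0.8])   # made-up cos(theta) for the target class
phi_theta = torch.tensor([-0.3])  # made-up psi(theta) for the target class

# Repo-style update, restricted to the target-class entry
output = cos_theta * 1.0
output -= cos_theta * 1.0 / (1 + lamb)
output += phi_theta * 1.0 / (1 + lamb)

# Paper's annealed target: (lambda * cos(theta) + psi(theta)) / (1 + lambda)
expected = (lamb * cos_theta + phi_theta) / (1 + lamb)

print(torch.allclose(output, expected))  # prints True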

taey16 commented Mar 20, 2018

@vzvzx,
Yes, you are right. My mistake.
Thanks for your reply.

taey16 closed this as completed Mar 20, 2018