
Adding cross entropy loss with softmax activation for multi class problems #94

Closed
wants to merge 2 commits into from

Conversation

@kvmanohar22 commented Oct 4, 2017

Ref #39.
Named the structure SoftmaxWithLoss, following suit with the Caffe Deep Learning Library.

@Evizero (Member) commented Oct 4, 2017

Hi! Thanks for spending time on this.

Could you add some correctness tests (e.g. comparing input-output pairs against the results that other implementations produce), as well as some type stability tests?

@kvmanohar22 (Author)

I implemented this loss in TensorFlow, and the comparison of results is as follows:

In Julia

julia> x = 0.1:0.1:0.3
0.1:0.1:0.3
julia> y = 2
2
julia> loss = SoftmaxWithLoss()
julia> value(loss, y, x)
1.101942848229244

julia> deriv(loss, y, x)
3-element Array{Float64,1}:
  0.30061 
 -0.667775
  0.367165

julia> deriv2(loss, y, x)
3-element Array{Float64,1}:
 0.148411 
 0.0861067
 0.0      

In TensorFlow

loss = [1.1019429]
deriv = [array([[ 0.30060959, -0.66777503,  0.36716539]], dtype=float32)]
deriv2 =  
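
For reference, the numbers above line up with the standard softmax cross-entropy formulas. Below is a minimal, self-contained Julia sketch (hypothetical helper names, not the API of this PR) that reproduces the value and the first derivative, assuming y is a 1-based class index and x holds the raw scores:

# softmax: normalized exponentials of the scores
softmax(x) = exp.(x) ./ sum(exp.(x))

# value: negative log-probability of the true class y
xentropy_value(y, x) = -log(softmax(x)[y])

# deriv: gradient with respect to the inputs, i.e. softmax(x) minus the one-hot target
function xentropy_deriv(y, x)
    p = softmax(x)
    p[y] -= 1
    return p
end

x = collect(0.1:0.1:0.3)
xentropy_value(2, x)   # ≈ 1.1019428
xentropy_deriv(2, x)   # ≈ [0.30061, -0.667775, 0.367165]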

@juliohm (Member) commented Apr 5, 2020

I am closing this PR because we need to refactor the project to better support multi-class losses. The interface implemented here will not fit that generalization.

@juliohm closed this Apr 5, 2020