
support 32-bit MulticlassLDA #34

Merged: 1 commit merged into JuliaStats:master on Apr 23, 2017

Conversation

@bicycle1885 (Contributor):

This makes MulticlassLDA work on 32-bit floating-point numbers as well.
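For context, a minimal usage sketch with single-precision data. The fit(MulticlassLDA, nc, X, y) and projection calls reflect the package API around the time of this PR and are assumptions for illustration; the data below are made up and not taken from this PR.

using MultivariateStats

# Hypothetical data: 5 features, 90 samples, 3 classes, stored as Float32.
nc = 3
X  = rand(Float32, 5, 90)                         # 5 x 90 observation matrix
y  = vcat(fill(1, 30), fill(2, 30), fill(3, 30))  # integer class labels

# fit signature as in the package at the time of this PR (assumed here).
M = fit(MulticlassLDA, nc, X, y)

# With this change the computation stays in Float32 instead of the
# class-mean step promoting intermediate results to Float64.
P = projection(M)
eltype(P)   # expected to be Float32 after this PR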

@ararslan (Member) left a comment:

Nice, thanks! Glad the fix was this simple.

@ararslan merged commit 8cd7205 into JuliaStats:master on Apr 23, 2017
@bicycle1885 deleted the fix-mclda-float32 branch on April 23, 2017 at 21:02
@bicycle1885 (Contributor, Author):

Thank you!

@@ -104,7 +104,7 @@ function multiclass_lda_stats{T<:AbstractFloat}(nc::Int, X::DenseMatrix{T}, y::A
     Sw = A_mul_Bt(Z, Z)

     # compute between-class scattering
-    mean = cmeans * (cweights ./ n)
+    mean = cmeans * (cweights ./ T(n))
@ararslan (Member):
Why would this matter? Float32/Int = Float32

@bicycle1885 (Contributor, Author):
cweights is a vector of integers and Int/Int = Float64.
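A minimal sketch of the promotion behavior being discussed (plain Julia, with made-up counts; the names cweights, n, and cmeans match the diff):

# Dividing an integer vector by an Int produces Float64:
cweights = [30, 30, 30]          # per-class counts (Vector{Int}), illustrative values
n = 90                           # total number of samples (Int)

@show typeof(cweights ./ n)      # Vector{Float64}

# Converting the divisor to the element type T keeps the result in T:
T = Float32
@show typeof(cweights ./ T(n))   # Vector{Float32}

# In multiclass_lda_stats, cmeans is a Matrix{T}, so with the old code
# cmeans * (cweights ./ n) would be Float64 even when T == Float32.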

@ararslan (Member):
Got it. Thanks.

3 participants