Add support for Float32 #187
Conversation
That's a significant improvement in MadNLP! I read through this PR and have only a few minor comments so far. It would be interesting to find a good use case to advertise this new capability. Maybe we should run the GPU benchmark on a simple GPU that does not support double precision?
Thanks, @frapac, for the review! Indeed, creating a good use case would be important, and it probably doesn't have a big advantage on CPU. I just ran a simple experiment on my laptop:

julia> T=Float32; N=400; a = CUDA.randn(T,N,N); a = a*a'+I; @time cholesky(a);
  0.000969 seconds (163 allocations: 9.016 KiB)

julia> T=Float64; N=400; a = CUDA.randn(T,N,N); a = a*a'+I; @time cholesky(a);
  0.002487 seconds (163 allocations: 9.016 KiB)

So it would be interesting to test the performance with a very large-scale dense problem on GPU. It would also be interesting to test it with DynamicNLPModels.jl. cc: @dlcole3
This PR adds support for Float32, or any other precision type. To make this possible, we make AbstractLinearSolver a parametric type, where the precision is given as a type parameter. The Float32 version of each solver interface is added when it is available.
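For illustration, here is a minimal sketch of the pattern: an abstract solver type parametrized on the floating-point type, with a concrete subtype whose factorization and solves all happen in that precision, so the same code path works for Float32 and Float64. The `DenseCholeskySolver` and `solve` names below are hypothetical placeholders, not MadNLP's actual interface.

```julia
using LinearAlgebra

# Abstract interface parametrized on the floating-point type T.
abstract type AbstractLinearSolver{T} end

# Hypothetical dense solver that stores a Cholesky factorization in precision T.
struct DenseCholeskySolver{T} <: AbstractLinearSolver{T}
    fact::Cholesky{T,Matrix{T}}
end

# The precision is inferred from the element type of the input matrix.
DenseCholeskySolver(A::Matrix{T}) where {T<:AbstractFloat} =
    DenseCholeskySolver{T}(cholesky(Symmetric(A)))

# Solve A * x = b entirely in the solver's precision T.
solve(s::DenseCholeskySolver{T}, b::Vector{T}) where {T} = s.fact \ b

# The same code path runs in Float32 and Float64.
for T in (Float32, Float64)
    A = rand(T, 5, 5); A = A * A' + I   # symmetric positive definite test matrix
    b = rand(T, 5)
    x = solve(DenseCholeskySolver(A), b)
    println(T, ": residual norm = ", norm(A * x - b))
end
```

With this layout, choosing the working precision is just a matter of constructing the problem data in the desired element type; dispatch on the type parameter selects the matching factorization routines.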