Float16 compatibility #15
Ok, I found the issue: Line 6 in a17ca94.

`.=` casts the right-hand-side vector into the left-hand-side vector's element type, since it writes in place. For example:

```julia
x32 = ones(Float32, 10)
x16 = ones(Float16, 10)
x32 .= x16  # x32 is still a Vector{Float32}
```

That is, even if the argument is a Float16 vector, the values stored through `.=` remain Float32.
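As a side note (plain Julia, independent of this package): to actually obtain Float16 storage one has to allocate a new array, since broadcasting assignment reuses the destination's storage:

```julia
x32 = ones(Float32, 10)

x32 .= ones(Float16, 10)   # in place: eltype(x32) is still Float32
x16 = Float16.(x32)        # allocates a fresh array: eltype(x16) is Float16
```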
In fact, this is not even the bottom of the issue.
Rather than changing the backend to change the model every time, a quick change is:

```julia
f64(m) = Flux.paramtype(Float64, m)  # similar to https://github.com/FluxML/Flux.jl/blob/d21460060e055dca1837c488005f6b1a8e87fa1b/src/functor.jl#L217
```

then to change our model we use:

```julia
fluxnlp.model = f64(fluxnlp.model)
```
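The same pattern should extend to other precisions; a minimal sketch, assuming Flux still provides the non-exported `paramtype` used in the commit linked above (the `to_f16` helper and the toy model are illustrative, not part of FluxNLPModels):

```julia
using Flux

# Float16 analogue of the f64 helper above; named to avoid clashing
# with the f16 that recent Flux versions export.
to_f16(m) = Flux.paramtype(Float16, m)

m = Dense(4 => 2)           # toy model, parameters default to Float32
eltype(to_f16(m).weight)    # Float16
```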
Flux just recently added support for this.
New Flux update:

## v0.14.0 (July 2023)

* Flux now requires julia v1.9 or later.
* CUDA.jl is not a hard dependency anymore. Support is now provided through the extension mechanism, by loading `using Flux, CUDA`. The package cuDNN.jl also needs to be installed in the environment. (You will get instructions if this is missing.)
* After a deprecations cycle, the macro `@epochs` and the functions `Flux.stop`, `Flux.skip`, `Flux.zeros`, `Flux.ones` have been removed.
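If that refers to the precision-conversion helpers, here is a minimal sketch using `Flux.f16`, which recent Flux releases export alongside `f32` and `f64` (the layer sizes are made up for illustration):

```julia
using Flux

m = Chain(Dense(4 => 2, relu), Dense(2 => 1))  # parameters default to Float32
m16 = f16(m)                                   # convert floating-point parameters to Float16

eltype(m16[1].weight)          # Float16
eltype(m16(ones(Float16, 4)))  # Float16: the forward pass now stays in half precision
```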
Hi there,

I would like to know if Float16 is supported. I followed this tutorial https://jso.dev/FluxNLPModels.jl/dev/tutorial/ and naively tried it with Float16 data, but got a `Float32`. Therefore I assume at least some computations are performed with `Float32` when evaluating the objective. I also tried to modify the function `getdata()` accordingly, but still got a `Float32` when evaluating the objective.

Any idea how to run in Float16 (or any other format)?
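A minimal sketch reproducing the observation in plain Flux (the model is illustrative, not the tutorial's): when the parameters are Float32, the output comes back Float32 even for Float16 inputs, so converting the data alone is not enough.

```julia
using Flux

m = Dense(2 => 1)    # weights and bias default to Float32
x = ones(Float16, 2)

eltype(m(x))         # Float32: the Float32 parameters win the promotion
eltype(f16(m)(x))    # Float16 once the parameters are converted too
```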