I recently customized the Flux.jl layer code to bring back Float64 support.
Lux.jl could go further and support a choice of floating-point precisions.
In my use case, switching from Float32 to Float64 produced completely different neural network behavior.
Ironically, it was just barely visible that even Float128 would be needed.
Looking ahead, there should be a precision-sensitive neural network framework available, especially for scientific research.
I know I can modify the code to fit my use case, but it would be better if that were done by a strong mathematician rather than me.
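For illustration, here is a minimal sketch (not from the original report) of the kind of Float32 vs Float64 discrepancy described above, using naive accumulation of many small values:

```julia
# Ten million small Float32 values and their Float64 copies.
xs32 = fill(1.0f-4, 10_000_000)
xs64 = Float64.(xs32)

# Sequential left-to-right accumulation (foldl avoids Julia's pairwise `sum`,
# which would mask the rounding effect).
sum32 = foldl(+, xs32)
sum64 = foldl(+, xs64)

println(sum32)   # stalls near 2048.0 once each increment falls below half a ulp of the running total
println(sum64)   # ≈ 1000.0, the mathematically expected value
```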
You can convert parameters to arbitrary-precision floats (BigFloat) using Lux.LuxEltypeAdaptor{BigFloat}, and use it the same way f64/f32/f16 are used.
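A minimal sketch of that usage, assuming an instance of LuxEltypeAdaptor is applied to the parameters and states the same way f64/f32/f16 are (the toy model and sizes below are hypothetical):

```julia
using Lux, Random

# Hypothetical toy model; any Lux layer chain works the same way.
model = Chain(Dense(2 => 8, tanh), Dense(8 => 1))
rng = Random.default_rng()
ps, st = Lux.setup(rng, model)            # parameters and states default to Float32

# Built-in eltype adaptors: Float32 -> Float64.
ps64, st64 = Lux.f64(ps), Lux.f64(st)

# Arbitrary precision via the same mechanism; this assumes LuxEltypeAdaptor
# instances are callable like f64/f32/f16, as the reply above suggests.
tobig = Lux.LuxEltypeAdaptor{BigFloat}()
psbig, stbig = tobig(ps), tobig(st)

x = rand(rng, Float64, 2, 16)
y64, _  = model(x, ps64, st64)            # Float64 forward pass
ybig, _ = model(BigFloat.(x), psbig, stbig)  # BigFloat forward pass (slow, generic fallbacks)
```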