
Add Float128 support #851

Closed
sandijs-private opened this issue Aug 22, 2024 · 1 comment

Comments

@sandijs-private

I recently customized the Flux.jl layer code to re-enable Float64 support.

Lux.jl could go further and support a wider range of float types.

In my use case, switching from Float32 to Float64 produced completely different neural network behavior.

Ironically, it was just barely visible that Float128 is needed.

Looking ahead, there should be a framework for highly precision-sensitive neural networks, especially for scientific research.

I know I can modify the code to fit my use case, but it would be better if that were done by someone with stronger mathematical expertise than me.

@avik-pal
Member

You can convert the parameters to 128-bit (or higher) precision floats using Lux.LuxEltypeAdaptor{BigFloat} (Julia's BigFloat is arbitrary-precision), and use it the same way f64/f32/f16 are used.
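
For reference, a minimal sketch of what this might look like, assuming `LuxEltypeAdaptor{BigFloat}()` can be applied to the parameter/state containers the same way the `f64`/`f32`/`f16` helpers are; the model architecture and sizes below are arbitrary placeholders:

```julia
using Lux, Random

rng = Random.default_rng()

# Arbitrary toy model for illustration.
model = Chain(Dense(2 => 16, tanh), Dense(16 => 1))

# Standard setup; parameters default to Float32.
ps, st = Lux.setup(rng, model)

# Promote parameters and states to BigFloat (arbitrary precision,
# which covers the >=128-bit use case). Assumes the adaptor instance
# is callable on the parameter/state NamedTuples, like f64/f32/f16.
to_big = Lux.LuxEltypeAdaptor{BigFloat}()
ps_big, st_big = to_big(ps), to_big(st)

# Inputs should match the parameter eltype to avoid silent demotion.
x = BigFloat.(randn(rng, Float64, 2, 8))
y, _ = model(x, ps_big, st_big)
@show eltype(y)   # expected: BigFloat
```

Note that BigFloat arithmetic runs on generic (non-BLAS) code paths, so this is mainly useful for small models or precision studies rather than performance-critical training.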
