Are there any plans to support models/gradients in the bfloat16 type? bfloat16 has gained quite a bit of popularity lately, since every Ampere GPU supports it and it eliminates the need for loss scaling compared to float16.

This is what I get when I try to initialize bnb.AdamW with a model cast to bfloat16:

ValueError: Gradient+optimizer bit data type combination not supported: grad torch.bfloat16, optimizer torch.uint8
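For reference, a minimal sketch of how the error can be reproduced, assuming "bnb.AdamW" refers to bitsandbytes' bnb.optim.AdamW used with 8-bit optimizer state (optim_bits=8) and that the model and layer sizes below are just placeholders:

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb  # assumes bitsandbytes is installed and a CUDA GPU is available

# Cast a toy model to bfloat16; the layer is large enough that the
# 8-bit optimizer state (min_8bit_size default) actually kicks in.
model = nn.Linear(128, 128).cuda().to(torch.bfloat16)

# 8-bit AdamW from bitsandbytes over bfloat16 parameters/gradients.
optimizer = bnb.optim.AdamW(model.parameters(), lr=1e-3, optim_bits=8)

x = torch.randn(4, 128, device="cuda", dtype=torch.bfloat16)
loss = model(x).float().sum()
loss.backward()

# On versions without bfloat16 support, this is where the ValueError
# about "grad torch.bfloat16, optimizer torch.uint8" shows up.
optimizer.step()
```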