Add :type option to load model under specific precision #311

Merged: 1 commit merged from jk-type into main on Dec 18, 2023

Conversation

@jonatanklosko (Member) commented on Dec 15, 2023:

Allows `Bumblebee.load_model({:hf, "..."}, type: :bf16)` to set a mixed-precision policy on the model and cast the params on load. The user can also pass the policy struct itself; `:type` is just a shorthand.
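
As a usage sketch (the model repository name below is an illustrative example, not taken from the PR):

```elixir
# Shorthand: cast the loaded params and set the mixed-precision policy
# via a type atom.
{:ok, %{model: model, params: params}} =
  Bumblebee.load_model({:hf, "bert-base-uncased"}, type: :bf16)

# Equivalent, passing an Axon.MixedPrecision policy struct explicitly.
# The policy values here are illustrative.
policy =
  Axon.MixedPrecision.create_policy(
    params: {:bf, 16},
    compute: {:bf, 16},
    output: {:bf, 16}
  )

{:ok, _model_info} =
  Bumblebee.load_model({:hf, "bert-base-uncased"}, type: policy)
```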

Note: currently loading an f16 checkpoint returns params in f16, but that's somewhat of a bug. Specifically, if all params are available we skip the Axon init, which would otherwise cast the params to f32. Now it is always going to be f32 by default, and the user can override it using `:type`.

hf/transformers looks at `torch_dtype` in the config file to determine the type, but I'm not sure if we want to configure the policy automatically based on that. @seanmor5 thoughts?
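
For context, Hugging Face config files carry this as a plain string field, e.g. `"torch_dtype": "float16"` in `config.json`. A hypothetical sketch of mapping that field to an Nx type atom (illustrative only, not Bumblebee's implementation):

```elixir
# Hypothetical sketch -- not Bumblebee's code. "torch_dtype" is a real
# field in hf/transformers config.json files; the mapping is illustrative.
config = Jason.decode!(File.read!("config.json"))

type =
  case config["torch_dtype"] do
    "float16" -> :f16
    "bfloat16" -> :bf16
    _other -> :f32
  end
```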

@josevalim (Contributor) left a comment:

LGTM!

@jonatanklosko merged commit b947dd2 into main on Dec 18, 2023 (2 checks passed).
@jonatanklosko deleted the jk-type branch on December 18, 2023 at 07:14.