Lazy import + refactor Lora layer addition #426
Conversation
starting from the last layer.
"""
if model.model_type in [
    "mistral",
Nit: It would be easier for future updates/modifications if we made those model types constants and defined them at the top of utils.py.
I think this function is going to turn into a sequence of if/elif branches depending on the model type, so making each one a constant might not make sense? (e.g. see the branch for olmo)
I admit it's not super clean, but I couldn't think of a better approach yet.
I also want to add a config option to choose which layers to adapt (e.g. to make it easy to add MLP layers). Will probably do that in a follow-up.
Makes sense, and the function is already at the top of utils.py, so it wouldn't be too bad to update it.
Looks very good, a much cleaner solution than I thought 🚀
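For context, a minimal sketch of the layer-selection pattern being discussed; the function name, import path, and per-architecture attribute names are assumptions for illustration, not the exact code from this PR:

```python
from mlx_lm.lora import LoRALinear  # hypothetical import path

def linear_to_lora_layers(model, num_lora_layers: int):
    """Replace the attention projections of the last `num_lora_layers`
    transformer blocks with LoRA-wrapped linear layers."""
    if model.model_type in ["mistral", "llama", "phi", "mixtral", "qwen2"]:
        # These architectures share the same projection names, so one
        # branch covers them all.
        for layer in model.model.layers[-num_lora_layers:]:
            layer.self_attn.q_proj = LoRALinear.from_linear(layer.self_attn.q_proj)
            layer.self_attn.v_proj = LoRALinear.from_linear(layer.self_attn.v_proj)
    elif model.model_type == "olmo":
        # olmo exposes its attention projection differently, hence a
        # dedicated branch rather than a shared constant list.
        for layer in model.model.transformer.blocks[-num_lora_layers:]:
            layer.att_proj = LoRALinear.from_linear(layer.att_proj)
    else:
        raise ValueError(f"LoRA is not supported for model type {model.model_type}")
```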
"qwen2": qwen2, | ||
MODEL_REMAPPING = { | ||
"mistral": "llama", # mistral is compatible with llama | ||
"phi-msft": "phixtral", |
This may cause some issues because the old phi2 used this model type. So if a user tries to load the old phi2 model, it will be mapped to phixtral, which won't work. See https://huggingface.co/microsoft/phi-2/blob/5d8f23da6be3205c16c06a9db3f22279ee23dbbf/config.json
Hmm, that looks like an old version of phi2. It wouldn't work with mlx-lm either way, regardless of the remapping. We could try to put a helpful error message when constructing the Phixtral model?
It is too bad that they use the same model_type; I think it should have been different.
Maybe just mention it in the docs? It will error out on missing parameters during model loading anyway.
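To make the remapping concrete, here is a rough sketch of how it could feed the lazy import; the module layout and helper name are assumptions, and the real implementation may differ:

```python
import importlib

MODEL_REMAPPING = {
    "mistral": "llama",  # mistral is compatible with llama
    "phi-msft": "phixtral",
}

def _get_classes(config: dict):
    """Import only the architecture module named by the config's
    model_type, applying the compatibility remapping first."""
    model_type = config["model_type"]
    model_type = MODEL_REMAPPING.get(model_type, model_type)
    try:
        arch = importlib.import_module(f"mlx_lm.models.{model_type}")
    except ImportError:
        raise ValueError(f"Model type {model_type} not supported.")
    return arch.Model, arch.ModelArgs
```

Note the caveat from the thread: an old phi2 config with `model_type: phi-msft` would be remapped to phixtral here and then fail later with missing parameters when the weights are loaded.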
* lazy model import in mlx_lm
* change lora loading
* fix olmo lora
* remove a bunch of unused stuff from plamo
* move phixtral to mlx-lm and out of llms/
Add lazy loading of model architectures
Refactor LoRA as a step towards making it more general
Add support for LoRA tuning of olmo
Move Phixtral into mlx-lm and enable it with LoRA