save_pretrained issue #1
Note to self: the change is in this line: `self.weight.data = undo_layout(self.state.CxB.cpu(), self.state.tile_indices.cpu())`
Wow, you are my hero, @angelovAlex!
I will add this patch to setup_lambdalabs.py.
setup_lambdalabs.py now includes the save_pretrained patch. |
See also this comment: https://www.reddit.com/r/LocalLLaMA/comments/13ws492/completely_lost_regarding_training_llama_model/jmedmxh/ "If you get an out of memory error while saving, that's a bitsandbytes bug that I hope they've fixed but if not you'll need to downgrade to 3.72."
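Presumably "3.72" there means bitsandbytes 0.37.2 (bitsandbytes releases are numbered 0.x.y); if so, the downgrade would be:
pip install bitsandbytes==0.37.2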
Hey Ray.
Did you find a solution for the save_pretrained issue? I am experiencing the same problem. According to the stack trace, it crashes simply on calling model.state_dict(), because bitsandbytes tries to allocate additional memory in undo_layout.
After some experimenting, I came up with a workaround and managed to successfully save the adapter. I don't know if it has any side effects, so I would recommend using it only for saving the result of LoRA training.
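For context, the crash can be reproduced without writing anything to disk. A minimal sketch, assuming a causal LM loaded in 8-bit mode (the checkpoint name here is just a placeholder, not from this thread):

```python
from transformers import AutoModelForCausalLM

# Placeholder checkpoint; any causal LM loaded with load_in_8bit=True will do.
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    load_in_8bit=True,
    device_map="auto",
)

# On affected bitsandbytes versions, this call alone raises a CUDA
# out-of-memory error inside undo_layout, before save_pretrained
# ever touches the disk.
state_dict = model.state_dict()
```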
First of all, remove the original peft and install this version:
pip install git+https://github.com/huggingface/peft.git@70af02a2bca5a63921790036b2c9430edf4037e2
(Even with lots of RAM and no CUDA errors, I always got a 433-byte adapter file on the latest peft, but this version seems to work fine.) Depending on your installation (just look closely at the stack trace of the CUDA error), you need to find where the bitsandbytes library is stored and change
bitsandbytes/nn/modules.py
My tweak moves the tensors to the CPU and removes the cloning and the exception check; a sketch of the patched function is below. I would recommend renaming the original function, putting this one next to it, running the training, and restoring the original function after the training.
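Here is a minimal sketch of what the patched function looks like, based on the 0.38-era Linear8bitLt._save_to_state_dict in bitsandbytes/nn/modules.py; your installed copy may differ, so treat it as a guide for editing the function you actually find there (undo_layout is already imported near the top of that file):

```python
# Sketch of the patched Linear8bitLt._save_to_state_dict, assuming the
# 0.38-era bitsandbytes source; adapt it to whatever the function looks
# like in your installed bitsandbytes/nn/modules.py.
def _save_to_state_dict(self, destination, prefix, keep_vars):
    if not self.state.has_fp16_weights and self.state.CB is None and self.state.CxB is not None:
        # The original clones self.weight.data here and restores it in a
        # try/finally; both are removed, and undo_layout runs on CPU copies
        # of the tensors, so no extra GPU memory is allocated while saving.
        self.weight.data = undo_layout(self.state.CxB.cpu(), self.state.tile_indices.cpu())

    super()._save_to_state_dict(destination, prefix, keep_vars)

    # SCB (the quantization scales) is saved as extra data; CB for the
    # quantized weights is already stored in weight.data.
    weight_name = "SCB"
    param_from_weight = getattr(self.weight, weight_name)  # set when .cuda() was called
    param_from_state = getattr(self.state, weight_name)    # set when init_8bit_state ran

    key_name = prefix + weight_name
    if param_from_weight is not None:
        destination[key_name] = param_from_weight if keep_vars else param_from_weight.detach()
    elif not self.state.has_fp16_weights and param_from_state is not None:
        destination[key_name] = param_from_state if keep_vars else param_from_state.detach()
```

Because the clone/restore step is gone, self.weight.data is left as the de-tiled CPU tensor after saving, so the model should not be used for further GPU work afterwards. That is why this is only safe as a one-shot save at the end of LoRA training.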