save_pretrained issue #1

Closed
angelovAlex opened this issue May 20, 2023 · 6 comments

Comments

@angelovAlex

Hey Ray.
Did you find a solution for the save_pretrained issue? I am experiencing the same problem. According to the stack trace, it crashes simply on calling model.state_dict(), because bitsandbytes tries to allocate additional GPU memory in 'undo_layout'.

After some experimenting I came up with a workaround and managed to save the adapter successfully. I don't know whether it has any side effects, so I would recommend using it only for saving the result of LoRA training.
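
For reference, the failing call is just the normal adapter save. A minimal sketch, assuming `model` is the PeftModel from the training script and "lora_adapter" is a placeholder output path:

# save_pretrained writes only the small LoRA adapter files, but it still builds
# the full state_dict first, which runs bitsandbytes' _save_to_state_dict (and
# undo_layout) on every 8-bit layer -- that is where the extra CUDA memory goes
model.save_pretrained("lora_adapter")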

First of all, remove the original peft and install this version: pip install git+https://github.com/huggingface/peft.git@70af02a2bca5a63921790036b2c9430edf4037e2 (even with plenty of RAM and no CUDA errors, the adapter file was always 433 bytes for me on the latest peft, but this version seems to work fine).

Then, depending on your installation (look closely at the stack trace of the CUDA error), you need to find where the bitsandbytes library is installed and change bitsandbytes/nn/modules.py.

I tweaked _save_to_state_dict to move the tensors to the CPU and removed the cloning and the exception check. I would recommend renaming the original function, putting this one next to it, running training, and restoring the original function after training:

    def _save_to_state_dict(self, destination, prefix, keep_vars):
        if not self.state.has_fp16_weights and self.state.CB is None and self.state.CxB is not None:
            # reorder weight layout back from ampere/turing to row
            reorder_layout = True
            # removed: weight_clone = self.weight.data.clone()
        else:
            reorder_layout = False

        # removed: try:
        if reorder_layout:
            # patched: move CxB and tile_indices to the CPU first, so undo_layout
            # does not allocate its temporary buffers on the GPU
            self.weight.data = undo_layout(self.state.CxB.cpu(), self.state.tile_indices.cpu())

        super()._save_to_state_dict(destination, prefix, keep_vars)

        # we only need to save SCB as extra data, because CB for quantized weights is already stored in weight.data
        weight_name = "SCB"

        # case 1: .cuda was called, SCB is in self.weight
        param_from_weight = getattr(self.weight, weight_name)
        # case 2: self.init_8bit_state was called, SCB is in self.state
        param_from_state = getattr(self.state, weight_name)

        key_name = prefix + f"{weight_name}"
        if param_from_weight is not None:
            destination[key_name] = param_from_weight if keep_vars else param_from_weight.detach()
        elif not self.state.has_fp16_weights and param_from_state is not None:
            destination[key_name] = param_from_state if keep_vars else param_from_state.detach()
        # removed: finally:
        # removed:     if reorder_layout:
        # removed:         self.weight.data = weight_clone
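
As an untested alternative to editing the installed file, here is a minimal sketch of applying the same tweak at runtime by monkey-patching. It assumes the method lives on Linear8bitLt (as in the bitsandbytes releases around this time) and that the patched body above has been defined as a standalone function named patched_save_to_state_dict, with the bare super() call replaced by super(Linear8bitLt, self), since zero-argument super() only works inside a class body:

from bitsandbytes.nn import Linear8bitLt

# keep a handle on the stock implementation so it can be restored afterwards
_original_save_to_state_dict = Linear8bitLt._save_to_state_dict

# swap in the patched function just for the save
Linear8bitLt._save_to_state_dict = patched_save_to_state_dict
model.save_pretrained("lora_adapter")  # `model` and the path are placeholders

# restore the stock behaviour, mirroring the "restore the original function" advice above
Linear8bitLt._save_to_state_dict = _original_save_to_state_dict
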
@angelovAlex (Author)

It seems the peft step is unnecessary; the 443-byte file I had was caused by an issue in my script.

@rhulha (Owner) commented May 22, 2023

Note to self: The change is in this line: self.weight.data = undo_layout(self.state.CxB.cpu(), self.state.tile_indices.cpu())

@rhulha (Owner) commented May 22, 2023

Wow, you are my hero, @angelovAlex!
If you want to, you can post this on this Stack Overflow question and I will credit you with the correct answer:

https://stackoverflow.com/questions/76281856/getting-cuda-out-of-memory-when-calling-save-pretrained-in-a-script-that-tries-l

@rhulha (Owner) commented May 22, 2023

I will add this patch to setup_lambdalabs.py.

@rhulha (Owner) commented May 22, 2023

setup_lambdalabs.py now includes the save_pretrained patch.
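
For anyone applying this by hand, here is a rough sketch of how such a patch could be applied automatically. This is not the actual contents of setup_lambdalabs.py; it assumes the stock line matches the pattern below, i.e. the line quoted above without the .cpu() calls:

import bitsandbytes.nn.modules as bnb_modules
from pathlib import Path

path = Path(bnb_modules.__file__)
src = path.read_text()
old = "undo_layout(self.state.CxB, self.state.tile_indices)"
new = "undo_layout(self.state.CxB.cpu(), self.state.tile_indices.cpu())"
if old in src:
    # rewrite the installed modules.py in place with the CPU variant
    path.write_text(src.replace(old, new))
    print(f"Patched {path}")
else:
    print(f"Pattern not found (already patched?) in {path}")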

@rhulha closed this as completed on May 22, 2023
@rhulha (Owner) commented Jun 1, 2023

See also this comment: https://www.reddit.com/r/LocalLLaMA/comments/13ws492/completely_lost_regarding_training_llama_model/jmedmxh/

"If you get an out of memory error while saving, that's a bitsandbytes bug that I hope they've fixed but if not you'll need to downgrade to 3.72."
