safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization #609
Comments
I'm currently having the same problem. Are you using a well-known dataset (such as Alpaca) or a custom one? @mj2688 By the way, I noticed that this doesn't happen with only a few epochs. |
data_path: str = "yahma/alpaca-cleaned" — I referred to this tutorial and deleted 'torch.compile' in finetune.py, but it still doesn't work. |
You can find the fix reported in this issue. This solved the InvalidHeaderDeserialization error for me. |
Have you fixed this problem? I'm currently facing the same issue. |
Yes, I solved it. You have to comment out these lines in finetune.py. |
Thanks, I also solved it! |
Delete some code in finetune.py (roughly the block sketched below):
|
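For context, the lines most replies in this thread seem to point to are the state_dict override in alpaca-lora's finetune.py. The sketch below is an approximation of that block, assuming the surrounding finetune.py context (model and get_peft_model_state_dict from peft); exact wording and line numbers may differ between versions of the script:

```python
# Approximate block from alpaca-lora's finetune.py that commenters suggest
# commenting out. It replaces model.state_dict with a lambda that returns only
# the LoRA weights; with newer peft/transformers versions this override can
# lead to a checkpoint whose adapter_model.safetensors has an invalid header.
old_state_dict = model.state_dict
model.state_dict = (
    lambda self, *_, **__: get_peft_model_state_dict(
        self, old_state_dict()
    )
).__get__(model, type(model))
```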
But even if I delete the code above, safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization still occurs. |
Hi. I think the .safetensors file is not compatible with PEFT, so I deleted the xx.safetensors file and it works. |
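For what it's worth, a quick way to check whether a saved adapter file is actually unreadable before deleting it (a minimal sketch; the checkpoint path is hypothetical):

```python
# Try to read the adapter weights the checkpoint saved. A corrupt header raises
# the same InvalidHeaderDeserialization error shown in the traceback below; an
# empty dict also indicates that nothing useful was written.
from safetensors.torch import load_file

weights = load_file("checkpoint-1000/adapter_model.safetensors", device="cpu")
print(f"loaded {len(weights)} tensors")
```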
Do you mean deleting the file after fine-tuning and then running it? Is this file adapter_model.safetensors? |
Before fine-tuning, I deleted this, and then it works: if torch.__version__ >= "2" and sys.platform != "win32": |
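For reference, that conditional wraps the torch.compile call near the end of finetune.py, so deleting it simply skips compilation. An approximate sketch of the original block:

```python
import sys
import torch

# Guard found in finetune.py: only compile the model on PyTorch >= 2 and on
# non-Windows platforms. Removing it (or the compile call) leaves the model
# uncompiled, which several commenters report avoids the broken checkpoint.
if torch.__version__ >= "2" and sys.platform != "win32":
    model = torch.compile(model)
```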
I've tried it before, but it still doesn't work. |
I'm having this same issue (details here: huggingface/transformers#28742). Could anyone please help? |
@MING8276 Would you mind telling me what files you deleted? |
Did you solve this problem? I'm running into the same one. |
When I fine-tune Llama-2-7B with LoRA, the following error occurs:
Traceback (most recent call last):
File "/home/ubuntu/lora/alpaca-lora-main/finetune.py", line 290, in
fire.Fire(train)
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/home/ubuntu/lora/alpaca-lora-main/finetune.py", line 280, in train
trainer.train()
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/transformers/trainer.py", line 1555, in train
return inner_training_loop(
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/transformers/trainer.py", line 1965, in _inner_training_loop
self._load_best_model()
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/transformers/trainer.py", line 2184, in _load_best_model
model.load_adapter(self.state.best_model_checkpoint, model.active_adapter)
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/peft/peft_model.py", line 629, in load_adapter
adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs)
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/peft/utils/save_and_load.py", line 222, in load_peft_weights
adapters_weights = safe_load_file(filename, device=device)
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/safetensors/torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
And in checkpoint-1000, adapter_model.safetensors is saved in the .safetensors format. I checked the official fine-tuning weights, and they are in the adapter_model.bin format. Why is that?
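If you want checkpoints in the older adapter_model.bin layout instead, recent transformers versions expose a save_safetensors flag on TrainingArguments; a minimal sketch (the output directory is hypothetical, other arguments as in finetune.py):

```python
from transformers import TrainingArguments

# save_safetensors=False makes Trainer write .bin checkpoint files instead of
# .safetensors. Older transformers/peft releases saved .bin by default, which
# is likely why the official adapter weights ship as adapter_model.bin.
training_args = TrainingArguments(
    output_dir="./lora-alpaca",  # hypothetical output directory
    save_safetensors=False,
    # ... remaining arguments as in finetune.py ...
)
```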