[BUG]: LlamaRM model has no attribute 'resize_token_embeddings' #3389
Comments
Met the same issue:
File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/x-gpu1/code/Users/x/code/ColossalAI/applications/Chat/coati/utils/tokenizer_utils.py", line 68, in smart_tokenizer_and_embedding_resize
File "/anaconda/envs/coati/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
Met the same issue.
Met the same issue.
I met this error too, but if you are training stage 2, you should change the pretrain argument to the Coati7B model you trained in stage 1, instead of the LLaMA-7B provided by Hugging Face.
Thank you very much for the reminder!
Met the same problem, and I added this code at line 72 of train_reward_model.py, imitating train_sft.py: tokenizer = LlamaTokenizer.from_pretrained(args.pretrain)
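A related call-site workaround is to resize the embeddings on the wrapped transformer rather than on the reward-model wrapper itself. The sketch below is a minimal mock of that idea; `FakeTransformer` and `FakeRM` are hypothetical stand-ins for the real Hugging Face model and coati's LlamaRM, not the actual classes.

```python
# Hypothetical mock of the wrapper structure: the Hugging Face model
# lives in the wrapper's .model attribute, as LlamaRM.model does.

class FakeTransformer:
    """Stand-in for a Hugging Face model that has resize_token_embeddings."""
    def __init__(self, vocab_size: int):
        self.vocab_size = vocab_size

    def resize_token_embeddings(self, new_size: int) -> int:
        self.vocab_size = new_size
        return self.vocab_size


class FakeRM:
    """Stand-in for LlamaRM: wraps the transformer, no resize method itself."""
    def __init__(self, model: FakeTransformer):
        self.model = model


rm = FakeRM(FakeTransformer(32000))
# Unwrap before resizing: target the inner model if the object is a wrapper.
target = rm.model if hasattr(rm, "model") else rm
target.resize_token_embeddings(32001)
print(rm.model.vocab_size)  # 32001
```

This keeps smart_tokenizer_and_embedding_resize unchanged and only adjusts which object the resize call is made on.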
Me too, I met the same problem: ubuntu@VM-0-5-ubuntu:~/Desktop/bookDesk202303061/text-generation-webui$ python3 server.py --cpu --model llama-7b-hf --lora chinese-alpaca-lora-7b llama.cpp: loading model from models/llama-7b-hf/ggml-alpaca-7b-q4.bin What should I do to fix it?
🐛 Describe the bug
LlamaRM is not a Hugging Face transformers module but a LoraModule, while LlamaRM.model is a Hugging Face transformer model. So LlamaRM has no method resize_token_embeddings, but LlamaRM.model does. When using Llama to train the RM, it raises: AttributeError: 'LlamaRM' object has no attribute 'resize_token_embeddings'
at line 68 in coati/utils/tokenizer_utils.py
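The structure described above (a non-transformers wrapper around a transformers model) can also be fixed inside the wrapper, by delegating the call to the wrapped model. The sketch below uses hypothetical stand-in classes (`InnerModel`, `RewardModelWrapper`), not the real coati or transformers code:

```python
class InnerModel:
    """Stand-in for the Hugging Face model stored in LlamaRM.model,
    which does implement resize_token_embeddings."""
    def __init__(self, vocab_size: int):
        self.vocab_size = vocab_size

    def resize_token_embeddings(self, new_size: int) -> int:
        self.vocab_size = new_size
        return self.vocab_size


class RewardModelWrapper:
    """Stand-in for LlamaRM. One possible fix: forward the missing
    method to the wrapped transformer so callers need not unwrap."""
    def __init__(self, model: InnerModel):
        self.model = model

    def resize_token_embeddings(self, new_size: int) -> int:
        # Delegate to the inner Hugging Face model.
        return self.model.resize_token_embeddings(new_size)


rm = RewardModelWrapper(InnerModel(32000))
rm.resize_token_embeddings(32001)  # no AttributeError now
print(rm.model.vocab_size)  # 32001
```

With this delegation in place, smart_tokenizer_and_embedding_resize can call resize_token_embeddings on the wrapper directly, the same way it does for plain Hugging Face models.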
Environment
No response