LlavaNextProcessor.__init__() got an unexpected keyword argument 'image_token' #31465
Comments
@M-Fannilla hi, I just updated the llava weights on the Hub, which caused the error. I will revert the changes soon.
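Until the Hub revert lands, a common workaround in cases like this is to pin the processor to an older checkpoint revision. This is only a sketch: `"main"` below is a placeholder — replace it with an actual known-good commit hash from the model's commit history on the Hub.

```python
from transformers import LlavaNextProcessor

model_name = "llava-hf/llava-v1.6-vicuna-7b-hf"

# revision= accepts a branch name, tag, or commit hash on the Hub.
# "main" is a placeholder here; substitute the commit hash from before
# the breaking checkpoint update.
processor = LlavaNextProcessor.from_pretrained(
    model_name, revision="main", padding_side="left"
)
```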
@zucchini-nlp Great, thanks!
@M-Fannilla should be working now, closing the issue as resolved!
There is a new issue: `Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback)`. I did not have this one before.
@M-Fannilla this one isn't related to the model checkpoint, but rather to your installed packages. It seems flash-attn wasn't installed properly or has a dependency issue. Try uninstalling flash-attn and loading again. If that doesn't work, feel free to open a new issue :)
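The uninstall/reinstall suggested above could look like the following (a sketch of typical environment commands, assuming a pip-managed environment; flash-attn's own install docs recommend `--no-build-isolation`):

```shell
# Remove the broken flash-attn build
pip uninstall -y flash-attn

# Optionally reinstall it; skip this step to fall back to the
# default (non-flash) attention implementation in transformers
pip install flash-attn --no-build-isolation
```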
System Info
transformers 4.41.2
Python 3.10.12
Nvidia A40 / Runpod.io
When working with LlavaNextProcessor I get errors that were not there before.
Error:
LlavaNextProcessor.__init__() got an unexpected keyword argument 'image_token'
This did not happen previously (~a week ago).
Who can help?
No response
Information
Reproduction
import torch
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

model_name = "llava-hf/llava-v1.6-vicuna-7b-hf"
processor = LlavaNextProcessor.from_pretrained(
    model_name, padding_side='left'
)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_name, torch_dtype=torch.float16, low_cpu_mem_usage=True
).to("cuda").eval()
Expected behavior
I would expect the processor and model to initialize without errors.