AttributeError: module 'transformers_modules.hf.modeling_chatglm' has no attribute 'ChatGLMForConditionalGenerationWithImage'. Did you mean: 'ChatGLMForConditionalGeneration'? #328
Comments
The HF version of VisualGLM hasn't been maintained in a while; try it with an older version of the transformers library.
I see. So is the plain version on git the only one now, i.e., the .py files without the _hf suffix?
Yes, but the HF version still works fine; you just need to downgrade the transformers library.
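A downgrade pin might look like the following. The exact version below is an assumption based on other ChatGLM-family reports, not a version confirmed in this thread; check the visualglm-6b model card for the release it was tested against.

```shell
# Pin an older transformers release before loading the HF checkpoint.
# 4.33.2 is a guess; adjust to whatever the model card recommends.
pip install "transformers==4.33.2"
```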
I tried downgrading, but it still fails with the same error.
Alternatively, do you know of any other large multimodal image models?
Then your code download is probably incomplete; the visualglm-6b code does define the ChatGLMForConditionalGenerationWithImage class. Try searching your local copy for it: https://huggingface.co/THUDM/visualglm-6b/blob/main/modeling_chatglm.py#L1341
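The diagnosis above can be sketched without the real model files. This hypothetical simulation builds a module that, like an incompletely downloaded modeling_chatglm.py, defines only the base class; that is exactly the state which produces the AttributeError in this issue (the module and class names come from the thread, the simulation itself is illustrative):

```python
import types

# Simulate a local modeling_chatglm.py that was not fully downloaded:
# it defines the base class but not the image-capable variant.
mod = types.ModuleType("modeling_chatglm")

class ChatGLMForConditionalGeneration:  # stand-in for the real class
    pass

mod.ChatGLMForConditionalGeneration = ChatGLMForConditionalGeneration

# The AttributeError in this issue means exactly this second check fails:
print(hasattr(mod, "ChatGLMForConditionalGeneration"))           # True
print(hasattr(mod, "ChatGLMForConditionalGenerationWithImage"))  # False
```

If the second check is False against your real local copy, re-download modeling_chatglm.py from the model repo.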
You can try CogVLM: https://github.com/THUDM/CogVLM (though the open-source version does not support Chinese yet).
You're right. I rechecked my download and found the problem. Thank you very much!
As the title says, has anyone run into this error?
AttributeError: module 'transformers_modules.hf.modeling_chatglm' has no attribute 'ChatGLMForConditionalGenerationWithImage'. Did you mean: 'ChatGLMForConditionalGeneration'?
I downloaded all of the visualglm-6b files directly from HF, then wrote a .py file following the demo:
from transformers import AutoTokenizer, AutoModel

# trust_remote_code lets transformers execute the model code shipped with the checkpoint
tokenizer = AutoTokenizer.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True).half().cuda()  # requires a CUDA GPU
image_path = "your image path"  # placeholder: path to a local image file
response, history = model.chat(tokenizer, image_path, "描述这张图片。", history=[])
print(response)
response, history = model.chat(tokenizer, image_path, "这张图片可能是在什么场所拍摄的?", history=history)
print(response)
Then, after changing the path to point at the directory I downloaded, running it raises the error above.