
Problem encountered during model inference #236

Open
XuWentaotao opened this issue Aug 16, 2023 · 24 comments

Comments

@XuWentaotao

I get an error when running the following code:
import argparse
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
from model import chat, VisualGLMModel
model, model_args = VisualGLMModel.from_pretrained('visualglm-6b', args=argparse.Namespace(fp16=True, skip_init=True))
from sat.model.mixins import CachedAutoregressiveMixin
model.add_mixin('auto-regressive', CachedAutoregressiveMixin())
image_path = "your image path or URL"
response, history, cache_image = chat(image_path, model, tokenizer, "描述这张图片。", history=[])
print(response)
response, history, cache_image = chat(None, model, tokenizer, "这张图片可能是在什么场所拍摄的?", history=history, image=cache_image)
print(response)
The model files have all been downloaded locally and the paths have been updated, but running the code raises:
AttributeError: 'Namespace' object has no attribute 'model_parallel_size'
What could be causing this?

@1049451037
Member

1049451037 commented Aug 16, 2023

Update sat and the VisualGLM-6B code to the latest version:

pip install --upgrade SwissArmyTransformer
git clone https://github.com/THUDM/VisualGLM-6B
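
After upgrading, it is worth confirming which sat release the interpreter actually sees (a small check of my own using the standard library, not part of the original reply):

from importlib.metadata import version

# Print the installed SwissArmyTransformer (sat) package version.
print(version("SwissArmyTransformer"))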

@XuWentaotao
Author

Everything is already the latest version; after updating I still get the same error.

/docker/data/visualglm/inference_model_swiss.py:5 in <module>
    2 from transformers import AutoTokenizer
    3 tokenizer = AutoTokenizer.from_pretrained("/docker/data/chatglm_6b", trust_rem
    4 from model import chat, VisualGLMModel
  ❱ 5 model, model_args = VisualGLMModel.from_pretrained('/docker/data/visualglm/wei
    6 from sat.model.mixins import CachedAutoregressiveMixin
    7 model.add_mixin('auto-regressive', CachedAutoregressiveMixin())
    8 image_path = "/docker/data/visualglm/tmp_img/test.png"

/mc39/lib/python3.9/site-packages/sat/model/base_model.py:215 in from_pretrained
    212     @classmethod
    213     def from_pretrained(cls, name, args=None, *, home_path=None, url=None, prefix='', bu
    214         if build_only or 'model_parallel_size' not in overwrite_args:
  ❱ 215             return cls.from_pretrained_base(name, args=args, home_path=home_path, url=ur
    216         else:
    217             new_model_parallel_size = overwrite_args['model_parallel_size']
    218             model, model_args = cls.from_pretrained_base(name, args=args, home_path=home

/mc39/lib/python3.9/site-packages/sat/model/base_model.py:207 in from_pretrained_base
    204             args = cls.get_args()
    205         args = update_args_with_file(args, path=os.path.join(model_path, 'model_config.j
    206         args = overwrite_args_by_dict(args, overwrite_args=overwrite_args)
  ❱ 207         model = get_model(args, cls, **kwargs)
    208         if not build_only:
    209             load_checkpoint(model, args, load_path=model_path, prefix=prefix)
    210         return model, args

/mc39/lib/python3.9/site-packages/sat/model/base_model.py:352 in get_model
    349     # pop params_dtype from kwargs
    350     params_dtype = kwargs.pop('params_dtype')
    351
  ❱ 352     model = model_cls(args, params_dtype=params_dtype, **kwargs)
    353
    354     if mpu.get_data_parallel_rank() == 0:
    355         print_all(' > number of parameters on model parallel rank {}: {}'.format(

/docker/data/visualglm/model/visualglm.py:32 in __init__
    29
    30 class VisualGLMModel(ChatGLMModel):
    31     def __init__(self, args, transformer=None, **kwargs):
  ❱ 32         super().__init__(args, transformer=transformer, **kwargs)
    33         self.image_length = args.image_length
    34         self.add_mixin("eva", ImageMixin(args))
    35

/mc39/lib/python3.9/site-packages/sat/model/official/chatglm_model.py:167 in __init__
    164
    165 class ChatGLMModel(BaseModel):
    166     def __init__(self, args, transformer=None, **kwargs):
  ❱ 167         super(ChatGLMModel, self).__init__(args, transformer=transformer, activation_fun
    168         del self.transformer.position_embeddings
    169         self.add_mixin("chatglm-final", ChatGLMFinalMixin(args.vocab_size, args.hidden_s
    170         self.add_mixin("chatglm-attn", ChatGLMAttnMixin(args.hidden_size, args.num_atten

/mc39/lib/python3.9/site-packages/sat/model/base_model.py:88 in __init__
    85         else:
    86             # check if model-only mode
    87             from sat.arguments import _simple_init
  ❱ 88             success = _simple_init(model_parallel_size=args.model_parallel_size)
    89
    90             args_dict = {k: (getattr(args, v[0]) if hasattr(args, v[0]) else v[1]) for k
    91

AttributeError: 'Namespace' object has no attribute 'model_parallel_size'
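
For reference, the failing frame looks up model_parallel_size directly on the Namespace passed to from_pretrained. An untested stopgap sketch (my addition, not from the thread; the field and its default of 1 for single-GPU inference are assumptions, and upgrading sat remains the recommended fix) is to supply the attribute explicitly:

import argparse
from model import VisualGLMModel

# Untested workaround sketch: provide the attribute the older sat release expects.
# model_parallel_size=1 assumes single-GPU, non-model-parallel inference.
args = argparse.Namespace(fp16=True, skip_init=True, model_parallel_size=1)
model, model_args = VisualGLMModel.from_pretrained('visualglm-6b', args=args)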

@1049451037
Member

This looks like code you wrote yourself. What happens if you run cli_demo.py directly? I can't reproduce this locally.

@XuWentaotao
Author

This is the second inference example in the README, the one that calls the model through the Hugging Face transformers library. All I did was download the model files locally and change the model path and image path.

@1049451037
Member

I see. The Hugging Face version isn't maintained by me; I'd suggest using the sat version, since the Hugging Face version calls into sat anyway.

@XuWentaotao
Author

Sorry, I misspoke: I am using the sat version, not the Hugging Face one.

@1049451037
Member

Try installing the GitHub version of sat:

git clone https://github.com/THUDM/SwissArmyTransformer
cd SwissArmyTransformer
pip install .
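
To make sure the interpreter then imports the freshly installed GitHub checkout rather than an older copy left in site-packages, a quick check (my addition, not from the original reply):

import sat

# Shows the path of the sat package that actually gets imported.
print(sat.__file__)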

@XuWentaotao
Author

If everything else is correct, could the sat version really be the only culprit? Could it be a problem with the model files instead?

@1049451037
Member

The model files are also a possibility. Did you download something other than the sat model files, or did you modify them after downloading?

@XuWentaotao
Author

It's probably the model files then. I downloaded them from Hugging Face. Where can I download the sat model files to my local machine?

@1049451037
Member

Running cli_demo.py downloads them automatically.

@XuWentaotao
Author

Is there a concrete link for this r2://visualglm-6b.zip? My server's network access is restricted, so I'd like to download it elsewhere and bring it over.

@1049451037
Member

1049451037 commented Aug 17, 2023

There is no concrete link. If you're familiar with rclone, you can also download it with rclone; the relevant configuration parameters are here: https://github.com/THUDM/SwissArmyTransformer/blob/main/sat/resources/download.py#L79

But I still recommend downloading by running the code directly; you can run Python on the outside machine as well:

pip install SwissArmyTransformer --no-deps
from model.visualglm import VisualGLMModel
model, args = VisualGLMModel.from_pretrained('visualglm-6b')

Running those two lines of code will download it automatically.
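
If the restricted machine cannot reach R2 at all, another option (a sketch of mine, not from this reply) is to run the two lines above on a machine with internet access, copy the resulting cache directory over, and point from_pretrained at it via the home_path keyword visible in the traceback's signature; the exact cache layout is an assumption:

import argparse
from model import VisualGLMModel

# Assumption: the directory copied from the online machine contains a
# visualglm-6b/ folder with model_config.json and the checkpoint files.
model, model_args = VisualGLMModel.from_pretrained(
    'visualglm-6b',
    args=argparse.Namespace(fp16=True, skip_init=True),
    home_path='/path/to/copied/sat_models',  # hypothetical location
)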

@XuWentaotao
Author

Running that raises: botocore.exceptions.ProxyConnectionError: Failed to connect to proxy URL:******
All of the company machines go through the corporate network proxy, so downloading is a hassle. Is there a direct URL for downloading visualglm-6b.zip? If not, I'll keep tinkering with it.

@1049451037
Member

1049451037 commented Aug 17, 2023

There is no such URL; Cloudflare does not provide direct links for large files.

In principle the download should work the same way through a proxy. Is it that your proxy cannot reach the internet? (You will at least need a machine that can get online.)

@XuWentaotao
Author

OK, it can get online, but I don't know why it errors out either. In any case, thank you for your patience; I'll look for another way.

@1049451037
Member

If you're behind a proxy, you normally need to set the proxy environment variables. For example, if your script is cli_demo.py, run it like this:

HTTP_PROXY=http://127.0.0.1:port HTTPS_PROXY=http://127.0.0.1:port python cli_demo.py

@XuWentaotao
Author

That command doesn't work on my machine. Setting the environment variables should be doable through some parameter in the code, right? I couldn't find one.

@1049451037
Member

OK, it seems your machine isn't Linux. Then run this:

import os
os.environ['HTTP_PROXY'] = 'http://127.0.0.1:port'
os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:port'
from model.visualglm import VisualGLMModel
model, args = VisualGLMModel.from_pretrained('visualglm-6b')

@XuWentaotao
Author

Now I get:
botocore.exceptions.SSLError: SSL validation failed for https://c8a00746a80e06c4632028e37de24d6e.r2.cloudflarestorage.com/sat/visualglm-6b.zip [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1056)
How do I resolve this error?

@1049451037
Member

Try installing the latest sat from GitHub:

git clone https://github.com/THUDM/SwissArmyTransformer
cd SwissArmyTransformer
pip install . --no-deps

@XuWentaotao
Author

Unfortunately it still reports the same error.

@1049451037
Member

boto/botocore#2630

It looks like you need to upgrade your boto3 version or your Python version.
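
A "self signed certificate in certificate chain" error usually means the corporate proxy intercepts TLS with its own root CA. A hedged alternative to upgrading (my own sketch, not from the thread) is to point botocore at that CA certificate before triggering the download, assuming such a certificate file is available:

import os

# Hypothetical path: wherever the corporate proxy's root CA certificate is exported.
ca_bundle = '/path/to/corporate-ca.pem'

# botocore honours AWS_CA_BUNDLE; requests-based downloads honour REQUESTS_CA_BUNDLE.
os.environ['AWS_CA_BUNDLE'] = ca_bundle
os.environ['REQUESTS_CA_BUNDLE'] = ca_bundle

from model.visualglm import VisualGLMModel
model, args = VisualGLMModel.from_pretrained('visualglm-6b')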

@XuWentaotao
Author

I'll keep looking into how to fix it. Thanks again for your patient help!
