
TypeError: 'NoneType' object is not iterable (local deployment) #2314

Closed
mountain1275 opened this issue Dec 7, 2023 · 33 comments

Labels
bug Something isn't working

Comments
@mountain1275

Problem description
When deploying locally with python startup.py --all-webui, both the backend and the web page throw an error as soon as the page is opened after startup. The error is:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "/workspace/Langchain-Chatchat-0.2.7/webui.py", line 64, in <module>
    pages[selected_page]["func"](api=api, is_lite=is_lite)
  File "/workspace/Langchain-Chatchat-0.2.7/webui_pages/dialogue/dialogue.py", line 83, in dialogue_page
    running_models = list(api.list_running_models())
TypeError: 'NoneType' object is not iterable
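
For context on the failing line: the WebUI's API client logs HTTP errors and returns None rather than raising (see the "error when post /llm_model/list_running_models" lines later in this thread), so list(None) is what actually blows up. A defensive one-line sketch of that call site, not the project's actual fix:

running_models = list(api.list_running_models() or [])  # treat a None API response as "no running models"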

Steps to reproduce
Installed the dependencies as usual:

pip install -r requirements.txt 
pip install -r requirements_webui.txt  
python init_database.py --recreate-vs

Changed the contents of the config files; all modifications are listed below.

Changed the model paths (the paths themselves are confirmed correct):
EMBEDDING_MODEL = "m3e-base"
"m3e-base": "../m3e-base"
"chatglm3-6b": "../THUDM/chatglm3-6b"
and in VLLM_MODEL_DICT: "chatglm3-6b": "../THUDM/chatglm3-6b"

Changed the default model list and commented out the unused entries (I also tried keeping them, with the same error):
LLM_MODELS = ["chatglm3-6b"]  # "zhipu-api", "openai-api"

In server_config.py, changed the port and the GPU settings:

# webui.py server
WEBUI_SERVER = {
    "host": DEFAULT_BIND_HOST,
    "port": 8080,
}

"gpus": "0,1",  # GPUs to use, given as a str such as "0,1"; if this has no effect, specify CUDA_VISIBLE_DEVICES="0,1" instead
"num_gpus": 2  # number of GPUs to use

Environment information
Setup: deployed this project in a PyTorch 2.1 image pulled onto a remote server.
==============================Langchain-Chatchat Configuration==============================
OS: Linux-5.15.0-46-generic-x86_64-with-glibc2.31.
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
Project version: v0.2.7
langchain version: 0.0.340. fastchat version: 0.2.33

Current text splitter: ChineseRecursiveTextSplitter
Currently running LLM models: ['chatglm3-6b'] @ cuda
{'device': 'cuda',
'gpus': '0,1',
'host': '0.0.0.0',
'infer_turbo': False,
'model_path': '../THUDM/chatglm3-6b',
'num_gpus': 2,
'port': 20002}
Current embeddings model: m3e-base @ cuda

Additional information
I've seen similar problems in other issues, including #2122 and #2094, but no solutions. One reply said it's an OpenAI-related bug and advised doing the local deployment exactly as the wiki describes. I've followed the wiki as closely as I can, so perhaps the error comes from not fully commenting out the OpenAI-related settings. What else do I need to change?

@mountain1275 mountain1275 added the bug Something isn't working label Dec 7, 2023
@liunux4odoo
Collaborator

Try changing model_path to an absolute path. If that doesn't work, please post the complete error output.

@mountain1275
Author

I've changed them to the corresponding absolute paths:

"m3e-base": "/workspace/m3e-base"
"chatglm3-6b": "/workspace/THUDM/chatglm3-6b"

but the same problem persists.

==============================Langchain-Chatchat Configuration==============================
OS: Linux-5.15.0-46-generic-x86_64-with-glibc2.31.
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
Project version: v0.2.7
langchain version: 0.0.340. fastchat version: 0.2.33


Current text splitter: ChineseRecursiveTextSplitter
Currently running LLM models: ['chatglm3-6b'] @ cuda
{'device': 'cuda',
 'gpus': '0,1',
 'host': '0.0.0.0',
 'infer_turbo': False,
 'model_path': '/workspace/THUDM/chatglm3-6b',
 'num_gpus': 2,
 'port': 20002}
Current embeddings model: m3e-base @ cuda
==============================Langchain-Chatchat Configuration==============================


2023-12-08 00:39:38,190 - startup.py[line:647] - INFO: Starting services:
2023-12-08 00:39:38,190 - startup.py[line:648] - INFO: To view llm_api logs, go to /workspace/Langchain-Chatchat-0.2.7/logs
2023-12-08 00:39:42 | ERROR | stderr | INFO:     Started server process [2539]
2023-12-08 00:39:42 | ERROR | stderr | INFO:     Waiting for application startup.
2023-12-08 00:39:42 | ERROR | stderr | INFO:     Application startup complete.
2023-12-08 00:39:42 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit)
2023-12-08 00:39:43 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker f107b056 ...
Loading checkpoint shards:   0%|          | 0/7 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|██████████| 7/7 [00:12<00:00,  1.76s/it]
2023-12-08 00:39:56 | ERROR | stderr | 
2023-12-08 00:39:56 | INFO | model_worker | Register to controller
INFO:     Started server process [2578]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:7861 (Press CTRL+C to quit)

==============================Langchain-Chatchat Configuration==============================
OS: Linux-5.15.0-46-generic-x86_64-with-glibc2.31.
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
Project version: v0.2.7
langchain version: 0.0.340. fastchat version: 0.2.33


Current text splitter: ChineseRecursiveTextSplitter
Currently running LLM models: ['chatglm3-6b'] @ cuda
{'device': 'cuda',
 'gpus': '0,1',
 'host': '0.0.0.0',
 'infer_turbo': False,
 'model_path': '/workspace/THUDM/chatglm3-6b',
 'num_gpus': 2,
 'port': 20002}
Current embeddings model: m3e-base @ cuda


Server runtime info:
    OpenAI API Server: http://127.0.0.1:20000/v1
    Chatchat  API  Server: http://127.0.0.1:7861
    Chatchat WEBUI Server: http://0.0.0.0:8080
==============================Langchain-Chatchat Configuration==============================



Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.


  You can now view your Streamlit app in your browser.

  URL: http://0.0.0.0:8080

{'base_url': 'http://127.0.0.1:7861', 'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': 'http://10.3.131.151:3128', 'https://': 'http://10.3.131.151:3128', 'all://': None, 'localhost': None, '127.0.0.0/8': None, '10.10.11.0/24': None, '10.244.0.0/16': None}}
2023-12-08 00:40:58,120 - utils.py[line:94] - ERROR: ValueError: error when post /llm_model/list_running_models: Proxy keys should use proper URL forms rather than plain scheme strings. Instead of "localhost", use "localhost://"
(the same proxies dict and ValueError repeat five more times with later timestamps)
2023-12-08 00:40:58.619 Uncaught app exception
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "/workspace/Langchain-Chatchat-0.2.7/webui.py", line 64, in <module>
    pages[selected_page]["func"](api=api, is_lite=is_lite)
  File "/workspace/Langchain-Chatchat-0.2.7/webui_pages/dialogue/dialogue.py", line 83, in dialogue_page
    running_models = list(api.list_running_models())
TypeError: 'NoneType' object is not iterable

The reason I haven't upgraded to 0.2.8 is that I also hit the communication problem there, and the dev branch throws other errors, so I'm staying on 0.2.7 and hoping to track this one down.
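
The ValueError in these logs comes from httpx's proxy-key validation: proxy mount keys must be URL patterns, so a bare "localhost" key (picked up from the machine's proxy/no_proxy environment) is rejected, the POST never happens, and the client helper returns None. A minimal reproduction sketch, assuming the httpx 0.25.x behavior reported in this thread:

import httpx

try:
    # Bare host names are invalid proxy keys; httpx expects "localhost://" or a full URL pattern.
    httpx.Client(proxies={"localhost": None, "http://": "http://10.3.131.151:3128"})
except ValueError as e:
    print(e)  # Proxy keys should use proper URL forms rather than plain scheme strings. ...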

@liunux4odoo
Collaborator

@mountain1275
Author

Can this 127.0.0.1 be changed to 0.0.0.0? There's a jump host between the server and my local machine, so I can't reach the server's port 7861 directly; port 8080 above only works because I changed it to 0.0.0.0 and mapped it through a tunnel. I want to change this localhost to 0.0.0.0 too, but the changes below don't do it:

# Default bind host for each server. If changed to "0.0.0.0", the host of every XX_SERVER below needs updating as well
DEFAULT_BIND_HOST = "0.0.0.0" if sys.platform != "win32" else "127.0.0.1"

# webui.py server
WEBUI_SERVER = {
    "host": DEFAULT_BIND_HOST,
    "port": 8080,
}

# api.py server
API_SERVER = {
    "host": "0.0.0.0",
    "port": 7861,
}

# fastchat openai_api server
FSCHAT_OPENAI_API = {
    "host": "0.0.0.0",
    "port": 20000,
}

@talentjls

The same error occurred

@black-fruit

Has this been solved?

@zRzRzRzRzRzRzR
Collaborator

(quoting @mountain1275's question above about changing 127.0.0.1 to 0.0.0.0, config snippet included)

Yes, it can be changed to 0.0.0.0.

@mountain1275
Author

Could you point out exactly what needs to change? The changes I've made so far haven't worked.
Below are my modifications: every explicit 127 in the model and server config files has been replaced, yet the startup output still shows 127 (local access only) in front of port 7861.

# Default bind host for each server. If changed to "0.0.0.0", the host of every XX_SERVER below needs updating as well
DEFAULT_BIND_HOST = "0.0.0.0"  # if sys.platform != "win32" else "127.0.0.1"

# webui.py server
WEBUI_SERVER = {
    "host": DEFAULT_BIND_HOST,
    "port": 8080,
}

# api.py server
API_SERVER = {
    "host": "0.0.0.0",
    "port": 7861,
}

# fastchat openai_api server
FSCHAT_OPENAI_API = {
    "host": "0.0.0.0",
    "port": 20000,
}

The output below shows the change didn't take effect, and the subsequent run still fails with the same error as above.


==============================Langchain-Chatchat Configuration==============================
OS: Linux-5.15.0-46-generic-x86_64-with-glibc2.31.
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
Project version: v0.2.7
langchain version: 0.0.340. fastchat version: 0.2.33


Current text splitter: ChineseRecursiveTextSplitter
Currently running LLM models: ['chatglm3-6b'] @ cuda
{'device': 'cuda',
 'gpus': '0,1',
 'host': '0.0.0.0',
 'infer_turbo': False,
 'model_path': '/workspace/THUDM/chatglm3-6b',
 'num_gpus': 2,
 'port': 20002}
Current embeddings model: m3e-base @ cuda


Server runtime info:
    OpenAI API Server: http://127.0.0.1:20000/v1
    Chatchat  API  Server: http://127.0.0.1:7861
    Chatchat WEBUI Server: http://0.0.0.0:8080
==============================Langchain-Chatchat Configuration==============================



Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.


  You can now view your Streamlit app in your browser.

  URL: http://0.0.0.0:8080
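
One plausible explanation for the stubborn 127.0.0.1 lines: the startup banner prints client-facing addresses, and a common convention is to substitute 127.0.0.1 whenever the bind host is 0.0.0.0, since 0.0.0.0 is a bind address rather than a reachable destination. The servers may well be listening on all interfaces even though the banner says 127.0.0.1. A hypothetical sketch of that convention (not verified against the project's own helpers):

def api_address(host: str, port: int) -> str:
    # Report a concrete client address even when the server binds to all interfaces.
    client_host = "127.0.0.1" if host == "0.0.0.0" else host
    return f"http://{client_host}:{port}"

print(api_address("0.0.0.0", 7861))  # http://127.0.0.1:7861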

@mountain1275
Author

Has this been solved?

@black-fruit Not yet. Or you could try @liunux4odoo's suggestion above. I'm currently stuck at accessing port 7861; if you can reach it, open http://127.0.0.1:7861/docs and invoke /list_running_models manually to see what error it reports.
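
The same check can be scripted; the endpoint path is taken from the error logs above and the address from the startup banner (a quick diagnostic sketch; the empty JSON body is an assumption):

import httpx

resp = httpx.post("http://127.0.0.1:7861/llm_model/list_running_models", json={})
print(resp.status_code)
print(resp.text)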

@kimsoso

kimsoso commented Dec 11, 2023

Same error here; with the defaults, this problem appears whenever the model being called is not chatglm.

@LukeCara

Same problem here, and the model paths are all absolute.

@LukeCara

It's resolved now, though I don't know what fixed it.

@black-fruit

Same error here; with the defaults, this problem appears whenever the model being called is not chatglm.

For me it's the opposite: it only happens when I call glm.

@black-fruit

Oddly enough, switching to the openai api fixed it for me, but glm is what I actually need.

@black-fruit

At the moment, testing on mps hangs at the torch step...

@black-fruit

Maybe a bounty would get this fixed at the root.

@Wel2018

Wel2018 commented Dec 13, 2023

I just hit this problem too; changing all the port numbers to ones starting with 3 magically fixed it.
[image]
I also found that launching from the command line over VS Code SSH works fine; the error only appears when starting directly on the server.

@lhua1980

Haven't used this in a while; reinstalled on Ubuntu and ran into the same problem.
Marking this thread~

@black-fruit

Unbelievable.

@black-fruit

Haven't the developers considered that the problem might be in their own program?

@ForgetThatNight

This project has too many bugs, and the code is messy and hard to debug; I'll keep using wenda instead.

@zRzRzRzRzRzRzR zRzRzRzRzRzRzR self-assigned this Dec 18, 2023
@zRzRzRzRzRzRzR
Collaborator

No one here can test mps. The hardware we use is listed in the wiki; make sure your system environment meets those requirements.

@mountain1275
Author

mountain1275 commented Dec 26, 2023

My problem is solved; it turned out to be the server's proxy configuration.
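
For anyone hitting the same proxy trap: the invalid "localhost" proxy key in the logs above is derived from proxy-related environment variables, so clearing them before launch is a plausible workaround (a sketch using the conventional variable names; not a project-documented fix):

import os
import subprocess

# Drop inherited proxy settings so httpx builds a clean proxies mapping.
for var in ("http_proxy", "https_proxy", "all_proxy", "no_proxy",
            "HTTP_PROXY", "HTTPS_PROXY", "ALL_PROXY", "NO_PROXY"):
    os.environ.pop(var, None)

subprocess.run(["python", "startup.py", "-a"], check=True)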

@Rikao999

Has anyone managed to solve this? I reinstalled httpx==0.25 and it still errors out.

@tyt327

tyt327 commented Jan 5, 2024

Has anyone solved this? I have the same problem...

@AIfengstudy

Has anyone solved this????????? The problem is easy to reproduce: follow the official instructions, change only the chatglm3-6b model path in the config file, leave everything else as-is, and this error appears.

@lidong10

Same symptoms as the OP. After puzzling over it for two days I noticed that I often run the pigcha VPN, and that Settings → Network → Network Proxy had been switched to Manual; setting it back to Automatic fixed it. My guess is that pigcha changed the network proxy.

@488283943

It's resolved now, though I don't know what fixed it.

How did you solve it? I'm using chatglm2-6b and have tried both relative and absolute paths, but it still reports TypeError: 'NoneType' object is not iterable. The web UI itself opens fine.

@lidong10

lidong10 commented Mar 19, 2024 via email

@lidong10

lidong10 commented Mar 19, 2024 via email

@488283943

It's the computer's network settings. I use a VPN, and the VPN changes my network settings automatically. Are you on Win11 or Ubuntu? Just check the network settings in the system. Sent from my phone.

I'm on Win10, and it behaves the same whether the VPN is on or off. Running streamlit run webui.py reports TypeError: 'NoneType' object is not iterable; with python startup.py -a the only error so far is in the screenshot:
[screenshot failed to upload: 微信图片_20240319164546.png]

@SHIsue

SHIsue commented Apr 10, 2024

(quoting @mountain1275's reply above about calling /list_running_models manually via http://127.0.0.1:7861/docs)

It returns a 500; this feels like a VPN/proxy problem.

@SHIsue

SHIsue commented Apr 10, 2024

(quoting @lidong10's email reply and @488283943's follow-up above about checking the system's network settings)

How exactly do I change those network settings?
