
404 not found #3513

Closed
javachens opened this issue Mar 25, 2024 · 7 comments
Assignees
Labels
bug Something isn't working

Comments

@javachens

[screenshot attachment: 企业微信截图_17113550771807]

==============================Langchain-Chatchat Configuration==============================
OS: Windows-10-10.0.19045-SP0.
Python version: 3.10.5 (tags/v3.10.5:f377153, Jun 6 2022, 16:14:13) [MSC v.1929 64 bit (AMD64)]
Project version: v0.2.10
langchain version: 0.0.354; fastchat version: 0.2.35

Current text splitter: ChineseRecursiveTextSplitter
Currently running LLM models: ['chatglm3-6b', 'zhipu-api', 'openai-api'] @ cpu
{'device': 'cuda',
'host': '127.0.0.1',
'infer_turbo': False,
'model_path': 'E:\zzxzpa\Langchain-Chatchat\chatglm3-6b',
'model_path_exists': True,
'port': 20002}
{'api_key': '',
'device': 'auto',
'host': '127.0.0.1',
'infer_turbo': False,
'online_api': True,
'port': 21001,
'provider': 'ChatGLMWorker',
'version': 'glm-4',
'worker_class': <class 'server.model_workers.zhipu.ChatGLMWorker'>}
{'api_base_url': 'https://api.openai.com/v1',
'api_key': '',
'device': 'auto',
'host': '127.0.0.1',
'infer_turbo': False,
'model_name': 'gpt-4',
'online_api': True,
'openai_proxy': '',
'port': 20002}
Current embeddings model: bge-large-zh @ cpu
==============================Langchain-Chatchat Configuration==============================
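One thing that stands out in the configuration dump above: the chatglm3-6b worker and the openai-api worker are both configured with `'port': 20002`, so only one of them can actually bind to it. A minimal sketch (worker names and ports copied straight from the dump) to flag such collisions:

```python
# Detect port collisions among the worker configs quoted above.
from collections import Counter

worker_ports = {
    "chatglm3-6b": 20002,  # from the first config dict
    "zhipu-api": 21001,
    "openai-api": 20002,   # same port as chatglm3-6b
}

# Any port claimed by more than one worker is a collision.
dupes = [p for p, n in Counter(worker_ports.values()).items() if n > 1]
print(dupes)  # → [20002]
```

Whichever worker starts second will fail to bind (or be skipped), which is one plausible reason a request to that port never reaches the expected application.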

2024-03-25 16:10:13,307 - startup.py[line:655] - INFO: Starting services:
2024-03-25 16:10:13,307 - startup.py[line:656] - INFO: To view llm_api logs, go to E:\zzxzpa\Langchain-Chatchat\logs
E:\code\baseProject\venv\lib\site-packages\trio\_core\_multierror.py:511: RuntimeWarning: You seem to already have a custom sys.excepthook handler installed. I'll skip installing Trio's custom handler, but this means MultiErrors will not show full tracebacks.
warnings.warn(
E:\code\baseProject\venv\lib\site-packages\langchain_core\_api\deprecation.py:117: LangChainDeprecationWarning: The model startup feature will be rewritten in Langchain-Chatchat 0.3.x to support more modes and faster startup; the related features in 0.2.x will be deprecated
warn_deprecated(
(the same Trio RuntimeWarning repeats three more times)
2024-03-25 16:10:21 | INFO | model_worker | Register to controller
2024-03-25 16:10:21 | ERROR | stderr | INFO: Started server process [17400]
2024-03-25 16:10:21 | ERROR | stderr | INFO: Waiting for application startup.
2024-03-25 16:10:21 | ERROR | stderr | INFO: Application startup complete.
2024-03-25 16:10:21 | ERROR | stderr | INFO: Uvicorn running on http://127.0.0.1:20000 (Press CTRL+C to quit)
2024-03-25 16:10:23 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 306c51f3 ...
2024-03-25 16:10:23 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting eos_token is not supported, use the default one.
2024-03-25 16:10:23 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting pad_token is not supported, use the default one.
2024-03-25 16:10:23 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting unk_token is not supported, use the default one.
2024-03-25 16:10:44 | INFO | stdout | INFO: 127.0.0.1:1720 - "GET / HTTP/1.1" 404 Not Found
2024-03-25 16:11:20 | INFO | stdout | INFO: 127.0.0.1:1721 - "GET / HTTP/1.1" 404 Not Found
2024-03-25 16:11:21 | INFO | stdout | INFO: 127.0.0.1:1721 - "GET / HTTP/1.1" 404 Not Found
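A FastAPI/Uvicorn app only answers the routes it registers, so `GET /` returning 404 does not by itself prove the service is broken: the API may live entirely under other paths. A minimal standard-library sketch reproducing the log lines above (the `/v1/models` route is purely illustrative, not necessarily a real Langchain-Chatchat route):

```python
# A server that registers only /v1/models returns 404 for GET /,
# just like the Uvicorn access log in this issue.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.error import HTTPError
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/v1/models":  # hypothetical API route
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_error(404)       # everything else, including "/"

    def log_message(self, *args):      # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

try:
    urlopen(f"http://127.0.0.1:{port}/")
    status = 200
except HTTPError as e:
    status = e.code
server.shutdown()
print(status)  # → 404
```

So the thing to check is whether the routes the service actually registers respond (for example, FastAPI apps serve an interactive docs page at `/docs`); if they do, the 404 on `/` is expected behavior rather than a failure.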

@javachens javachens added the bug Something isn't working label Mar 25, 2024

dosubot bot commented Mar 25, 2024

It seems like you've got this under control, if you want help or have specific questions, let me know what I can do for you!


@zRzRzRzRzRzRzR zRzRzRzRzRzRzR self-assigned this Mar 25, 2024
@javachens
Author

Can anyone answer this question?


dosubot bot commented Mar 26, 2024

It seems like this issue requires a deeper look, @imClumsyPanda could you please lend your expertise?

@glide-the
Collaborator

It looks like you are running a fairly large model on CPU; please check your CUDA configuration.
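A minimal sketch of that check, assuming PyTorch is the inference backend (the `cuda_status` helper is hypothetical, not part of the project):

```python
# Verify whether PyTorch can actually see a CUDA device. If it cannot,
# the worker falls back to CPU, which makes chatglm3-6b extremely slow.
def cuda_status() -> str:
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        return f"CUDA available: {torch.cuda.get_device_name(0)}"
    return "CUDA not available: model will run on CPU"

print(cuda_status())
```

If this reports that CUDA is unavailable despite `'device': 'cuda'` in the config, the worker will silently run on CPU, matching the `@ cpu` shown in the configuration banner.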

@MoncozGC

MoncozGC commented Apr 3, 2024

I have the same problem. Could you advise how to solve it? @glide-the @imClumsyPanda

@zRzRzRzRzRzRzR
Collaborator

Could it be that it simply can't keep up running on CPU?

@zincopper

Can anyone answer this question?


6 participants