Using the Ollama API: ERROR: APIConnectionError: Caught exception: Connection error. #3878

Closed
Ptianyu opened this issue on Apr 25, 2024 · 1 comment
Labels: bug (Something isn't working)

Comments

Ptianyu commented on Apr 25, 2024

I want to call a large language model directly through the Ollama API.

Operating system: Linux-5.19.0-42-generic-x86_64-with-glibc2.31
Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0]
Project version: v0.2.10
LangChain version: 0.0.354. FastChat version: 0.2.35

Current text splitter: ChineseRecursiveTextSplitter
Currently running LLM models: ['ollama-api'] @ cuda
{'api_base_url': '192.168.1.110/api/generate',
'api_key': 'ollama',
'device': 'cuda',
'host': '0.0.0.0',
'infer_turbo': False,
'model_name': 'llama3',
'online_api': True,
'openai_proxy': '',
'port': 20002}
Current embeddings model: bge-large-zh @ cuda
==============================Langchain-Chatchat Configuration==============================

2024-04-25 14:33:09,572 - startup.py[line:655] - INFO: Starting services:
2024-04-25 14:33:09,572 - startup.py[line:656] - INFO: To view the llm_api logs, go to /home/langchain/Langchain-Chatchat-master/logs
/home/langchain/langchain_chat_env/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:119: LangChainDeprecationWarning: The model startup feature will be rewritten in Langchain-Chatchat 0.3.x with support for more modes and faster startup; the related features in 0.2.x will be deprecated
warn_deprecated(
2024-04-25 14:33:15 | ERROR | stderr | INFO: Started server process [3642]
2024-04-25 14:33:15 | ERROR | stderr | INFO: Waiting for application startup.
2024-04-25 14:33:15 | ERROR | stderr | INFO: Application startup complete.
2024-04-25 14:33:15 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit)
INFO: Started server process [3643]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:7861 (Press CTRL+C to quit)

==============================Langchain-Chatchat Configuration==============================
(the environment, model, and embeddings configuration above is printed again here; duplicate omitted)

Server runtime info:
OpenAI API Server: http://127.0.0.1:20000/v1
Chatchat API Server: http://127.0.0.1:7861
Chatchat WEBUI Server: http://0.0.0.0:8501
==============================Langchain-Chatchat Configuration==============================

Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.

You can now view your Streamlit app in your browser.

URL: http://0.0.0.0:8501

2024-04-25 14:33:40,318 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:49074 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-25 14:33:40,320 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-04-25 14:33:40,453 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:49074 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-25 14:33:40,454 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:49074 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-04-25 14:33:40,463 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
2024-04-25 14:34:14,976 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:43226 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-25 14:34:14,977 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-04-25 14:34:15,099 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:43226 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-25 14:34:15,100 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:43226 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-04-25 14:34:15,116 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:43226 - "POST /llm_model/get_model_config HTTP/1.1" 200 OK
2024-04-25 14:34:17,764 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/get_model_config "HTTP/1.1 200 OK"
2024-04-25 14:34:17,845 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:37024 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-25 14:34:17,846 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-04-25 14:34:17,867 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:37024 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-25 14:34:17,868 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:37024 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-04-25 14:34:17,879 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
2024-04-25 14:34:21,415 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:37030 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-25 14:34:21,416 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-04-25 14:34:21,438 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:37030 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-25 14:34:21,439 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:37030 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-04-25 14:34:21,450 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:37030 - "POST /chat/chat HTTP/1.1" 200 OK
/home/langchain/langchain_chat_env/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:119: LangChainDeprecationWarning: The class ChatOpenAI was deprecated in LangChain 0.0.10 and will be removed in 0.2.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run pip install -U langchain-openai and import as from langchain_openai import ChatOpenAI.
warn_deprecated(
2024-04-25 14:34:21,878 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
2024-04-25 14:34:21,892 - _base_client.py[line:1603] - INFO: Retrying request to /chat/completions in 0.946995 seconds
2024-04-25 14:34:22,840 - _base_client.py[line:1603] - INFO: Retrying request to /chat/completions in 1.596617 seconds
2024-04-25 14:34:24,439 - utils.py[line:38] - ERROR: Connection error.
Traceback (most recent call last):
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/httpx/_transports/default.py", line 67, in map_httpcore_exceptions
yield
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/httpx/_transports/default.py", line 371, in handle_async_request
resp = await self._pool.handle_async_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 167, in handle_async_request
raise UnsupportedProtocol(
httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/openai/_base_client.py", line 1514, in _request
response = await self._client.send(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/httpx/_client.py", line 1646, in send
response = await self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/httpx/_client.py", line 1674, in _send_handling_auth
response = await self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/httpx/_client.py", line 1711, in _send_handling_redirects
response = await self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/httpx/_client.py", line 1748, in _send_single_request
response = await transport.handle_async_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/httpx/_transports/default.py", line 370, in handle_async_request
with map_httpcore_exceptions():
File "/home/langchain/langchain_chat_env/lib/python3.11/contextlib.py", line 158, in exit
self.gen.throw(typ, value, traceback)
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/httpx/_transports/default.py", line 84, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

[the httpx.UnsupportedProtocol traceback above repeats twice more, once for each client retry; duplicates omitted]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/langchain/Langchain-Chatchat-master/server/utils.py", line 36, in wrap_done
await fn
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/langchain/chains/base.py", line 385, in acall
raise e
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/langchain/chains/base.py", line 379, in acall
await self._acall(inputs, run_manager=run_manager)
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/langchain/chains/llm.py", line 275, in _acall
response = await self.agenerate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/langchain/chains/llm.py", line 142, in agenerate
return await self.llm.agenerate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 570, in agenerate_prompt
return await self.agenerate(
^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 530, in agenerate
raise exceptions[0]
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 715, in _agenerate_with_cache
result = await self._agenerate(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/langchain_community/chat_models/openai.py", line 524, in _agenerate
return await agenerate_from_stream(stream_iter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 89, in agenerate_from_stream
async for chunk in stream:
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/langchain_community/chat_models/openai.py", line 490, in _astream
async for chunk in await acompletion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/langchain_community/chat_models/openai.py", line 107, in acompletion_with_retry
return await llm.async_client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1159, in create
return await self._post(
^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/openai/_base_client.py", line 1782, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/openai/_base_client.py", line 1485, in request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/openai/_base_client.py", line 1538, in _request
return await self._retry_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/openai/_base_client.py", line 1607, in _retry_request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/openai/_base_client.py", line 1538, in _request
return await self._retry_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/openai/_base_client.py", line 1607, in _retry_request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "/home/langchain/langchain_chat_env/lib/python3.11/site-packages/openai/_base_client.py", line 1548, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
2024-04-25 14:34:24,445 - utils.py[line:40] - ERROR: APIConnectionError: Caught exception: Connection error.

Ptianyu added the bug label on Apr 25, 2024

dosubot (bot) commented on Apr 25, 2024

The error "APIConnectionError: Caught exception: Connection error" you're encountering is likely due to an incorrect configuration of the api_base_url in your setup. The URL '192.168.1.110/api/generate' is missing the required 'http://' or 'https://' protocol. It should be formatted as either 'http://192.168.1.110/api/generate' or 'https://192.168.1.110/api/generate'.

To address this issue, please update your configuration to include the correct protocol in the api_base_url. This adjustment should resolve the connection error you're experiencing with the ollamaAPI.
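
For reference, a corrected entry might look like the sketch below (a hypothetical model_config.py fragment: the key names are taken from the log above, while the ONLINE_LLM_MODEL section name, the 11434 port, and the use of Ollama's OpenAI-compatible /v1 endpoint instead of /api/generate are assumptions to verify against your own deployment):

# Hypothetical corrected entry in model_config.py -- adjust to your setup.
ONLINE_LLM_MODEL = {
    "ollama-api": {
        "api_base_url": "http://192.168.1.110:11434/v1",  # scheme is required; 11434 is Ollama's default port
        "api_key": "ollama",      # Ollama ignores the key, but the OpenAI client requires a non-empty value
        "model_name": "llama3",
        "online_api": True,
        "openai_proxy": "",
    },
}

Since the stack talks to the model through the OpenAI SDK, the base URL generally has to point at an OpenAI-compatible /v1 endpoint rather than Ollama's native /api/generate route.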


Ptianyu closed this as completed on Apr 25, 2024