
ERROR: RemoteProtocolError: API communication error: peer closed connection without sending complete message body (incomplete chunked read) #3438

Closed
A-runaaaa opened this issue Mar 20, 2024 · 7 comments
Assignees
Labels
bug Something isn't working stale

Comments

@A-runaaaa

return await self.app(scope, receive, send)
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in call
await super().call(scope, receive, send)
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/starlette/applications.py", line 119, in call
await self.middleware_stack(scope, receive, send)
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in call
raise exc
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in call
await self.app(scope, receive, _send)
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in call
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/starlette/routing.py", line 762, in call
await self.middleware_stack(scope, receive, send)
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/starlette/routing.py", line 782, in app
await route.handle(scope, receive, send)
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/starlette/routing.py", line 75, in app
await response(scope, receive, send)
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/sse_starlette/sse.py", line 255, in call
async with anyio.create_task_group() as task_group:
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in aexit
raise exceptions[0]
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/sse_starlette/sse.py", line 258, in wrap
await func()
File "/home/guanjr/miniconda3/envs/KQAP/lib/python3.11/site-packages/sse_starlette/sse.py", line 245, in stream_response
async for data in self.body_iterator:
File "/home/guanjr/Langchain-Chatchat-master_2024_2/server/chat/file_chat.py", line 138, in knowledge_base_chat_iterator
input_msg = History(role="user", content=prompt_template).to_msg_template(False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.init
pydantic.error_wrappers.ValidationError: 1 validation error for History
content
none is not an allowed value (type=type_error.none.not_allowed)
2024-03-20 03:03:43,820 - utils.py[line:197] - ERROR: RemoteProtocolError: API communication error: peer closed connection without sending complete message body (incomplete chunked read)
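The pydantic ValidationError at the bottom of the trace is the actual server-side failure: History is constructed with content=None, the SSE stream dies partway through, and the client then reports the incomplete chunked read. A minimal guard, assuming prompt_template comes from a template lookup that can return None (the variable names mirror the traceback; the fallback value is purely illustrative):

# Hypothetical guard for server/chat/file_chat.py (names taken from the
# traceback; the fallback template is an assumption, not the project's
# actual default).
if prompt_template is None:
    prompt_template = "{{ input }}"  # fall back to echoing the raw input
input_msg = History(role="user", content=prompt_template).to_msg_template(False)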

@A-runaaaa A-runaaaa added the bug Something isn't working label Mar 20, 2024

dosubot bot commented Mar 20, 2024

Hey @A-runaaaa, good to see you again! Hopefully we can work through this new challenge together.

The RemoteProtocolError you're encountering during API communication in the Langchain-Chatchat application could be due to several factors. Here are some potential causes:

  1. SSL/TLS Configuration Issues: If you're running the API with SSL/TLS encryption and the ssl_keyfile and ssl_certfile are misconfigured, corrupted, or not properly recognized, it could lead to protocol errors.

  2. Uvicorn and HTTP/2: The script uses Uvicorn as the ASGI server. If there's an attempt to use HTTP/2 without proper support or configuration in Uvicorn or the client, it could result in protocol errors.

  3. Improper Request Handling: If the API or the underlying llm_api_stale and api modules improperly handle HTTP requests or responses, it could trigger protocol errors.

  4. Network Issues or Misconfigurations: Network issues such as incorrect proxy settings, firewalls blocking certain types of traffic, or misconfigured DNS settings could lead to protocol errors.

  5. Uvicorn Version Compatibility: If there's a version incompatibility between Uvicorn and other dependencies or the Python version being used, it might lead to unexpected behavior, including protocol errors.

  6. Concurrency and Worker Configuration: If there's a misconfiguration in how workers are set up or how they handle concurrent requests, it could potentially lead to protocol errors.

To diagnose a RemoteProtocolError, start by checking the SSL/TLS configuration (if TLS is in use), confirm that Uvicorn and all dependencies are correctly configured and up to date, examine the request and response handling logic for potential issues, and review the network configuration and server logs for more detailed error messages.

As for the server/api_allinone_stale.py script: it launches the application's API server, with options for default or multiple non-default models, multi-GPU setups, and SSL. It uses uvicorn to run the server, which can listen on a specified host and port and optionally use SSL certificates for HTTPS (see the sketch below).
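A minimal sketch of that launch pattern; the ssl_keyfile and ssl_certfile keyword arguments are real uvicorn options, but the app import path, host, and port here are placeholder assumptions, not the project's actual values:

import uvicorn

# Launch sketch (placeholder app path, host, and port).
# Passing None for the SSL arguments simply disables HTTPS.
uvicorn.run(
    "server.api:app",      # assumed import path to the FastAPI app
    host="0.0.0.0",
    port=7861,
    ssl_keyfile=None,      # e.g. "/path/to/key.pem" to enable HTTPS
    ssl_certfile=None,     # e.g. "/path/to/cert.pem"
)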

The fastapi library in the Langchain-Chatchat application is used to create and configure a web server that defines and handles various API endpoints for chat functionalities, model management, server state inquiries, and knowledge base management.
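Because the traceback runs through sse_starlette, the relevant endpoint shape is a streaming SSE response. A stripped-down illustration (the route path and generator body are assumptions, not the project's actual code) shows why a server-side exception surfaces to the client as an incomplete chunked read:

from fastapi import FastAPI
from sse_starlette.sse import EventSourceResponse

app = FastAPI()

@app.get("/chat/stream")   # illustrative route, not the project's real path
async def chat_stream():
    async def event_generator():
        # If this iterator raises (for example, the pydantic ValidationError
        # in the traceback above), the chunked SSE response stops partway
        # through, and the client reports "incomplete chunked read".
        for token in ("hello", "world"):
            yield {"data": token}
    return EventSourceResponse(event_generator())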

I hope this information helps! If you have any more questions or need further clarification, feel free to ask.


@A-runaaaa
Author

If this is a proxy problem, how should I fix it? @dosu-bot


dosubot bot commented Mar 20, 2024

Based on the context provided, it seems like you're encountering a RemoteProtocolError during API communication in the Langchain-Chatchat application. This error is often related to network issues, such as a connection being closed before the complete response is received. If you suspect this might be due to a proxy issue, here are some steps you can take to resolve it:

  1. Increase Timeout Settings: If the proxy or server has a timeout, make sure it allows enough time for the request to complete. This error can occur if the proxy closes the connection due to a timeout before the server sends the complete response. (A client-side configuration sketch covering this item and the next follows the list.)

  2. Configure Keep-Alive: Ensure that the proxy and the server support and are configured to use HTTP Keep-Alive. This keeps the connection open for multiple requests, which can help prevent the connection from closing prematurely.

  3. Chunk Size Adjustment: If the application is sending data in chunks (as suggested by "incomplete chunked read"), you might want to adjust the CHUNK_SIZE in your application. Either increasing or decreasing the chunk size could help, depending on the specific limitations or behavior of the proxy.

  4. Proxy Configuration: Review the proxy configuration for any settings that might automatically close connections or interfere with HTTP chunked responses. Ensure that the proxy is configured to handle chunked transfers correctly.

  5. Error Handling and Retries: Implement error handling in your application to catch this specific error and attempt a retry, possibly with a backoff strategy. This can be particularly useful if the error is transient.

  6. Inspect Proxy Logs: Check the logs of the proxy to see if there are any additional details about why the connection was closed. This can provide clues on what might need to be changed in either the proxy settings or your application.

  7. Update Dependencies: Ensure that all dependencies, including any libraries used for HTTP communication or proxy interaction, are up to date. Sometimes, bugs related to handling of connections or chunked encoding are fixed in newer versions.
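As referenced in items 1 and 2, here is a minimal client-side sketch using httpx, the library that raises RemoteProtocolError in your log; the timeout and connection-limit values are illustrative assumptions to be tuned against the proxy's real limits:

import httpx

# Illustrative values; tune against the proxy's actual limits.
timeout = httpx.Timeout(300.0, connect=10.0)   # long read window for streamed replies
limits = httpx.Limits(
    max_keepalive_connections=5,               # keep connections open for reuse
    max_connections=10,
)
client = httpx.AsyncClient(timeout=timeout, limits=limits)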

Given the context of your application, specifically its use of FastAPI and asynchronous operations, focus in particular on how the proxy handles asynchronous requests and chunked responses. Adjusting the CHUNK_SIZE in your file_chat.py or adding more robust error handling around the asynchronous operations could be effective starting points (see the retry sketch below).
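A sketch of item 5's retry-with-backoff idea around a streamed request; the URL and payload are placeholders, and retrying like this is only safe if the request is idempotent:

import asyncio
import httpx

async def stream_with_retry(client: httpx.AsyncClient, url: str, payload: dict,
                            max_attempts: int = 3) -> str:
    # Retry the whole stream on RemoteProtocolError with exponential backoff.
    for attempt in range(max_attempts):
        try:
            chunks = []
            async with client.stream("POST", url, json=payload) as response:
                async for chunk in response.aiter_text():
                    chunks.append(chunk)
            return "".join(chunks)
        except httpx.RemoteProtocolError:
            if attempt == max_attempts - 1:
                raise                          # give up after the last attempt
            await asyncio.sleep(2 ** attempt)  # 1s, 2s, 4s ... backoff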

As for the code around line 138 of "/home/guanjr/Langchain-Chatchat-master_2024_2/server/chat/file_chat.py" where the error surfaces: the surrounding block creates an LLMChain object with a prompt constructed from the chat history and the input message, then asynchronously calls the chain's acall method with the context and question as parameters. This happens inside the asynchronous generator knowledge_base_chat_iterator, which drives the chat interaction with the knowledge base and generates the streamed response.

Here's the relevant code snippet for context:

chain = LLMChain(prompt=chat_prompt, llm=model)

# Begin a task that runs in the background.
task = asyncio.create_task(wrap_done(
    chain.acall({"context": context, "question": query}),
    callback.done,
))

This snippet initializes an LLMChain with prompt set to chat_prompt (built from the chat history and the input message) and llm set to model (a language-model instance). It then starts an asynchronous task that calls the chain's acall method with a dictionary containing the context and the question; wrap_done executes a callback when the task completes.
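For context, wrap_done appears to be a small helper that awaits a coroutine and signals an asyncio.Event when it finishes. A sketch of that shape, reconstructed from how it is called here rather than copied from the project's source:

import asyncio
import logging
from typing import Awaitable

async def wrap_done(fn: Awaitable, event: asyncio.Event):
    # Await the wrapped coroutine, log any failure, and always set the
    # event so the consuming stream knows the task has finished.
    try:
        await fn
    except Exception as e:
        logging.exception("wrap_done caught: %s", e)
    finally:
        event.set()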


If you need further assistance, please provide more details about the specific error message and stack trace, as well as any relevant code snippets or configurations.


@zRzRzRzRzRzRzR zRzRzRzRzRzRzR self-assigned this Apr 3, 2024
@zRzRzRzRzRzRzR
Collaborator

Your dependencies weren't installed correctly, right?

@A-runaaaa
Copy link
Author

A-runaaaa commented Apr 7, 2024 via email


github-actions bot commented May 7, 2024

This issue has been marked as stale because it has had no activity for more than 30 days.

@github-actions github-actions bot added the stale label May 7, 2024

This issue has been automatically closed because there was no activity for more than 14 days after it was marked stale.
