
APIConnectionError: Error communicating with OpenAI. #5296

Closed · 4 of 14 tasks
AvikantSrivastava opened this issue May 26, 2023 · 12 comments

@AvikantSrivastava

System Info

python 3.11

fastapi==0.95.1
langchain==0.0.180
pydantic==1.10.7
uvicorn==0.21.1
openai==0.27.4

Who can help?

@agola11
@hwchase17

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction

I am trying to create a streaming endpoint in FastAPI; the files are below.

main.py

from fastapi import FastAPI
from fastapi.responses import StreamingResponse  # needed for response_class below

from src.chat_stream import ChatOpenAIStreamingResponse, send_message, StreamRequest

app = FastAPI()


@app.post("/chat_streaming", response_class=StreamingResponse)
async def chat(body: StreamRequest):
    return ChatOpenAIStreamingResponse(send_message(body.message), media_type="text/event-stream")

src/chat_stream.py

from typing import Any, Awaitable, Callable, Iterator, Optional, Union

from fastapi.responses import StreamingResponse
from langchain.callbacks.base import AsyncCallbackHandler
from langchain.callbacks.manager import AsyncCallbackManager
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
from pydantic import BaseModel
from starlette.types import Send

# A Sender is a coroutine that pushes one chunk (str or bytes) to the client.
Sender = Callable[[Union[str, bytes]], Awaitable[None]]


class EmptyIterator(Iterator[Union[str, bytes]]):
    def __iter__(self):
        return self

    def __next__(self):
        raise StopIteration


class AsyncStreamCallbackHandler(AsyncCallbackHandler):
    """Callback handler for streaming, inheritance from AsyncCallbackHandler."""

    def __init__(self, send: Sender):
        super().__init__()
        self.send = send

    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Rewrite on_llm_new_token to send token to client."""
        await self.send(f"data: {token}\n\n")


class ChatOpenAIStreamingResponse(StreamingResponse):
    """Streaming response for openai chat model, inheritance from StreamingResponse."""

    def __init__(
        self,
        generate: Callable[[Sender], Awaitable[None]],
        status_code: int = 200,
        media_type: Optional[str] = None,
    ) -> None:
        super().__init__(
            content=EmptyIterator(), status_code=status_code, media_type=media_type
        )
        self.generate = generate

    async def stream_response(self, send: Send) -> None:
        """Rewrite stream_response to send response to client."""
        await send(
            {
                "type": "http.response.start",
                "status": self.status_code,
                "headers": self.raw_headers,
            }
        )

        async def send_chunk(chunk: Union[str, bytes]):
            if not isinstance(chunk, bytes):
                chunk = chunk.encode(self.charset)
            await send({"type": "http.response.body", "body": chunk, "more_body": True})

        # send body to client
        await self.generate(send_chunk)

        # send empty body to client to close connection
        await send({"type": "http.response.body", "body": b"", "more_body": False})


def send_message(message: str) -> Callable[[Sender], Awaitable[None]]:
    async def generate(send: Sender):
        model = ChatOpenAI(
            streaming=True,
            verbose=True,
            callback_manager=AsyncCallbackManager([AsyncStreamCallbackHandler(send)]),
        )
        await model.agenerate(messages=[[HumanMessage(content=message)]])

    return generate


class StreamRequest(BaseModel):
    """Request body for streaming."""

    message: str
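
For reference, a minimal client to exercise the endpoint could look like this (a sketch; it assumes the app is served by uvicorn on localhost:8000):

import asyncio

import aiohttp


async def main():
    # Post a message and print the SSE stream chunk by chunk as it arrives.
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://localhost:8000/chat_streaming",
            json={"message": "Hello!"},
        ) as resp:
            async for chunk in resp.content.iter_any():
                print(chunk.decode(), end="", flush=True)


asyncio.run(main())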

Expected behavior

The endpoint should stream the response from the LLM chain; instead I am getting this error:

Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 2.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 8.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 16.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 980, in _wrap_create_connection
    return await self._loop.create_connection(*args, **kwargs)  # type: ignore[return-value]  # noqa
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 1098, in create_connection
    transport, protocol = await self._create_connection_transport(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 1131, in _create_connection_transport
    await waiter
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/sslproto.py", line 577, in _on_handshake_complete
    raise handshake_exc
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/sslproto.py", line 559, in _do_handshake
    self._sslobj.do_handshake()
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 979, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Project/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 588, in arequest_raw
    result = await session.request(**request_kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/aiohttp/client.py", line 536, in _request
    conn = await self._connector.connect(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 540, in connect
    proto = await self._create_connection(req, traces, timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 901, in _create_connection
    _, proto = await self._create_direct_connection(req, traces, timeout)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 1206, in _create_direct_connection
    raise last_exc
  File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 1175, in _create_direct_connection
    transp, proto = await self._wrap_create_connection(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 982, in _wrap_create_connection
    raise ClientConnectorCertificateError(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host api.openai.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Project/venv/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 429, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/fastapi/applications.py", line 276, in __call__
    await super().__call__(scope, receive, send)
  File "/Project/venv/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/Project/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/Project/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/Project/venv/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/Project/venv/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/Project/venv/lib/python3.11/site-packages/starlette/routing.py", line 69, in app
    await response(scope, receive, send)
  File "/Project/venv/lib/python3.11/site-packages/starlette/responses.py", line 270, in __call__
    async with anyio.create_task_group() as task_group:
  File "/Project/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 662, in __aexit__
    raise exceptions[0]
  File "/Project/venv/lib/python3.11/site-packages/starlette/responses.py", line 273, in wrap
    await func()
  File "/Project/src/app.py", line 67, in stream_response
    await self.generate(send_chunk)
  File "/Project/src/app.py", line 80, in generate
    await model.agenerate(messages=[[HumanMessage(content=message)]])
  File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 63, in agenerate
    results = await asyncio.gather(
              ^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 297, in _agenerate
    async for stream_resp in await acompletion_with_retry(
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 63, in acompletion_with_retry
    return await _completion_with_retry(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
    return await fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/tenacity/__init__.py", line 325, in iter
    raise retry_exc.reraise()
          ^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/tenacity/__init__.py", line 158, in reraise
    raise self.last_attempt.result()
          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/Project/venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 50, in __call__
    result = await fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 61, in _completion_with_retry
    return await llm.client.acreate(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(
                           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 300, in arequest
    result = await self.arequest_raw(
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Project/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 605, in arequest_raw
    raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI
@jhrsya

jhrsya commented Jun 26, 2023

I got this bug too. Could you tell me how to fix it?

@agajdosi

I've only hit this error with LangChain's async functions, and the error itself does not say much (I was on a train, so I thought my connection was just terrible). Later I set max_retries to a lower number: chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.9, request_timeout=600, max_retries=3), and then errors about wrong SSL certificates were printed out. Even a minimal request via aiohttp was not successful.

This helped: https://stackoverflow.com/questions/69605350/aiohttp-raises-an-certificate-error-with-some-sites-that-browser-opens-normally
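
For reference, the fix from that answer boils down to handing aiohttp an SSL context built from certifi's CA bundle. A sketch for openai<1.0, where openai.aiosession lets you inject your own aiohttp session:

import ssl

import aiohttp
import certifi
import openai


async def use_certifi_session():
    # Build an SSL context that trusts certifi's CA bundle instead of the
    # (possibly incomplete) system store, and hand openai an aiohttp session
    # that uses it for all async requests.
    ssl_context = ssl.create_default_context(cafile=certifi.where())
    session = aiohttp.ClientSession(connector=aiohttp.TCPConnector(ssl=ssl_context))
    openai.aiosession.set(session)
    # ... run the LangChain / openai async calls here ...
    await session.close()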

@luokerenx4

I hit the same problem and found the cause was my VPN. Sometimes VPN rules only cover part of OpenAI's endpoints.
If you can't get a working VPN, try a web IDE like Colab.

@Avinash-Raj
Contributor

This happens mostly on Apple silicon machines. Fixed it by running:

bash /Applications/Python*/Install\ Certificates.command

https://stackoverflow.com/a/76519628/3297613
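
To check which CA bundle the interpreter actually trusts before and after running it, a quick sketch:

import ssl

import certifi

# Where OpenSSL looks for CA certificates for this interpreter, versus where
# certifi's bundled CAs live; on a fresh python.org install the OpenSSL path
# is often unpopulated until Install Certificates.command links it to certifi.
print(ssl.get_default_verify_paths())
print(certifi.where())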

@sparklog

This happens mostly on Apple silicon machines. Fixed it by running:

bash /Applications/Python*/Install\ Certificates.command

https://stackoverflow.com/a/76519628/3297613

Thanks, it works!

@vcjj

vcjj commented Jul 23, 2023

I get the same issue when using agent.arun(). I'm on Windows, so the macOS fix does not apply to me.
Here is my demo:

import asyncio
import os
from typing import Any, Dict, List

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks.base import AsyncCallbackHandler, BaseCallbackHandler
from langchain.chains.conversation.memory import ConversationStringBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.schema import LLMResult

os.environ["OPENAI_API_KEY"] = 'xxx'

class MyCustomSyncHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"Sync handler being called in a `thread_pool_executor`: token: {token}")


class MyCustomAsyncHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""

    async def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        """Run when chain starts running."""
        print("zzzz....")
        await asyncio.sleep(0.3)
        class_name = serialized["name"]
        print("Hi! I just woke up. Your llm is starting")

    async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when chain ends running."""
        print("zzzz....")
        print("Hi! I just woke up. Your llm is ending")



# llm = OpenAI(temperature=0)
handler1 = MyCustomAsyncHandler()
llm = ChatOpenAI(
        temperature=0,
        verbose=True,
        # callbacks=[handler1],
    )
memory = ConversationStringBufferMemory(memory_key="chat_history", output_key='output')
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, verbose=True, memory=memory, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION)


async def run():
    arun = await agent.arun("what is 1+1")
    print(arun)

asyncio.run(run())

Same error:

Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
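
I guess the Windows equivalent of the macOS certificate fix would be pointing OpenSSL at certifi's CA bundle before anything opens a connection; something like this (untested sketch):

import os

import certifi

# Untested sketch: make the default SSL context (used by aiohttp, and thus by
# openai's async client) trust certifi's CA bundle. Must run before any request.
os.environ["SSL_CERT_FILE"] = certifi.where()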

@Kevin-free

This happens mostly on Apple silicon machines. Fixed it by running:

bash /Applications/Python*/Install\ Certificates.command

https://stackoverflow.com/a/76519628/3297613

I ran that and got:

bash /Applications/Python*/Install\ Certificates.command
 -- pip install --upgrade certifi
/Library/Frameworks/Python.framework/Versions/3.8/bin/python3.8: No module named pip.main; 'pip' is a package and cannot be directly executed
Traceback (most recent call last):
  File "<stdin>", line 44, in <module>
  File "<stdin>", line 24, in main
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/subprocess.py", line 364, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/Library/Frameworks/Python.framework/Versions/3.8/bin/python3.8', '-E', '-s', '-m', 'pip', 'install', '--upgrade', 'certifi']' returned non-zero exit status 1.

Is this expected?

@VaibhavSingh98

Was this issue resolved? I'm also using FastAPI and I face this issue after some time: initially I receive the responses, and then after some idle time I get the same error.

@pai4451

pai4451 commented Sep 27, 2023

Was this issue resolved? I'm also using FastAPI and I face this issue after some time: initially I receive the responses, and then after some idle time I get the same error.

Hi, did you fix it? I am also encountering the same error on my Linux machine.

@imarquart

I am having this issue with LangChain, FastAPI, and StreamingResponse in Docker. I am using LCEL, including standard Runnables and custom Runnables.
The issue occurs both when generating via stream() and astream().

The issue does not occur when calling the LCEL chain directly - even on the same machine.

Any insights would be useful.

@pai4451

pai4451 commented Nov 8, 2023

I am having this issue with LangChain, FastAPI, and StreamingResponse in Docker. I am using LCEL, including standard Runnables and custom Runnables. The issue occurs both when generating via stream() and astream().

The issue does not occur when calling the LCEL chain directly - even on the same machine.

Any insights would be useful.

@imarquart In my case this was caused by an SSL issue; I'm behind my company's firewall. Maybe you can have a look at that.
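
Something along these lines should work, layering the corporate root CA on top of certifi's bundle (a sketch; the CA path below is just a placeholder for whatever your IT provides):

import ssl

import aiohttp
import certifi

# Sketch: extend certifi's trust store with the corporate root CA so the
# firewall/proxy's TLS interception no longer fails verification, then pass
# the connector to the aiohttp session used for the OpenAI calls.
ssl_context = ssl.create_default_context(cafile=certifi.where())
ssl_context.load_verify_locations(cafile="/etc/ssl/certs/corp-root-ca.pem")  # placeholder path
connector = aiohttp.TCPConnector(ssl=ssl_context)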


dosubot bot commented Feb 8, 2024

Hi, @AvikantSrivastava,

I'm helping the LangChain team manage their backlog and am marking this issue as stale. The issue you raised pertains to a streaming endpoint in FastAPI encountering an APIConnectionError when communicating with OpenAI due to a certificate verification failure. Several potential solutions have been shared, including adjusting max_retries, using a Web IDE like Colab, and resolving SSL issues. Users have reported experiencing the issue on different platforms, such as Apple silicon machines and Linux. The issue remains unresolved, and it has garnered attention from multiple users seeking assistance and sharing their experiences with similar problems.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days. Thank you!

@dosubot added the "stale" label Feb 8, 2024
@dosubot closed this as not planned (won't fix, can't repro, duplicate, stale) Feb 15, 2024
@dosubot removed the "stale" label Feb 15, 2024