Adding timeout to ChatOpenAI raises ConfigError #13124

Closed

andreas-thomik opened this issue Nov 9, 2023 · 4 comments
Labels
🤖:bug (Related to a bug, vulnerability, unexpected error with an existing feature) · Ɑ: models (Related to LLMs or chat model modules)

Comments

@andreas-thomik

System Info

Running langchain==0.0.332 with Python 3.11 and openai==1.2.0 on Windows.

Who can help?

No response

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction

from langchain.chat_models import ChatOpenAI

ChatOpenAI(
    model="gpt-3.5-turbo",
    request_timeout=5,
)

Expected behavior

Should run without errors.

Likely due to the newly introduced httpx.Timeout type in request_timeout (#12948). Always importing httpx and tiktoken (i.e. not conditionally under TYPE_CHECKING) fixes the issue; a minimal sketch of the failing pattern follows.
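
To illustrate, a minimal sketch of the failing pattern (illustrative class names, not LangChain's actual code; assumes pydantic v1 semantics, which langchain 0.0.332 builds on):

from typing import TYPE_CHECKING, Union

from pydantic import BaseModel  # pydantic v1

if TYPE_CHECKING:
    import httpx  # visible to type checkers only; never imported at runtime

class Broken(BaseModel):
    # "httpx.Timeout" stays an unresolved ForwardRef because httpx is
    # missing at runtime; pydantic defers the field, and instantiating
    # the model then raises ConfigError ('field "request_timeout" not
    # yet prepared so type is still a ForwardRef').
    request_timeout: Union[float, "httpx.Timeout", None] = None

    class Config:
        arbitrary_types_allowed = True

import httpx  # importing unconditionally lets pydantic resolve the annotation

class Fixed(BaseModel):
    request_timeout: Union[float, httpx.Timeout, None] = None

    class Config:
        arbitrary_types_allowed = True

Fixed(request_timeout=5)   # constructs fine
Broken(request_timeout=5)  # raises ConfigError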

dosubot bot commented Nov 9, 2023

🤖

Hello @andreas-thomik,

I'm Dosu, an AI bot here to assist you while we wait for a human maintainer. I'm here to help you solve bugs, answer your questions, and guide you as a contributor. I'm currently reviewing your issue regarding the request_timeout in ChatOpenAI. I will get back to you with a full answer shortly.

@dosubot added the Ɑ: models (Related to LLMs or chat model modules) and 🤖:bug (Related to a bug, vulnerability, unexpected error with an existing feature) labels Nov 9, 2023
dosubot bot commented Nov 9, 2023

🤖

The ConfigError you're encountering when adding a timeout to ChatOpenAI in LangChain 0.0.332 with Python 3.11 and openai 1.2.0 on Windows is likely because the timeout parameter is not being passed correctly to the underlying OpenAI API call.

In the ChatOpenAI class, the timeout parameter is defined as a class attribute:

request_timeout: Union[float, Tuple[float, float], httpx.Timeout, None] = Field(
    default=None, alias="timeout"
)

However, it seems that this parameter is not being used in the acompletion_with_retry function, which is responsible for making the actual API call:

async def acompletion_with_retry(
    llm: ChatOpenAI,
    run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> Any:
    """Use tenacity to retry the async completion call."""
    if is_openai_v1():
        return await llm.async_client.create(**kwargs)

    retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)

    @retry_decorator
    async def _completion_with_retry(**kwargs: Any) -> Any:
        # Use OpenAI's async api https://github.com/openai/openai-python#async-api
        return await llm.client.acreate(**kwargs)

    return await _completion_with_retry(**kwargs)

To resolve this issue, you need to pass the timeout parameter to the OpenAI API call. You can do this by modifying the _completion_with_retry function inside acompletion_with_retry to include the timeout parameter from the ChatOpenAI instance (llm):

@retry_decorator
async def _completion_with_retry(**kwargs: Any) -> Any:
    # Use OpenAI's async api https://github.com/openai/openai-python#async-api
    return await llm.client.acreate(timeout=llm.request_timeout, **kwargs)

This modification ensures that the timeout parameter is correctly passed to the OpenAI API call, which should resolve the ConfigError.

Please note that this is a workaround and not a permanent solution. The LangChain team should ideally fix this in a future release. You can refer to the source code for more details.

Let me know if this helps or if you need further assistance.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

baskaryan added a commit that referenced this issue Nov 9, 2023
@baskaryan (Collaborator)

Thanks for flagging @andreas-thomik. #13154 should fix this and will be released in 0.0.334; let me know if you still see it after that.
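
For anyone verifying the fix after upgrading, a quick sanity check (a sketch, assuming langchain>=0.0.334, openai>=1.x, and OPENAI_API_KEY set in the environment):

import httpx

from langchain.chat_models import ChatOpenAI

# Both a plain numeric timeout and an httpx.Timeout object should now
# construct without raising ConfigError.
ChatOpenAI(model="gpt-3.5-turbo", request_timeout=5)
ChatOpenAI(
    model="gpt-3.5-turbo",
    request_timeout=httpx.Timeout(10.0, connect=2.0),
)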

@andreas-thomik (Author)

Seems fixed to me, thank you!

xieqihui pushed a commit to xieqihui/langchain that referenced this issue Nov 21, 2023