
Getting invalid URL post after updating langchain from 0.0.200 to 0.0.267 #10044

Closed
salexln opened this issue Aug 31, 2023 · 2 comments

Labels: 🤖:bug (Related to a bug, vulnerability, unexpected error with an existing feature), Ɑ: models (Related to LLMs or chat model modules)

Comments

salexln commented Aug 31, 2023

System Info

langchain - 0.0.267
openai - 0.27.7
MacOS 13.4.1 (22F82)

Who can help?

No response

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction

This code worked when I used langchain 0.0.200 (the call goes to an Azure OpenAI endpoint):

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

llm = ChatOpenAI(model_name=model_config.model_engine,
                 deployment_id=model_config.model_engine,
                 temperature=model_config.temperature,
                 max_tokens=model_config.max_tokens_for_request,
                 top_p=model_config.top_p,
                 openai_api_key=endpoint_config.api_key.secret,
                 api_base=endpoint_config.api_base,
                 api_type=endpoint_config.api_type,
                 api_version=endpoint_config.api_version)

chat_prompt = [SystemMessage(...), HumanMessage(...)]
response = llm(chat_prompt)

After updating langchain to 0.0.267, I get this error:

openai.error.InvalidRequestError: Invalid URL (POST /v1/openai/deployments/gpt-35-turbo/chat/completions)

The endpoint itself is working (after I reverted to 0.0.200, everything worked again).
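
For comparison, since the call targets an Azure OpenAI deployment, LangChain also ships a dedicated AzureChatOpenAI class in this version range. The following is a minimal sketch only, not a confirmed fix: it reuses the config attribute names from the snippet above and assumes the deployment is a chat model.

from langchain.chat_models import AzureChatOpenAI

# Sketch: AzureChatOpenAI builds the Azure-style
# /openai/deployments/<name>/chat/completions request path itself
llm = AzureChatOpenAI(deployment_name=model_config.model_engine,
                      openai_api_base=endpoint_config.api_base,
                      openai_api_version=endpoint_config.api_version,
                      openai_api_key=endpoint_config.api_key.secret,
                      temperature=model_config.temperature,
                      max_tokens=model_config.max_tokens_for_request)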

Expected behavior

I expect to get an answer from the LLM.

@dosubot bot added the Ɑ: models and 🤖:bug labels on Aug 31, 2023
dosubot bot commented Aug 31, 2023

🤖

Hello,

Thank you for providing detailed information about the issue you're facing. It seems like the problem might be related to changes made in the ChatOpenAI class between LangChain versions 0.0.200 and 0.0.267.

In newer versions of LangChain, the ChatOpenAI class gained asynchronous counterparts to its core methods: _generate and _stream are mirrored by _agenerate and _astream, and the completion_with_retry call becomes acompletion_with_retry, which is awaited.

Here are the relevant code changes:

Old version:

def _generate(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    stream: Optional[bool] = None,
    **kwargs: Any,
) -> ChatResult:
    ...
    response = self.completion_with_retry(
        messages=message_dicts, run_manager=run_manager, **params
    )
    ...
    if run_manager:
        run_manager.on_llm_new_token(chunk.content, chunk=chunk)

New version:

async def _agenerate(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    stream: Optional[bool] = None,
    **kwargs: Any,
) -> ChatResult:
    ...
    response = await acompletion_with_retry(
        self, messages=message_dicts, run_manager=run_manager, **params
    )
    ...
    if run_manager:
        await run_manager.on_llm_new_token(token=chunk.content, chunk=chunk)

You can find these changes in the libs/langchain/langchain/chat_models/openai.py file.

This change to asynchronous methods might be causing the issue you're experiencing. You might need to adjust your code to handle these asynchronous methods properly.
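
As an illustration only, here is a minimal sketch of driving the async path through the public agenerate wrapper (the message contents are placeholders):

import asyncio

from langchain.schema import HumanMessage, SystemMessage

async def ask(llm):
    # agenerate takes a batch (a list of message lists) and returns an LLMResult
    result = await llm.agenerate([[SystemMessage(content="..."),
                                   HumanMessage(content="...")]])
    # each batch entry yields a list of generations; take the first reply
    return result.generations[0][0].text

answer = asyncio.run(ask(llm))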

As for the "Invalid URL" error, it could be due to a change in the way the AzureMLEndpointClient is initialized or how the endpoint URL is being passed to it. Please ensure that the endpoint URL is correctly set in either the AzureMLOnlineEndpoint constructor or the AZUREML_ENDPOINT_URL environment variable. The URL should be a valid Azure ML endpoint URL, which typically follows the format https://<your-endpoint>.<your_region>.inference.ml.azure.com/score.
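
As a sketch of those two configuration routes (the URL below is a placeholder, and the exact constructor fields may differ by version; a content_formatter is typically also required):

import os

from langchain.llms.azureml_endpoint import AzureMLOnlineEndpoint

# Route 1: pass the endpoint URL and key explicitly to the constructor
llm = AzureMLOnlineEndpoint(
    endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score",
    endpoint_api_key="...",
)

# Route 2: supply the same value through the environment variable instead
os.environ["AZUREML_ENDPOINT_URL"] = "https://<your-endpoint>.<your_region>.inference.ml.azure.com/score"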

I hope this helps! If you have any other questions or need further clarification, please let me know.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.


dosubot bot commented Nov 30, 2023

Hi, @salexln! I'm Dosu, and I'm here to help the LangChain team manage our backlog. I wanted to let you know that we are marking this issue as stale.

Based on the information provided, it seems that you encountered an "Invalid URL" error after updating langchain from version 0.0.200 to 0.0.267. Reverting back to version 0.0.200 resolved the issue. It appears that the problem might be related to changes made in the ChatOpenAI class between versions 0.0.200 and 0.0.267. I suggested adjusting the code to handle asynchronous methods properly and mentioned that the error could be due to a change in how the endpoint URL is being passed to the AzureMLEndpointClient.

Before we proceed, we would like to confirm if this issue is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days.

Thank you for your understanding and cooperation. If you have any further questions or concerns, please don't hesitate to ask.

Best regards,
Dosu

@dosubot bot added the stale label (issue has not had recent activity or appears to be solved) on Nov 30, 2023
@dosubot bot closed this as not planned (won't fix, can't repro, duplicate, stale) on Dec 7, 2023
@dosubot bot removed the stale label on Dec 7, 2023