
ainvoke takes a long time? #21356

Open
5 tasks done
vsvn-ThuyTQ opened this issue May 7, 2024 · 0 comments
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature 🔌: openai Primarily related to OpenAI integrations

Comments

@vsvn-ThuyTQ

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

import os
from typing import Any, Dict, List
from uuid import UUID

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult
from langchain_openai import AzureChatOpenAI

# Request, logger, config, and Constants come from the reporter's application code.


class PricingCalcHandler(BaseCallbackHandler):
    def __init__(self, request: Request = None) -> None:
        super().__init__()
        self.request = request

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        logger.debug(f"on_llm_start {serialized}")

    def on_llm_end(
        self, llm_result: LLMResult, *, run_id: UUID, parent_run_id: UUID, **kwargs: Any
    ) -> Any:
        try:
            logger.debug(f"run_id {run_id} llm_result {llm_result}")
            if self.request and llm_result:
                logger.info(f"run id {run_id} save pricing!")
        except Exception as e:
            logger.error(e)


def llm_with_callback(request: Request = None):
    pricing_handler = PricingCalcHandler(request)
    return AzureChatOpenAI(
        azure_deployment=os.environ.get('AZURE_OPENAI_DEPLOYMENT'),
        azure_endpoint=os.environ.get('AZURE_OPENAI_ENDPOINT'),
        api_key=os.environ.get('AZURE_OPENAI_KEY'),
        api_version="2023-09-01-preview",
        cache=config.USE_CACHE_LLM,
        model_kwargs={"seed": Constants.GPT_SEED},
        max_retries=3,
        temperature=0,
        callbacks=[pricing_handler],
    )


llm = llm_with_callback(request=self._request)
self.feature_prompt = self.feature_prompt_template.format(content=bookmarks_as_string)
features_raw = await llm.ainvoke(self.feature_prompt)
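One way to turn an indefinite hang into a fast, debuggable failure is to wrap the awaited call in `asyncio.wait_for` so it is cancelled after a deadline. This is a minimal sketch using plain asyncio; `slow_model_call` is a hypothetical stand-in for `llm.ainvoke`, and the timeout value is illustrative.

```python
import asyncio


async def slow_model_call(prompt: str) -> str:
    # Stand-in for llm.ainvoke: simulates a call that never returns in time.
    await asyncio.sleep(3600)
    return "never reached"


async def invoke_with_timeout(prompt: str, timeout_s: float) -> str:
    try:
        # Cancel the awaited call if no response arrives within timeout_s seconds.
        return await asyncio.wait_for(slow_model_call(prompt), timeout=timeout_s)
    except asyncio.TimeoutError:
        return "TIMEOUT"


result = asyncio.run(invoke_with_timeout("hello", timeout_s=0.1))
print(result)  # TIMEOUT
```

With a short deadline like this, a stuck request surfaces as a `TimeoutError` within seconds instead of silently blocking for an hour, which makes it much easier to tell a network/endpoint problem apart from a callback problem.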

Error Message and Stack Trace (if applicable)

No error is raised; I waited for an hour without receiving a response from GPT.

Description

I don't understand why, when I run `ainvoke`, I wait for an hour without receiving a response from GPT. When I debugged, I saw that execution only reaches `on_llm_start` and then stays there for about an hour with no sign of stopping.
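One thing worth checking: `PricingCalcHandler` subclasses the synchronous `BaseCallbackHandler`, so its hooks run on the event-loop thread during `ainvoke`, and any blocking work inside them stalls every other coroutine. The sketch below (plain asyncio, no LangChain; the function names are illustrative) shows how a single blocking call inside an async flow delays an unrelated task:

```python
import asyncio
import time


async def blocking_hook() -> None:
    # A synchronous sleep inside a coroutine blocks the whole event loop,
    # just as heavy synchronous work in a callback hook would during ainvoke.
    time.sleep(0.5)


async def unrelated_task() -> float:
    start = time.monotonic()
    await asyncio.sleep(0.01)  # should resume after ~10 ms on an idle loop
    return time.monotonic() - start


async def main() -> float:
    task = asyncio.create_task(unrelated_task())
    await asyncio.sleep(0)  # let unrelated_task start and reach its await
    await blocking_hook()   # blocks the loop; the 10 ms timer cannot fire
    return await task


delay = asyncio.run(main())
print(f"unrelated task delayed by {delay:.2f}s")  # delayed ~0.5 s by the blocking call
```

If the hang comes from the callback rather than the network, moving the work into `AsyncCallbackHandler` hooks (or keeping the sync hooks trivially cheap) would be the usual fix.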

System Info

Windows
langchain

@dosubot dosubot bot added 🔌: openai Primarily related to OpenAI integrations 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels May 7, 2024