handleLLMNewToken called with empty tokens when using function calling #1640

Closed
gramliu opened this issue Jun 14, 2023 · 3 comments

gramliu commented Jun 14, 2023

When calling ChatOpenAI.predictMessages with an OpenAI function call specified, the handleLLMNewToken callback is repeatedly invoked with empty tokens. This is likely because when OpenAI pushes new chunks to the HTTP event stream during a function call, the structure of the response data is slightly different: the delta carries a function_call object rather than content.

For normal text responses, each choice follows this schema:

{
  "delta": { "content": string },
  "index": number,
  "finish_reason": null
}

With function calling, however, it now looks like:

{
  "delta": {
    "function_call": {
      "name": string,
      "arguments": string
    }
  },
  "index": number,
  "finish_reason": null
}
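
For reference, a minimal sketch of a stream consumer that distinguishes the two delta shapes above (the ChoiceDelta type and handleChunk function are hypothetical, written only to illustrate why a token callback that reads delta.content sees empty strings during a function call):

interface ChoiceDelta {
  content?: string;
  function_call?: { name?: string; arguments?: string };
}

function handleChunk(delta: ChoiceDelta) {
  if (delta.content !== undefined) {
    // Plain text response: this is the case where handleLLMNewToken
    // receives a real token.
    process.stdout.write(delta.content);
  } else if (delta.function_call) {
    // Function-calling response: delta.content is absent, so a callback
    // that falls back to something like delta.content ?? "" fires with
    // empty strings.
    const { name, arguments: args } = delta.function_call;
    if (name) process.stdout.write(`calling ${name} with `);
    if (args) process.stdout.write(args);
  }
}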

Edit: This does seem to be the case: see

vladholubiev commented Jun 27, 2023

Here are some ideas:

Single callback

callbacks: [
  {
    handleLLMNewFunctionCall(function_call: { name: string; args: string }) {
      /*
      function_call = {
        name: "xxx",
        args: "yyy"
      }
      */
    },
  },
]

Separate callbacks for function name and argument

callbacks: [
  {
    handleLLMNewFunctionCallName(function_call_name: string) {
      /*
      function_call_name = "xxx"
      */
    },
    handleLLMNewFunctionCallArgs(function_call_args: string) {
      /*
      function_call_args = "yyy"
      */
    },
  },
]

From my observations, it looks like OpenAI always emits the entire function name in the stream as the first chunk, so the second option probably makes more sense.
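
To illustrate, a minimal sketch of how the second proposal could be consumed, assuming the hypothetical callback names above (they are not part of langchain's actual API) plus an end-of-stream hook like handleLLMEnd: the name arrives whole in the first chunk, while the arguments accumulate across chunks and only parse as JSON once the stream ends.

let functionName = "";
let functionArgs = "";

const callbacks = [
  {
    handleLLMNewFunctionCallName(function_call_name: string) {
      // Emitted once, in the first chunk of the stream.
      functionName = function_call_name;
    },
    handleLLMNewFunctionCallArgs(function_call_args: string) {
      // Arrives as partial JSON fragments; concatenate until the end.
      functionArgs += function_call_args;
    },
    handleLLMEnd() {
      // Only once the stream has finished is functionArgs a complete
      // JSON document.
      const parsedArgs = JSON.parse(functionArgs);
      console.log(`model wants to call ${functionName}`, parsedArgs);
    },
  },
];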

vladholubiev commented Jul 17, 2023

Looks like a PR was submitted: #2025

dosubot bot commented Oct 27, 2023

Hi, @gramliu! I'm Dosu, and I'm helping the langchainjs team manage their backlog. I wanted to let you know that we are marking this issue as stale.

Based on my understanding, the issue you reported is related to the handleLLMNewToken callback receiving empty tokens when using function calling in ChatOpenAI.predictMessages. It seems that there are two possible solutions suggested by vladholubiev: using a single callback or separate callbacks for the function name and argument. Additionally, there is a comment mentioning that a pull request has been submitted to address the issue.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the langchainjs repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your contribution to langchainjs!

dosubot bot added the stale label on Oct 27, 2023 and closed this as not planned on Nov 3, 2023.