Issue: When using Azure OpenAI APIs, the results contain stop sequence '<|im_end|>' in the output. How to eliminate it? #4246
Comments
Same issue here! No answer so far...
I tried to reproduce this, and according to my tests it depends on the prompt used.
A short, simple prompt does not return any '<|im_end|>' in the output.
Now let's use a more complicated prompt:
This kind of prompt is generating the '<|im_end|>' token at the end of the completion.
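A minimal sketch of this kind of test, assuming the completion-style 'AzureOpenAI' class from the LangChain version of that time; the endpoint, deployment name, and prompts below are placeholders rather than the ones from the original comment:

```python
import os
from langchain.llms import AzureOpenAI

# Placeholder Azure settings -- replace with your own resource, deployment and key.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "<your-api-key>"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"

llm = AzureOpenAI(deployment_name="<your-deployment>", temperature=0)

# Short prompt: the completion comes back clean.
print(llm("Say hello."))

# Longer, instruction-style prompt: the raw '<|im_end|>' stop token can show up
# at the end of the completion when the deployment behind the completions
# endpoint is actually a chat model.
long_prompt = (
    "You are a helpful assistant.\n"
    "Answer the question below as concisely as possible.\n"
    "Question: What is the capital of France?\n"
    "Answer:"
)
print(llm(long_prompt))
```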
@zioproto can you try to take away the
I am still getting the '<|im_end|>' token in the output.
@vnktsh can you share your prompt, to understand if we can find anything similar to my prompt?
@pieroit I think this is related: openai/openai-python#363
I think the following can confirm that this is not a LangChain bug:
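One way to check this outside of LangChain is to call the Azure completions endpoint with the 'openai' Python package directly. A sketch against the 0.x SDK of that era; the deployment name, credentials, and prompt are placeholders:

```python
import openai

# Placeholder Azure configuration -- substitute your own values.
openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"
openai.api_version = "2023-03-15-preview"
openai.api_key = "<your-api-key>"

response = openai.Completion.create(
    engine="<your-deployment>",  # the Azure deployment name
    prompt="You are a helpful assistant.\nQuestion: What is 2 + 2?\nAnswer:",
    temperature=0,
    max_tokens=64,
)

# If the deployment is a chat model served over the completions endpoint,
# the returned text can end with the raw '<|im_end|>' stop token.
print(repr(response["choices"][0]["text"]))
```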
@zioproto you need to use a chain and fill the prompt, or hardcode something. It comes to my mind that maybe the problem is about "completion" vs "chat" models? Maybe we are using a class made to parse completions with a chat model there.
@pieroit I know it does not make sense to use it that way, but so far all my tests are on the completion API. This problem is not related to any Python implementation, because I can reproduce it with curl calling the API directly.
@pieroit you had a correct hint about chat and completion models. The deployment I was testing is a chat model, so the API call through the completion endpoint returns the raw ChatML stop token '<|im_end|>' in the generated text. The mistake is that I should have used the chat class ('AzureChatOpenAI') instead of the completion class ('AzureOpenAI'). @vnktsh can you confirm which model you are using and whether you are using 'AzureOpenAI' or 'AzureChatOpenAI'?
I confirm that I solved the problem of the trailing '<|im_end|>'. My root cause was using the completion class 'AzureOpenAI' with a chat model deployment. If you want to use a chat model, use 'AzureChatOpenAI' (or 'ChatOpenAI'). If you need the completion class 'AzureOpenAI', use it with a completion model. @vnktsh please confirm this works also for you. Feel free to close the issue if the problem is solved. Thanks
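A minimal sketch of the fix described above, assuming a chat model deployment and the 'AzureChatOpenAI' class from langchain.chat_models; the deployment name and credentials are placeholders:

```python
import os
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

# Placeholder Azure settings -- replace with your own values.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "<your-api-key>"

# Use the chat class for chat model deployments; the chat completions endpoint
# handles the ChatML special tokens itself, so '<|im_end|>' never reaches you.
chat = AzureChatOpenAI(
    deployment_name="<your-chat-deployment>",
    openai_api_version="2023-03-15-preview",
    temperature=0,
)

reply = chat([HumanMessage(content="What is the capital of France?")])
print(reply.content)
```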
But a chat model should have
This worked perfectly! Thanks!
This did not end up working for me unfortunately. I used...

```python
AzureOpenAI(
    model="text-davinci-003",
    temperature=0.01,
    ...
)
```

With the following prompt...

The response I get still contains '<|im_end|>'.
You are right, when I use 'ChatOpenAI' it does not generate '<|im_end|>' anymore.
Hi, @vnktsh! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, the issue you reported is related to the responses from Azure OpenAI APIs containing a stop sequence. However, it seems that the issue is still unresolved at the moment.

If this issue is still relevant to the latest version of the LangChain repository, please let the LangChain team know by commenting on this issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days.

Thank you for your contribution, and please don't hesitate to reach out if you have any further questions or concerns.
Issue you'd like to raise.
When using Azure OpenAI deployments and LangChain agents, the responses contain the stop sequence '<|im_end|>'. This is affecting subsequent prompts and chains. Is there a way to strip this token from the responses?
Example:
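A hypothetical minimal agent setup of the kind described above, where intermediate completions can end with the raw stop token when a chat model deployment is used through the completion-style 'AzureOpenAI' class; the deployment name and tool choice are placeholders, not taken from the original report:

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import AzureOpenAI

# Assumes the Azure OpenAI environment variables (api type, base, key, version)
# are already configured. Completion-style wrapper around an Azure deployment.
llm = AzureOpenAI(deployment_name="<your-deployment>", temperature=0)

tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

# If the deployment is a chat model, intermediate completions can end with
# '<|im_end|>', which then pollutes the agent's scratchpad and later prompts.
agent.run("What is 15% of 280?")
```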
Suggestion:
Provide a way to let agents and chains ignore these start and stop sequences.
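A workaround sketch in the meantime, assuming the completion-style 'AzureOpenAI' class: pass the token as a stop sequence on the call, or strip it from the text before handing it to the next prompt. Neither option is a confirmed LangChain feature, and whether the endpoint accepts special tokens in the 'stop' parameter may vary, so treat both as best-effort:

```python
from langchain.llms import AzureOpenAI

# Assumes the Azure OpenAI environment variables are already configured.
llm = AzureOpenAI(deployment_name="<your-deployment>", temperature=0)

CHATML_TOKENS = ["<|im_end|>", "<|im_start|>"]


def strip_chatml_tokens(text: str) -> str:
    """Remove any leaked ChatML start/stop tokens from a completion."""
    for token in CHATML_TOKENS:
        text = text.replace(token, "")
    return text.strip()


# Option 1: ask the API to stop at the token (support for special tokens in
# 'stop' may vary by endpoint).
text = llm("Question: What is the capital of France?\nAnswer:", stop=["<|im_end|>"])

# Option 2: defensively strip any leaked tokens before the next chain step.
print(strip_chatml_tokens(text))
```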