Bug: Unhandled streaming API Error when max tokens are exceeded. #50
Comments
@watsy0007 This is expected behaviour with the gpt-3.5-turbo model. The context window length (the amount of input you can give it) of that model is restricted to 4K tokens. There are later models with longer context windows. If it's important, you can check the approximate number of tokens in your input before calling OpenAI, using one of the GPT tokenizer libraries, e.g. https://github.com/LiboShen/gpt3-tokenizer-elixir.
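For reference, a pre-flight token check along those lines could look like the sketch below. It assumes a `Gpt3Tokenizer.token_count/1` function from the linked library; check that library's README for its actual API before relying on this.

```elixir
# Sketch of a pre-flight token check before calling the API.
# Assumes the gpt3_tokenizer dependency exposes Gpt3Tokenizer.token_count/1
# (verify against the library's docs -- this is an assumption).

defmodule TokenGuard do
  # gpt-3.5-turbo's context window (prompt + completion) is 4096 tokens
  @context_window 4096

  def check(prompt, max_completion_tokens) do
    used = Gpt3Tokenizer.token_count(prompt)

    if used + max_completion_tokens <= @context_window do
      :ok
    else
      # Caller decides how to react: truncate, summarize, or report.
      {:error, {:context_overflow, used}}
    end
  end
end
```

A caller could then run `TokenGuard.check(prompt, 1024)` and only issue the chat completion request on `:ok`.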
Thanks for your reply. I followed the instructions in the notebook and used the following code,
and then I get the following error.
I expect the code to behave normally, or to throw some kind of RuntimeError.
@watsy0007 You are correct; I misunderstood the nature of the problem. I'm a little busy at the moment, so I'd be happy if you go ahead and try to fix it with a PR. So far, I have tried to avoid hard-wiring any error handling into the library, preferring to leave it to user code. I would prefer to continue that approach of minimal wrapping around the bare HTTP API, so please keep that in mind.
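In the spirit of keeping error handling in user code, the caller could pattern match on the result of the streaming call instead of the library raising internally. The shapes below (`OpenaiEx.ChatCompletion.create/3`, the error map, and `extract_delta/1`) are illustrative assumptions, not the library's confirmed API:

```elixir
# Hypothetical sketch: handling a streaming error in user code rather
# than inside the library. Function names and return shapes are
# illustrative only -- consult the library's docs for the real API.
case OpenaiEx.ChatCompletion.create(openai, chat_req, stream: true) do
  {:ok, stream} ->
    stream
    # extract_delta/1 is a hypothetical user helper that pulls the
    # content fragment out of each streamed chunk
    |> Stream.each(&IO.write(extract_delta(&1)))
    |> Stream.run()

  {:error, %{"message" => msg}} ->
    # e.g. the "maximum context length exceeded" error from the API
    IO.puts("OpenAI API error: #{msg}")
end
```

This keeps the library as a thin wrapper: it surfaces whatever the HTTP API returned, and the application decides whether that is fatal, retryable, or something to truncate the prompt over.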
I completely agree with you.
@watsy0007 Did you make any progress on this bug? I have some time to work on it at the moment, and will proceed on my own if you're busy.
@restlessronin Not yet; I'm really looking forward to your solution. 😄
@watsy0007 I have published v0.2.3 to hex with what seems a reasonable fix for this issue: when the streaming endpoints return an error message, it is now surfaced to the caller instead of being swallowed. What do you think? Does this seem like a reasonable fix to you?
Just FYI, it helped me. I was supplying an invalid parameter to a streaming call and couldn't see the error response using 0.2.1; with your fix it was clearly visible 😉
@Valian thanks for taking the time to let me know. Good to learn that it was useful. |
Describe the bug
To Reproduce
Use the gpt-3.5-turbo model in stream mode to execute code.
Code snippets
No response
OS
macOS
Elixir version
1.15.2
Library version
v0.2.1