requested number of tokens exceed the max supported by the model #195
Comments
Oh interesting, it might be due to the
Ran into this issue with base davinci and had to set a custom PromptHelper with a lower max_input_size
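For context on the workaround above: a prompt helper reserves room inside the model's context window for the prompt template and the completion, and whatever is left is the budget for retrieved text. A minimal sketch of that arithmetic in plain Python (hypothetical names, not the library's actual implementation):

```python
def available_chunk_size(max_input_size: int,
                         prompt_tokens: int,
                         num_output: int) -> int:
    """Tokens left for context text after reserving room for the
    prompt template and the model's completion."""
    budget = max_input_size - prompt_tokens - num_output
    if budget <= 0:
        raise ValueError("prompt + completion already exceed max_input_size")
    return budget

# Lowering max_input_size (the workaround above) shrinks this budget,
# forcing smaller chunks so requests stay under the model's limit.
print(available_chunk_size(max_input_size=2049, prompt_tokens=200, num_output=256))
```

Setting max_input_size below the model's true limit is therefore a blunt but effective guard: it over-reserves, so even an inaccurate token count is unlikely to overshoot.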
@triptu @VivaLaPanda do you happen to have an example notebook / data I can try out? feel free to DM me as well (can join the discord here, https://discord.gg/58FeekwU, username is jerryjliu98)
i'm able to repro a somewhat similar issue, will take a look! hopefully will have a fix in a bit
ok i think i figured it out! so sorry about that, there was a bug in the way I counted tokens when splitting. I should have a fix soon |
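The fix presumably comes down to enforcing one invariant: the counter used while splitting must be the same one the model enforces. As an illustration of that invariant (not the repo's actual code), a minimal token-budgeted splitter, using whitespace splitting as a stand-in for a real tokenizer:

```python
def split_by_token_budget(text: str, budget: int) -> list[str]:
    """Greedily pack tokens into chunks of at most `budget` tokens each.
    Whitespace split stands in for a real tokenizer here; the bug class
    described above is measuring chunk size with a different counter
    (e.g. characters) than the model's tokenizer, letting chunks overshoot."""
    tokens = text.split()
    chunks = []
    for i in range(0, len(tokens), budget):
        chunks.append(" ".join(tokens[i:i + budget]))
    return chunks

chunks = split_by_token_budget("one two three four five six seven", budget=3)
# Every chunk respects the budget when counted with the same tokenizer.
assert all(len(c.split()) <= 3 for c in chunks)
```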
awesome! joined the discord, let me know if you still need a data sample. |
Sample code
Surprisingly, this seems to be happening for me for all long texts. It doesn't happen when davinci is used, though, and it went unnoticed at first due to #182. Any way I can help in debugging? Which function/file should I look into?
Stack Trace