In practice, our compression strategy works well within the 8K-token limit for small-to-medium PRs.
For very large PRs (tens of files), some files may be overlooked.
You can always switch to a model with a larger context, like 'gpt-3.5-turbo-16k-0613' or 'gpt-4-32k'.
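If you want to automate that fallback, here is a minimal sketch of picking a model based on the diff's token count (this assumes the `tiktoken` tokenizer; the `pick_model` helper and `MODEL_LIMITS` table are hypothetical illustrations, not part of this project):

```python
import tiktoken

# Published context-window sizes for the larger-context models
# mentioned above (hypothetical lookup table for this sketch).
MODEL_LIMITS = {
    "gpt-3.5-turbo-16k-0613": 16384,
    "gpt-4-32k": 32768,
}
DEFAULT_MODEL = "gpt-4"  # 8K context
DEFAULT_LIMIT = 8192

def pick_model(diff_text: str) -> str:
    """Return the smallest model whose context window fits the diff."""
    enc = tiktoken.encoding_for_model(DEFAULT_MODEL)
    n_tokens = len(enc.encode(diff_text))
    if n_tokens <= DEFAULT_LIMIT:
        return DEFAULT_MODEL
    for model, limit in sorted(MODEL_LIMITS.items(), key=lambda kv: kv[1]):
        if n_tokens <= limit:
            return model
    # Diff exceeds every available context window; compression
    # (or splitting the PR) remains the only option.
    return "gpt-4-32k"
```

For example, `pick_model(diff)` would return `'gpt-3.5-turbo-16k-0613'` for a diff of roughly 10K tokens, only escalating to `'gpt-4-32k'` when the 16K window is also exceeded.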
Hey, is there any way to deal with the token limitation?