how to deal with token limitation #112

Closed
sheenhx opened this issue Jul 21, 2023 · 1 comment

sheenhx commented Jul 21, 2023

Hey, is there any way to deal with the token limitation?

mrT23 (Collaborator) commented Jul 21, 2023

We have an extensive compression strategy to deal with token limitations; see:
https://github.com/Codium-ai/pr-agent/blob/main/PR_COMPRESSION.md

In practice, for small-to-medium PRs, our compression strategy works well within an 8K-token limit.
For very large PRs (tens of files), some files may be overlooked.

You can always switch to a model with a larger context window, like 'gpt-3.5-turbo-16k-0613' or 'gpt-4-32k'.
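
As a rough illustration (not pr-agent's actual logic), you could estimate a diff's token count with OpenAI's `tiktoken` library and pick a model whose context window can hold it. The thresholds and the `my_pr.diff` file below are made-up assumptions for the example:

```python
# Sketch: count a diff's tokens and choose a model accordingly.
# Thresholds and model names are illustrative, not pr-agent's internals.
import tiktoken

def pick_model(diff_text: str) -> str:
    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4
    n_tokens = len(enc.encode(diff_text))
    # Leave headroom for the prompt template and the model's answer.
    if n_tokens < 6_000:
        return "gpt-3.5-turbo-0613"        # fits comfortably within 8K
    elif n_tokens < 14_000:
        return "gpt-3.5-turbo-16k-0613"    # larger context window
    else:
        return "gpt-4-32k"                 # very large PRs

if __name__ == "__main__":
    # "my_pr.diff" is a hypothetical file holding the PR's unified diff.
    with open("my_pr.diff") as f:
        print(pick_model(f.read()))
```

Even with a larger-context model, the compression strategy linked above still applies, so the two approaches can be combined.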
