
Change the default max_tokens configuration value #419

Open
jfmainville opened this issue Mar 29, 2024 · 2 comments

@jfmainville

Currently, the max_tokens value is set to 300 in the default configuration file (config.lua), which creates a high risk of answers being cut off when interacting with a ChatGPT model. In that regard, I was wondering if we could increase the max_tokens value to 4096 to reduce this risk?

Also, as the default model is gpt-3.5-turbo at the moment, which supports up to 4096 tokens by default (reference), this would make the process more convenient for new users. The same could be done for the other available actions, such as code_readability_analysis and code_completion. We could standardize the definition of the max_tokens attribute across all available actions and models.
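
A minimal sketch of what that standardized default could look like in config.lua, assuming the defaults live in an openai_params table with per-action overrides (the field names here are illustrative, not taken from the repo):

```lua
-- Hypothetical excerpt of the plugin's default config.lua.
-- Field names (openai_params, actions) are assumptions for illustration.
local defaults = {
  openai_params = {
    model = "gpt-3.5-turbo",
    -- Proposed change: raise the default from 300 to 4096 so answers
    -- are far less likely to be truncated mid-completion.
    max_tokens = 4096,
  },
  actions = {
    -- Apply the same ceiling to per-action settings for consistency.
    code_readability_analysis = { max_tokens = 4096 },
    code_completion = { max_tokens = 4096 },
  },
}

return defaults
```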

@thiswillbeyourgithub

I agree. I was frequently very annoyed to see my chat completions abruptly stop until I figured out that I just needed to increase max_tokens.
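
Until the default changes, a user-side override along these lines works as a stopgap (a sketch assuming the plugin is loaded as require("chatgpt") and that setup() merges user values over the config.lua defaults):

```lua
-- Sketch: override the 300-token default from your own Neovim config.
require("chatgpt").setup({
  openai_params = {
    max_tokens = 4096, -- avoid mid-answer truncation
  },
})
```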

@ser
Copy link

ser commented Apr 16, 2024

My very first interaction was cut off, and it took me a while to understand why.
