Currently, the `max_tokens` value is set to `300` in the default configuration file (`config.lua`), which creates a high risk of answers being cut off when interacting with a ChatGPT model. With that in mind, could we increase the `max_tokens` value to `4096` to reduce this risk?
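Until the default changes, users can already work around this in their own Neovim config. A minimal sketch is below; the `setup()` call and the `openai_params` key are assumptions based on common plugin conventions, so the real key names in `config.lua` may differ:

```lua
-- Minimal sketch: overriding the default token limit in a user's config.
-- NOTE: require("chatgpt") and the openai_params table are assumed names;
-- check config.lua for the plugin's actual structure.
require("chatgpt").setup({
  openai_params = {
    model = "gpt-3.5-turbo",
    max_tokens = 4096, -- proposed default, up from 300
  },
})
```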
Also, since the default model is currently `gpt-3.5-turbo`, which supports up to 4096 tokens by default (reference), this change would make the experience more convenient for new users. The same could be done for the other available actions, such as `code_readability_analysis` and `code_completion`; we could standardize the definition of the `max_tokens` attribute across all available actions and models (see the sketch below).
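To illustrate the standardization idea, the shared limit could live in one place and be applied to every action definition. This is a hypothetical sketch: only the action names come from the plugin, while `DEFAULT_MAX_TOKENS`, `apply_default`, and the table shapes are illustrative:

```lua
-- Hypothetical sketch of standardizing max_tokens across actions.
-- The real action definitions may be structured differently.
local DEFAULT_MAX_TOKENS = 4096

local actions = {
  code_readability_analysis = { params = { model = "gpt-3.5-turbo" } },
  code_completion           = { params = { model = "gpt-3.5-turbo" } },
}

-- Fill in max_tokens for any action that does not set its own value.
local function apply_default(defs)
  for _, def in pairs(defs) do
    def.params.max_tokens = def.params.max_tokens or DEFAULT_MAX_TOKENS
  end
  return defs
end

apply_default(actions)
```

Keeping the default in a single constant means a future model change (or a bump past 4096) only has to be made in one spot.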