Without options we are stuck with the defaults: https://docs.gpt4all.io/gpt4all_python.html#generation-parameters
```python
def generate(
    prompt,
    max_tokens=200,
    temp=0.7,
    top_k=40,
    top_p=0.1,
    repeat_penalty=1.18,
    repeat_last_n=64,
    n_batch=8,
    n_predict=None,
    streaming=False,
):
```
That `max_tokens=200` is particularly limiting.

Note that `n_predict` is a duplicate of `max_tokens` (kept for backwards compatibility), so I can ignore that one.
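One way to expose these as options is to keep the library defaults in a dict and let callers override any subset. This is just a sketch of that pattern, not gpt4all's actual implementation; the `DEFAULTS` dict and `resolve_options` helper are hypothetical names, with values taken from the signature above:

```python
# Hypothetical sketch: the gpt4all generation defaults shown above,
# with caller-supplied overrides merged on top.
DEFAULTS = {
    "max_tokens": 200,
    "temp": 0.7,
    "top_k": 40,
    "top_p": 0.1,
    "repeat_penalty": 1.18,
    "repeat_last_n": 64,
    "n_batch": 8,
}

def resolve_options(**overrides):
    """Return the defaults with any recognized overrides applied."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown generation options: {sorted(unknown)}")
    # Dict unpacking: later keys win, so overrides replace defaults.
    return {**DEFAULTS, **overrides}
```

With something like this in place, `resolve_options(max_tokens=400)` would lift the 200-token cap while leaving the other defaults alone.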
Commits referencing this issue:

- 2368f39: Bump max_tokens from 200 up to 400 by default, refs #3
- 624b75b, 7f3c8ab: Documentation for new model options, refs simonw#17, closes simonw#3
- 32eb69d: Release 0.3 (refs #3, #10, #17, #18, #21)