
Add a bunch of options #3

Closed
simonw opened this issue Jul 9, 2023 · 0 comments
Labels
enhancement New feature or request

simonw commented Jul 9, 2023

Without options we are stuck with the library defaults: https://docs.gpt4all.io/gpt4all_python.html#generation-parameters

def generate(
    prompt,
    max_tokens=200,
    temp=0.7,
    top_k=40,
    top_p=0.1,
    repeat_penalty=1.18,
    repeat_last_n=64,
    n_batch=8,
    n_predict=None,
    streaming=False,
):

That max_tokens=200 is particularly limiting.

Note that n_predict is a duplicate of max_tokens (kept for backwards compatibility), so I can ignore that one.
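One way to expose these as plugin options is to merge user-supplied values over the defaults above, treating `n_predict` as an alias for `max_tokens`. This is a hypothetical sketch, not code from llm-gpt4all; the helper name and error handling are assumptions:

```python
# Defaults copied from the gpt4all generate() signature shown above.
GENERATE_DEFAULTS = {
    "max_tokens": 200,
    "temp": 0.7,
    "top_k": 40,
    "top_p": 0.1,
    "repeat_penalty": 1.18,
    "repeat_last_n": 64,
    "n_batch": 8,
}


def build_generate_kwargs(options):
    """Merge user options over the library defaults.

    Hypothetical helper: n_predict is normalised to max_tokens
    (it is a backwards-compatibility alias), and unknown keys
    raise rather than being passed through silently.
    """
    merged = dict(GENERATE_DEFAULTS)
    for key, value in options.items():
        if key == "n_predict":
            key = "max_tokens"
        if key not in GENERATE_DEFAULTS:
            raise ValueError(f"Unknown option: {key}")
        merged[key] = value
    return merged
```

The resulting dict could then be passed straight through as `model.generate(prompt, **kwargs)`, which lifts the 200-token ceiling whenever the user asks for more.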

@simonw simonw added the enhancement New feature or request label Jul 9, 2023
simonw added a commit to RangerMauve/llm-gpt4all that referenced this issue Jan 24, 2024
@simonw simonw closed this as completed in 7f3c8ab Jan 24, 2024
simonw added a commit that referenced this issue Jan 24, 2024