Set the max_tokens default to 4096 #26

Merged
merged 1 commit on Mar 8, 2023
5 changes: 3 additions & 2 deletions ata/src/help.rs
@@ -37,7 +37,7 @@ Thanks to <https://github.com/kkawakam/rustyline#emacs-mode-default-mode>.

const EXAMPLE_TOML: &str = r#"api_key = "<YOUR SECRET API KEY>"
model = "gpt-3.5-turbo"
-max_tokens = 1000
+max_tokens = 4096
temperature = 0.8"#;

pub fn missing_toml(args: Vec<String>) {
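For reference, the example config above can be parsed with the serde and toml crates. The following is a minimal sketch under that assumption, not ata's actual code; the struct and field names simply mirror the keys shown in EXAMPLE_TOML.

use serde::Deserialize;

// Illustrative struct mirroring the keys in EXAMPLE_TOML; ata's real config
// type may be named and structured differently.
#[derive(Debug, Deserialize)]
struct ExampleConfig {
    api_key: String,
    model: String,
    max_tokens: u32,
    temperature: f64,
}

fn main() {
    let text = r#"api_key = "<YOUR SECRET API KEY>"
model = "gpt-3.5-turbo"
max_tokens = 4096
temperature = 0.8"#;
    // toml::from_str deserializes the TOML text into the struct.
    let config: ExampleConfig = toml::from_str(text).expect("invalid TOML");
    println!("{config:?}");
}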
@@ -54,7 +54,8 @@ To fix this, use `{} --config=<Path to ata.toml>` or create `{1}`. For the last

Next, replace `<YOUR SECRET API KEY>` with your API key, which you can request via https://beta.openai.com/account/api-keys.

-The `max_tokens` sets the maximum amount of tokens that the server will answer with.
+The `max_tokens` sets the maximum amount of tokens that the server can answer with.
+Longer answers will be truncated.

The `temperature` sets the `sampling temperature`. From the OpenAI API docs: "What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer." According to Stephen Wolfram [1], setting it to a higher value such as 0.8 will likely work best in practice.

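To make the `max_tokens` and `temperature` settings concrete, here is a minimal sketch of where they end up in a request to the OpenAI chat completions API. It assumes the serde_json crate and uses the public API field names; it is not ata's actual request-building code.

use serde_json::{json, Value};

// Sketch only: shows where max_tokens and temperature are placed in an
// OpenAI chat completions request body.
fn build_request_body(model: &str, prompt: &str, max_tokens: u32, temperature: f64) -> Value {
    json!({
        "model": model,
        "messages": [{ "role": "user", "content": prompt }],
        // The server stops after at most this many completion tokens; a
        // response cut off here is reported with finish_reason == "length".
        "max_tokens": max_tokens,
        "temperature": temperature,
    })
}

fn main() {
    // With the new default, answers may run up to 4096 tokens before being truncated.
    let body = build_request_body("gpt-3.5-turbo", "Hello!", 4096, 0.8);
    println!("{body}");
}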