Set the max_tokens default to 4096 (#26)
Co-authored-by: MARIO SEIXAS <seixas@ib.bsb.br>
rikhuijzer and marioseixas committed Mar 8, 2023
1 parent 6b489b3 commit f9859c3
Showing 1 changed file with 3 additions and 2 deletions.
ata/src/help.rs: 5 changes (3 additions & 2 deletions)
```diff
@@ -37,7 +37,7 @@ Thanks to <https://github.com/kkawakam/rustyline#emacs-mode-default-mode>.
 
 const EXAMPLE_TOML: &str = r#"api_key = "<YOUR SECRET API KEY>"
 model = "gpt-3.5-turbo"
-max_tokens = 1000
+max_tokens = 4096
 temperature = 0.8"#;
 
 pub fn missing_toml(args: Vec<String>) {
@@ -54,7 +54,8 @@ To fix this, use `{} --config=<Path to ata.toml>` or create `{1}`. For the last
 Next, replace `<YOUR SECRET API KEY>` with your API key, which you can request via https://beta.openai.com/account/api-keys.
-The `max_tokens` sets the maximum amount of tokens that the server will answer with.
+The `max_tokens` sets the maximum amount of tokens that the server can answer with.
+Longer answers will be truncated.
 The `temperature` sets the `sampling temperature`. From the OpenAI API docs: "What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer." According to Stephen Wolfram [1], setting it to a higher value such as 0.8 will likely work best in practice.
```
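For context, a minimal sketch of how a config like `EXAMPLE_TOML` could be deserialized, assuming the `serde` and `toml` crates; this is not the actual ata implementation, and the `Config` struct and `default_max_tokens` helper are hypothetical names. The fallback mirrors the 4096 default set by this commit:

```rust
// Hypothetical sketch, not the ata source: deserialize the example config,
// falling back to 4096 when `max_tokens` is absent from ata.toml.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Config {
    api_key: String,
    model: String,
    #[serde(default = "default_max_tokens")]
    max_tokens: u32,
    temperature: f64,
}

// Mirrors the new default from this commit.
fn default_max_tokens() -> u32 {
    4096
}

fn main() {
    let example = r#"api_key = "<YOUR SECRET API KEY>"
model = "gpt-3.5-turbo"
max_tokens = 4096
temperature = 0.8"#;
    let config: Config = toml::from_str(example).expect("invalid ata.toml");
    println!("{config:?}");
}
```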

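The quoted docs describe sampling temperature informally; a self-contained illustration of the underlying mechanism (an assumption for clarity, not code from this repository): logits are divided by the temperature before the softmax, so values near 0 approach argmax sampling while higher values such as 0.8 or 0.9 spread probability across more tokens.

```rust
// Illustration of sampling temperature (not from this repository):
// probabilities are softmax(logits / T), so lower T sharpens the
// distribution toward the argmax and higher T flattens it.
fn softmax_with_temperature(logits: &[f64], temperature: f64) -> Vec<f64> {
    let t = temperature.max(1e-6); // guard: T -> 0 approaches argmax sampling
    // Subtract the max logit for numerical stability; the result is unchanged.
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = logits.iter().map(|&l| ((l - max) / t).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

fn main() {
    let logits = [2.0, 1.0, 0.1];
    println!("T = 0.1: {:?}", softmax_with_temperature(&logits, 0.1)); // near argmax
    println!("T = 0.8: {:?}", softmax_with_temperature(&logits, 0.8));
    println!("T = 2.0: {:?}", softmax_with_temperature(&logits, 2.0)); // flatter
}
```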