Document inferencing option defaults #901

Merged: 1 commit, Sep 22, 2023
2 changes: 1 addition & 1 deletion in content/spin/serverless-ai-api-guide.md
@@ -53,7 +53,7 @@ The set of operations is common across all supporting language SDKs:
| Operation | Parameters | Returns | Behavior |
|:-----|:----------------|:-------|:----------------|
| `infer` | model`string`<br /> prompt`string`| `string` | The `infer` is performed on a specific model.<br /> <br />The name of the model is the first parameter provided (i.e. `llama2-chat`, `codellama-instruct`, or other; passed in as a `string`).<br /> <br />The second parameter is a prompt; passed in as a `string`.<br />|
-| `infer_with_options` | model`string`<br /> prompt`string`<br /> params`list` | `string` | The `infer_with_options` is performed on a specific model.<br /> <br />The name of the model is the first parameter provided (i.e. `llama2-chat`, `codellama-instruct`, or other; passed in as a `string`).<br /><br /> The second parameter is a prompt; passed in as a `string`.<br /><br /> The third parameter is a mix of float and unsigned integers relating to inferencing parameters in this order: <br />- `max-tokens` (unsigned 32 integer) Note: the backing implementation may return less tokens. <br /> - `repeat-penalty` (float 32) The amount the model should avoid repeating tokens. <br /> - `repeat-penalty-last-n-token-count` (unsigned 32 integer) The number of tokens the model should apply the repeat penalty to. <br /> - `temperature` (float 32) The randomness with which the next token is selected. <br /> - `top-k` (unsigned 32 integer) The number of possible next tokens the model will choose from. <br /> - `top-p` (float 32) The probability total of next tokens the model will choose from. <br /><br /> The result from `infer_with_options` is a `string` |
+| `infer_with_options` | model`string`<br /> prompt`string`<br /> params`list` | `string` | The `infer_with_options` is performed on a specific model.<br /> <br />The name of the model is the first parameter provided (i.e. `llama2-chat`, `codellama-instruct`, or other; passed in as a `string`).<br /><br /> The second parameter is a prompt; passed in as a `string`.<br /><br /> The third parameter is a mix of float and unsigned integers relating to inferencing parameters in this order: <br /><br />- `max-tokens` (unsigned 32 integer) Note: the backing implementation may return less tokens. <br /> Default is 100<br /><br /> - `repeat-penalty` (float 32) The amount the model should avoid repeating tokens. <br /> Default is 1.1<br /><br /> - `repeat-penalty-last-n-token-count` (unsigned 32 integer) The number of tokens the model should apply the repeat penalty to. <br /> Default is 64<br /><br /> - `temperature` (float 32) The randomness with which the next token is selected. <br /> Default is 0.8<br /><br /> - `top-k` (unsigned 32 integer) The number of possible next tokens the model will choose from. <br /> Default is 40<br /><br /> - `top-p` (float 32) The probability total of next tokens the model will choose from. <br /> Default is 0.9<br /><br /> The result from `infer_with_options` is a `string` |
Contributor

Looking at the code I would definitely not describe the options record as a "list" - for many programmers that implies a vector or array kind of thing, rather than a record of named values.

I'm also concerned this could be a significant pain to maintain - that's a lot of text and embedded HTML markup squooshed onto one line, easy to get things mixed up! Instead, I would pull out a separate table called something like "Inference Options" to hold the record info (same content, but now Markdownable), and have the params entry and description refer to that table. E.g.

Suggested change
| `infer_with_options` | model`string`<br /> prompt`string`<br /> params`list` | `string` | The `infer_with_options` is performed on a specific model.<br /> <br />The name of the model is the first parameter provided (i.e. `llama2-chat`, `codellama-instruct`, or other; passed in as a `string`).<br /><br /> The second parameter is a prompt; passed in as a `string`.<br /><br /> The third parameter is a mix of float and unsigned integers relating to inferencing parameters in this order: <br /><br />- `max-tokens` (unsigned 32 integer) Note: the backing implementation may return less tokens. <br /> Default is 100<br /><br /> - `repeat-penalty` (float 32) The amount the model should avoid repeating tokens. <br /> Default is 1.1<br /><br /> - `repeat-penalty-last-n-token-count` (unsigned 32 integer) The number of tokens the model should apply the repeat penalty to. <br /> Default is 64<br /><br /> - `temperature` (float 32) The randomness with which the next token is selected. <br /> Default is 0.8<br /><br /> - `top-k` (unsigned 32 integer) The number of possible next tokens the model will choose from. <br /> Default is 40<br /><br /> - `top-p` (float 32) The probability total of next tokens the model will choose from. <br /> Default is 0.9<br /><br /> The result from `infer_with_options` is a `string` |
| `infer_with_options` | model `string`,<br />prompt `string`,<br />params `record` | `string` | The `infer_with_options` is performed on a specific model.<br /> <br />The first parameter is the name of the model (e.g. `llama2-chat`, `codellama-instruct`, or other), passed in as a `string`.<br /><br /> The second parameter is a prompt, passed in as a `string`.<br /><br /> The third parameter is a record of options to control how the inferencing is done - see the Inferencing Options table for details. |
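As an illustration of that suggestion, an "Inferencing Options" table built only from the names, types, and defaults already present in the updated row above might look like this (a sketch, not part of this PR's diff):

| Option | Type | Default | Behavior |
|:-------|:-----|:--------|:---------|
| `max-tokens` | unsigned 32 integer | 100 | The maximum number of tokens to generate; the backing implementation may return fewer tokens. |
| `repeat-penalty` | float 32 | 1.1 | The amount the model should avoid repeating tokens. |
| `repeat-penalty-last-n-token-count` | unsigned 32 integer | 64 | The number of tokens the model should apply the repeat penalty to. |
| `temperature` | float 32 | 0.8 | The randomness with which the next token is selected. |
| `top-k` | unsigned 32 integer | 40 | The number of possible next tokens the model will choose from. |
| `top-p` | float 32 | 0.9 | The probability total of next tokens the model will choose from. |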

Contributor

The Returns column already says that it returns a string - we can drop that last sentence.

| `generate-embeddings` | model`string`<br /> prompt`list<string>`| `string` | The `generate-embeddings` is performed on a specific model.<br /> <br />The name of the model is the first parameter provided (i.e. `all-minilm-l6-v2`, passed in as a `string`).<br /> <br />The second parameter is a prompt; passed in as a `list` of `string`s.<br /><br /> The result from `generate-embeddings` is a two-dimension array containing float32 type values only |

The exact detail of calling these operations from your application depends on your language:
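For example, a minimal Rust sketch of calling `infer_with_options` from a Spin HTTP component might look like the following. The type and field names (`InferencingParams`, `InferencingModel::Llama2Chat`, the component signature) are assumptions based on the Spin Rust SDK and may differ between SDK versions and languages:

```rust
use anyhow::Result;
use spin_sdk::http::{IntoResponse, Request, Response};
use spin_sdk::http_component;
use spin_sdk::llm::{self, InferencingModel, InferencingParams};

/// HTTP component that runs an inference with explicit options.
/// The values below match the documented defaults, so this call should
/// behave the same as plain `llm::infer`; change any field to override it.
#[http_component]
fn handle(_req: Request) -> Result<impl IntoResponse> {
    let options = InferencingParams {
        max_tokens: 100,                       // default 100
        repeat_penalty: 1.1,                   // default 1.1
        repeat_penalty_last_n_token_count: 64, // default 64
        temperature: 0.8,                      // default 0.8
        top_k: 40,                             // default 40
        top_p: 0.9,                            // default 0.9
    };

    let result = llm::infer_with_options(
        InferencingModel::Llama2Chat,
        "Tell me a joke about WebAssembly.",
        options,
    )?;

    Ok(Response::builder()
        .status(200)
        .body(result.text)
        .build())
}
```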