Document inferencing option defaults #901

Merged: 1 commit merged into fermyon:main on Sep 22, 2023

Conversation

mikkelhegn (Member)

Signed-off-by: Mikkel Mørk Hegnhøj <mikkel@fermyon.com>
@@ -53,7 +53,7 @@ The set of operations is common across all supporting language SDKs:
| Operation | Parameters | Returns | Behavior |
|:-----|:----------------|:-------|:----------------|
| `infer` | model`string`<br /> prompt`string`| `string` | The `infer` is performed on a specific model.<br /> <br />The name of the model is the first parameter provided (i.e. `llama2-chat`, `codellama-instruct`, or other; passed in as a `string`).<br /> <br />The second parameter is a prompt; passed in as a `string`.<br />|
| `infer_with_options` | model`string`<br /> prompt`string`<br /> params`list` | `string` | The `infer_with_options` is performed on a specific model.<br /> <br />The name of the model is the first parameter provided (i.e. `llama2-chat`, `codellama-instruct`, or other; passed in as a `string`).<br /><br /> The second parameter is a prompt; passed in as a `string`.<br /><br /> The third parameter is a mix of float and unsigned integers relating to inferencing parameters in this order: <br />- `max-tokens` (unsigned 32 integer) Note: the backing implementation may return less tokens. <br /> - `repeat-penalty` (float 32) The amount the model should avoid repeating tokens. <br /> - `repeat-penalty-last-n-token-count` (unsigned 32 integer) The number of tokens the model should apply the repeat penalty to. <br /> - `temperature` (float 32) The randomness with which the next token is selected. <br /> - `top-k` (unsigned 32 integer) The number of possible next tokens the model will choose from. <br /> - `top-p` (float 32) The probability total of next tokens the model will choose from. <br /><br /> The result from `infer_with_options` is a `string` |
| `infer_with_options` | model`string`<br /> prompt`string`<br /> params`list` | `string` | The `infer_with_options` is performed on a specific model.<br /> <br />The name of the model is the first parameter provided (i.e. `llama2-chat`, `codellama-instruct`, or other; passed in as a `string`).<br /><br /> The second parameter is a prompt; passed in as a `string`.<br /><br /> The third parameter is a mix of float and unsigned integers relating to inferencing parameters in this order: <br /><br />- `max-tokens` (unsigned 32 integer) Note: the backing implementation may return less tokens. <br /> Default is 100<br /><br /> - `repeat-penalty` (float 32) The amount the model should avoid repeating tokens. <br /> Default is 1.1<br /><br /> - `repeat-penalty-last-n-token-count` (unsigned 32 integer) The number of tokens the model should apply the repeat penalty to. <br /> Default is 64<br /><br /> - `temperature` (float 32) The randomness with which the next token is selected. <br /> Default is 0.8<br /><br /> - `top-k` (unsigned 32 integer) The number of possible next tokens the model will choose from. <br /> Default is 40<br /><br /> - `top-p` (float 32) The probability total of next tokens the model will choose from. <br /> Default is 0.9<br /><br /> The result from `infer_with_options` is a `string` |
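
To make the row above concrete, here is a minimal Rust sketch that calls `infer_with_options` with every option set explicitly to its documented default. The module path (`spin_sdk::llm`), the type names (`InferencingParams`, `InferencingModel`, `InferencingResult`), and the `text` field are assumptions based on this table rather than a verified SDK surface, so check the Rust SDK reference for the exact signatures.

```rust
use spin_sdk::llm;

// A sketch only: module path, type names, and the `text` field are
// assumptions based on the table above, not a verified SDK surface.
fn ask(prompt: &str) -> Result<String, llm::Error> {
    // Every option is set explicitly to its documented default.
    let options = llm::InferencingParams {
        max_tokens: 100,                       // default: 100
        repeat_penalty: 1.1,                   // default: 1.1
        repeat_penalty_last_n_token_count: 64, // default: 64
        temperature: 0.8,                      // default: 0.8
        top_k: 40,                             // default: 40
        top_p: 0.9,                            // default: 0.9
    };

    // Parameters in order: model, prompt, options record.
    let result = llm::infer_with_options(llm::InferencingModel::Llama2Chat, prompt, options)?;
    Ok(result.text) // the generated text (field name assumed)
}
```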
itowlson (Contributor)
Looking at the code I would definitely not describe the options record as a "list" - for many programmers that implies a vector or array kind of thing, rather than a record of named values.

I'm also concerned this could be a significant pain to maintain - that's a lot of text and embedded HTML markup squooshed onto one line, easy to get things mixed up! Instead, I would pull out a separate table called something like "Inference Options" to hold the record info (same content, but now Markdownable), and have the params entry and description refer to that table. E.g.

Suggested change
| `infer_with_options` | model`string`<br /> prompt`string`<br /> params`list` | `string` | The `infer_with_options` is performed on a specific model.<br /> <br />The name of the model is the first parameter provided (i.e. `llama2-chat`, `codellama-instruct`, or other; passed in as a `string`).<br /><br /> The second parameter is a prompt; passed in as a `string`.<br /><br /> The third parameter is a mix of float and unsigned integers relating to inferencing parameters in this order: <br /><br />- `max-tokens` (unsigned 32 integer) Note: the backing implementation may return less tokens. <br /> Default is 100<br /><br /> - `repeat-penalty` (float 32) The amount the model should avoid repeating tokens. <br /> Default is 1.1<br /><br /> - `repeat-penalty-last-n-token-count` (unsigned 32 integer) The number of tokens the model should apply the repeat penalty to. <br /> Default is 64<br /><br /> - `temperature` (float 32) The randomness with which the next token is selected. <br /> Default is 0.8<br /><br /> - `top-k` (unsigned 32 integer) The number of possible next tokens the model will choose from. <br /> Default is 40<br /><br /> - `top-p` (float 32) The probability total of next tokens the model will choose from. <br /> Default is 0.9<br /><br /> The result from `infer_with_options` is a `string` |
| `infer_with_options` | model `string`,<br />prompt `string`,<br />params `record` | `string` | The `infer_with_options` is performed on a specific model.<br /> <br />The first parameter is the name of the model (e.g. `llama2-chat`, `codellama-instruct`, or other), passed in as a `string`.<br /><br /> The second parameter is a prompt, passed in as a `string`.<br /><br /> The third parameter is a record of options to control how the inferencing is done - see the Inferencing Options table for details. |
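
Purely for illustration (not part of this PR's change), the separate "Inferencing Options" table that the comment above suggests could be assembled directly from the values already in the diff, roughly like this:

```markdown
### Inferencing Options

| Option | Type | Default | Description |
|:-------|:-----|:--------|:------------|
| `max-tokens` | unsigned 32-bit integer | 100 | The maximum number of tokens to generate; the backing implementation may return fewer. |
| `repeat-penalty` | 32-bit float | 1.1 | The amount the model should avoid repeating tokens. |
| `repeat-penalty-last-n-token-count` | unsigned 32-bit integer | 64 | The number of tokens the model should apply the repeat penalty to. |
| `temperature` | 32-bit float | 0.8 | The randomness with which the next token is selected. |
| `top-k` | unsigned 32-bit integer | 40 | The number of possible next tokens the model will choose from. |
| `top-p` | 32-bit float | 0.9 | The probability total of next tokens the model will choose from. |
```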

itowlson (Contributor)
The Returns column already says that it returns a string - we can drop that last sentence.

itowlson (Contributor)

More generally we should go back and review how we present APIs. The tabular structure was, I think, adopted in a bit of a hurry for Spin 1.0 (and I think the blame for that is on me), and was already showing its limitations a few APIs ago. Not only does it run hard into the limitations of Markdown tables, it doesn't provide room for talking about the language projections against the API itself. (This was partly an artifact of the hurry. Presenting the abstract API and then notes on the language projections was quicker than writing out multiple similar SDK surfaces at reference quality. But it does have the advantage of focusing on the language neutral nature of the APIs-as-WIT-interfaces.) So I'd be keen to revisit the API pages generally in the light of our greater experience, which would provide a better framework for APIs like this. That's out of scope for this PR of course!

mikkelhegn (Member Author)

@itowlson - This is good feedback, and I agree. The APIs are also different between Rust and JS/TS, as in the latter there is only one function, with an optional options argument.

This PR only added the default values for the options, and I believe your suggestion removes them again - is that true?

If we don't want to add those default values here now, then I'll close this PR, and we can open an issue to fix the docs based on your comments. Let me know what you think.

itowlson (Contributor)

@mikkelhegn My suggestion does not merely remove the default values - it removes ALL details of the options! ...with the recommendation to move all that info, including the default values, to a separate table.

But yes, I somehow missed that the big honkin' text was already there and you were only adding the defaults. Doh! Given that we already have the big cell, I'm okay with adding the defaults to that big cell. Apologies for the misreading.

mikkelhegn (Member Author)

No worries @itowlson :-)

mikkelhegn merged commit d965286 into fermyon:main on Sep 22, 2023 (3 checks passed).
mikkelhegn deleted the ai-api branch on September 22, 2023 at 04:07.