docs: update google mode capabilities & jsdoc (#1539)
lgrammel committed May 9, 2024
1 parent 4eb443b commit ec07fbf
Showing 5 changed files with 9 additions and 8 deletions.
2 changes: 1 addition & 1 deletion content/docs/03-ai-sdk-core/02-providers-and-models.mdx
@@ -52,4 +52,4 @@ Here are the capabilities of popular models:
 | [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-3-haiku-20240307` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
 | [Mistral](/providers/ai-sdk-providers/mistral) | `mistral-large-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Mistral](/providers/ai-sdk-providers/mistral) | `mistral-small-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| [Google](/providers/ai-sdk-providers/google-generative-ai) | `models/gemini-1.5-pro-latest` | <Check size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| [Google](/providers/ai-sdk-providers/google-generative-ai) | `models/gemini-1.5-pro-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
@@ -91,4 +91,4 @@ The following optional settings are available for Google Generative AI models:
 
 | Model                          | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
 | ------------------------------ | ------------------- | ------------------- | ------------------- | ------------------- |
-| `models/gemini-1.5-pro-latest` | <Check size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| `models/gemini-1.5-pro-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
4 changes: 2 additions & 2 deletions packages/core/core/generate-text/generate-text.ts
@@ -17,14 +17,14 @@ Generate a text and call tools for a given prompt using a language model.
 This function does not stream the output. If you want to stream the output, use `streamText` instead.
 @param model - The language model to use.
-@param tools - The tools that the model can call. The model needs to support calling tools.
+@param tools - Tools that are accessible to and can be called by the model. The model needs to support calling tools.
 @param system - A system message that will be part of the prompt.
 @param prompt - A simple text prompt. You can either use `prompt` or `messages` but not both.
 @param messages - A list of messages. You can either use `prompt` or `messages` but not both.
 @param maxTokens - Maximum number of tokens to generate.
-@param temperature - Temperature setting.
+@param temperature - Temperature setting.
 The value is passed through to the provider. The range depends on the provider and model.
 It is recommended to set either `temperature` or `topP`, but not both.
 @param topP - Nucleus sampling.
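The `generateText` JSDoc in this hunk documents two contracts: `prompt` and `messages` are mutually exclusive, and `temperature`/`topP` are passed through to the provider unchanged. A minimal, dependency-free sketch of that contract — `SketchModel` and `generateTextSketch` are hypothetical stand-ins, not the actual AI SDK implementation:

```typescript
// Hypothetical stand-in for a provider language model.
interface SketchModel {
  doGenerate(options: { prompt: string; temperature?: number; topP?: number }): { text: string };
}

interface GenerateTextSketchArgs {
  model: SketchModel;
  system?: string;
  prompt?: string;
  messages?: { role: 'user' | 'assistant'; content: string }[];
  maxTokens?: number;
  temperature?: number; // passed through to the provider; range is provider-specific
  topP?: number; // nucleus sampling; set either temperature or topP, not both
}

function generateTextSketch(args: GenerateTextSketchArgs): { text: string } {
  // `prompt` and `messages` are mutually exclusive, as the JSDoc states.
  if (args.prompt != null && args.messages != null) {
    throw new Error('Use either `prompt` or `messages`, but not both.');
  }
  const promptText =
    args.prompt ?? args.messages?.map((m) => m.content).join('\n') ?? '';
  // temperature and topP are forwarded to the provider unchanged.
  return args.model.doGenerate({
    prompt: promptText,
    temperature: args.temperature,
    topP: args.topP,
  });
}
```

The sketch makes the validation visible at the call boundary; the real SDK performs the equivalent checks internally before hitting the provider.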
6 changes: 3 additions & 3 deletions packages/core/core/generate-text/stream-text.ts
@@ -29,14 +29,14 @@ Generate a text and call tools for a given prompt using a language model.
 This function streams the output. If you do not want to stream the output, use `generateText` instead.
 @param model - The language model to use.
-@param tools - The tools that the model can call. The model needs to support calling tools.
+@param tools - Tools that are accessible to and can be called by the model. The model needs to support calling tools.
 @param system - A system message that will be part of the prompt.
 @param prompt - A simple text prompt. You can either use `prompt` or `messages` but not both.
 @param messages - A list of messages. You can either use `prompt` or `messages` but not both.
 @param maxTokens - Maximum number of tokens to generate.
-@param temperature - Temperature setting.
+@param temperature - Temperature setting.
 The value is passed through to the provider. The range depends on the provider and model.
 It is recommended to set either `temperature` or `topP`, but not both.
 @param topP - Nucleus sampling.
@@ -144,7 +144,7 @@ export class StreamTextResult<TOOLS extends Record<string, CoreTool>> {
   private originalStream: ReadableStream<TextStreamPart<TOOLS>>;
 
   /**
-  Warnings from the model provider (e.g. unsupported settings)
+  Warnings from the model provider (e.g. unsupported settings).
   */
   readonly warnings: CallWarning[] | undefined;
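The second hunk touches the `warnings` JSDoc on `StreamTextResult`: providers report unsupported settings as warnings rather than failing the call. A hedged sketch of that behavior — `CallWarningSketch`, `StreamResultSketch`, and `streamTextSketch` are illustrative names, and the imagined provider that ignores `topP` is an assumption for the example:

```typescript
type CallWarningSketch = { type: 'unsupported-setting'; setting: string };

class StreamResultSketch {
  constructor(
    // Simplified stand-in for ReadableStream<TextStreamPart<TOOLS>>.
    readonly textStream: string[],
    // Populated when the provider could not honor a requested setting.
    readonly warnings: CallWarningSketch[] | undefined,
  ) {}
}

function streamTextSketch(options: { temperature?: number; topP?: number }): StreamResultSketch {
  const warnings: CallWarningSketch[] = [];
  // Suppose this provider does not support `topP`: it warns instead of throwing.
  if (options.topP != null) {
    warnings.push({ type: 'unsupported-setting', setting: 'topP' });
  }
  return new StreamResultSketch(['hello', ' world'], warnings.length > 0 ? warnings : undefined);
}
```

Callers can surface `result.warnings` in logs so silently dropped settings do not go unnoticed while the stream still completes.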
3 changes: 2 additions & 1 deletion packages/core/core/tool/tool.ts
@@ -14,12 +14,13 @@ An optional description of what the tool does. Will be used by the language mode
 
 /**
 The schema of the input that the tool expects. The language model will use this to generate the input.
+It is also used to validate the output of the language model.
 Use descriptions to make the input understandable for the language model.
 */
 parameters: PARAMETERS;
 
 /**
-An optional execute function for the actual execution function of the tool.
+An async function that is called with the arguments from the tool call and produces a result.
 If not provided, the tool will not be executed automatically.
 */
 execute?: (args: z.infer<PARAMETERS>) => PromiseLike<RESULT>;
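The updated `tool.ts` JSDoc describes the two halves of a tool: a `parameters` schema that both guides generation and validates the model's output, and an optional async `execute` that runs automatically only when provided. A dependency-free sketch of that contract, with the zod schema replaced by a plain validator function so the example is self-contained — `ToolSketch`, `weatherTool`, and `runToolCall` are hypothetical names, not SDK exports:

```typescript
interface ToolSketch<ARGS, RESULT> {
  description?: string;
  // Stand-in for the zod `parameters` schema: validates raw input and returns typed args.
  parameters: (input: unknown) => ARGS;
  // Optional async execution; when absent, the caller handles the tool call itself.
  execute?: (args: ARGS) => PromiseLike<RESULT>;
}

const weatherTool: ToolSketch<{ city: string }, string> = {
  description: 'Get the weather for a city.',
  parameters: (input) => {
    const obj = input as { city?: unknown };
    if (typeof obj?.city !== 'string') throw new Error('city must be a string');
    return { city: obj.city };
  },
  execute: async ({ city }) => `It is sunny in ${city}.`,
};

async function runToolCall(
  tool: ToolSketch<{ city: string }, string>,
  rawArgs: unknown,
): Promise<string | undefined> {
  // Validate the language model's generated arguments before doing anything with them.
  const args = tool.parameters(rawArgs);
  // Execute automatically only when `execute` is provided, per the JSDoc above.
  return tool.execute ? await tool.execute(args) : undefined;
}
```

Leaving `execute` off is useful when tool results come from elsewhere (for example, a human approval step) and the model's call should only be validated and recorded.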
