From 4d31ee2275a17be0077d106183ce2200bda7d03b Mon Sep 17 00:00:00 2001
From: PriNova
Date: Thu, 26 Dec 2024 19:07:38 +0000
Subject: [PATCH 1/2] feat(docs): add additional configuration options

These changes introduce support for the OpenAI provider in the Cody VS Code
extension configuration, add separate options for input and output token
sizes, and allow provider-specific parameters, giving users more flexibility
when configuring their Cody integration.
---
 docs/cody/clients/install-vscode.mdx | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/docs/cody/clients/install-vscode.mdx b/docs/cody/clients/install-vscode.mdx
index f9aa58ccf..3861551e9 100644
--- a/docs/cody/clients/install-vscode.mdx
+++ b/docs/cody/clients/install-vscode.mdx
@@ -410,16 +410,20 @@ Example VS Code user settings JSON configuration:

 ### Provider configuration options

-- `provider`: `"google"`, `"groq"` or `"ollama"`
-  - The LLM provider type.
-- `model`: `string`
-  - The ID of the model, e.g. `"gemini-1.5-pro-latest"`
-- `tokens`: `number` - optional
-  - The context window size of the model. Default: `7000`.
-- `apiKey`: `string` - optional
-  - The API key for the endpoint. Required if the provider is `"google"` or `"groq"`.
-- `apiEndpoint`: `string` - optional
-  - The endpoint URL, if you don't want to use the provider’s default endpoint.
+- `"provider"`: `"google"`, `"groq"`, `"ollama"` or `"openai"`
+  - The LLM provider type.
+- `"model"`: `string`
+  - The ID of the model, e.g. "gemini-2.0-flash-exp"
+- `"inputTokens"`: `number` - optional
+  - The context window size of the model's input. Default: 7000.
+- `"outputTokens"`: `number` - optional
+  - The context window size of the model's output. Default: 4000.
+- `"apiKey"`: `string` - optional
+  - The API key for the endpoint. Required if the provider is "google", "groq" or "OpenAI".
+- `"apiEndpoint"`: `string` - optional
+  - The endpoint URL, if you don't want to use the provider’s default endpoint.
+- `"options"` : `object` - optional
+  - Additional parameters like `temperature`, `topK`, `topP` based on provider documentation.

 ### Debugging experimental models

From 6802a4ecf603a3b5ee2332e4f0da6c4174c670bb Mon Sep 17 00:00:00 2001
From: PriNova
Date: Thu, 26 Dec 2024 19:25:37 +0000
Subject: [PATCH 2/2] small formatting fixes

---
 docs/cody/clients/install-vscode.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/cody/clients/install-vscode.mdx b/docs/cody/clients/install-vscode.mdx
index 3861551e9..da4fa9d02 100644
--- a/docs/cody/clients/install-vscode.mdx
+++ b/docs/cody/clients/install-vscode.mdx
@@ -413,13 +413,13 @@
 - `"provider"`: `"google"`, `"groq"`, `"ollama"` or `"openai"`
   - The LLM provider type.
 - `"model"`: `string`
-  - The ID of the model, e.g. "gemini-2.0-flash-exp"
+  - The ID of the model, e.g. `"gemini-2.0-flash-exp"`
 - `"inputTokens"`: `number` - optional
   - The context window size of the model's input. Default: 7000.
 - `"outputTokens"`: `number` - optional
   - The context window size of the model's output. Default: 4000.
 - `"apiKey"`: `string` - optional
-  - The API key for the endpoint. Required if the provider is "google", "groq" or "OpenAI".
+  - The API key for the endpoint. Required if the provider is `"google"`, `"groq"`, `"ollama"` or `"openai"`.
 - `"apiEndpoint"`: `string` - optional
   - The endpoint URL, if you don't want to use the provider’s default endpoint.
 - `"options"` : `object` - optional
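For reviewers, here is a minimal `settings.json` sketch of how the options documented in this patch fit together. It assumes these entries belong in the `cody.dev.models` array that the surrounding section ("Example VS Code user settings JSON configuration") refers to; the model IDs, API key placeholders, and endpoint URL are illustrative, not prescriptive.

```jsonc
// VS Code settings.json — illustrative values only
{
  "cody.dev.models": [
    {
      "provider": "google",
      "model": "gemini-2.0-flash-exp",
      "inputTokens": 7000,        // optional; documented default is 7000
      "outputTokens": 4000,       // optional; documented default is 4000
      "apiKey": "<YOUR_API_KEY>", // per the patched docs, required for "google", "groq", "ollama" and "openai"
      "options": {                // optional; passed through to the provider
        "temperature": 0.2,
        "topK": 40,
        "topP": 0.95
      }
    },
    {
      "provider": "openai",
      "model": "gpt-4o",          // hypothetical model ID
      "apiKey": "<YOUR_API_KEY>",
      "apiEndpoint": "https://my-proxy.example.com/v1" // optional; overrides the provider's default endpoint
    }
  ]
}
```

Which keys are honored under `"options"` varies by provider, so values like `topK` and `topP` should be checked against each provider's API documentation.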