diff --git a/pages/generative-apis/reference-content/adding-ai-to-vscode-using-continue.mdx b/pages/generative-apis/reference-content/adding-ai-to-vscode-using-continue.mdx
index 78df4601d1..a12fbf4487 100644
--- a/pages/generative-apis/reference-content/adding-ai-to-vscode-using-continue.mdx
+++ b/pages/generative-apis/reference-content/adding-ai-to-vscode-using-continue.mdx
@@ -36,56 +36,123 @@ To link Continue with Scaleway's Generative APIs, you can use built-in menus fro
- Click **Continue** in the menu on the left.
- In the prompt section, click on **Select model** dropdown, then on **Add Chat model**.
- Select **Scaleway** as provider.
-- Select the model you want to use (we recommend `Qwen 2.5 Coder 32b` to get started with).
+- Select the model you want to use (we recommend `Qwen 2.5 Coder 32b` to get started; note that it supports chat and autocompletion only).
- Enter your **Scaleway secret key**.
To start with, we recommend you use a Scaleway secret key having access to your `default` Scaleway project.
-These actions will edit automatically your `config.json` file. To edit it manually, see [Configure Continue through configuration file](#configure-continue-through-configuration-file).
+These actions will automatically edit your `config.yaml` file. To edit it manually, see [Configure Continue through a configuration file](#configure-continue-through-a-configuration-file).
- Embeddings and autocomplete models are not yet supported through graphical interface configuration. To enable them, you need to edit the configuration manually, see [Configure Continue through configuration file](#configure-continue-through-configuration-file).
+ Agents, embeddings, and autocomplete models are not yet supported through the graphical configuration interface. To enable them, edit the configuration manually. See [Configure Continue through a configuration file](#configure-continue-through-a-configuration-file) for more information.
#### Configure Continue through a configuration file
To link Continue with Scaleway's Generative APIs, you can configure a settings file:
-- Create a `config.json` file inside your `.continue` directory.
-- Add the following configuration to enable Scaleway's Generative API:
- ```json
- {
- "models": [
- {
- "model": "qwen2.5-coder-32b-instruct",
- "title": "Qwen2.5 Coder",
- "provider": "scaleway",
- "apiKey": "###SCW_SECRET_KEY###"
- }
- ],
- "embeddingsProvider": {
- "model": "bge-multilingual-gemma2",
- "provider": "scaleway",
- "apiKey": "###SCW_SECRET_KEY###"
- },
- "tabAutocompleteModel": {
- "model": "qwen2.5-coder-32b",
- "title": "Qwen2.5 Coder Autocomplete",
- "provider": "scaleway",
- "apiKey": "###SCW_SECRET_KEY###"
- }
- }
+- Open your `config.yaml` settings file:
+ - If you have already configured a **Local Assistant**, click **Local Assistant**, then click the **wheel icon** to open your existing `config.yaml`
+ - Otherwise, create a `config.yaml` file inside your `.continue` directory.
+- Add the following configuration to enable Scaleway's Generative APIs. This configuration uses a different model for each task:
+ - `devstral-small-2505` for agentic workflows through a chat interface
+ - `qwen2.5-coder-32b` for autocompletion when editing a file
+ - `bge-multilingual-gemma2` for embedding and retrieving code context
+ ```yaml
+ name: Continue Config
+ version: 0.0.1
+ models:
+ - name: Devstral - Scaleway
+ provider: openai
+ model: devstral-small-2505
+ apiBase: https://api.scaleway.ai/v1/
+ apiKey: ###SCW_SECRET_KEY###
+ defaultCompletionOptions:
+ maxTokens: 8000
+ contextLength: 50000
+ roles:
+ - chat
+ - apply
+ - embed
+ - edit
+ capabilities:
+ - tool_use
+ - name: Autocomplete - Scaleway
+ provider: openai
+ model: qwen2.5-coder-32b
+ apiBase: https://api.scaleway.ai/v1/
+ apiKey: ###SCW_SECRET_KEY###
+ defaultCompletionOptions:
+ maxTokens: 8000
+ contextLength: 50000
+ roles:
+ - autocomplete
+ - name: Embeddings Model - Scaleway
+ provider: openai
+ model: bge-multilingual-gemma2
+ apiBase: https://api.scaleway.ai/v1/
+ apiKey: ###SCW_SECRET_KEY###
+ roles:
+ - embed
+ embedOptions:
+ maxChunkSize: 256
+ maxBatchSize: 32
+ context:
+ - provider: problems
+ - provider: tree
+ - provider: url
+ - provider: search
+ - provider: folder
+ - provider: codebase
+ - provider: web
+ params:
+ n: 3
+ - provider: open
+ params:
+ onlyPinned: true
+ - provider: docs
+ - provider: terminal
+ - provider: code
+ - provider: diff
+ - provider: currentFile
```
- Save the file at the correct location:
- - Linux/macOS: `~/.continue/config.json`
- - Windows: `%USERPROFILE%\.continue\config.json`
+ - Linux/macOS: `~/.continue/config.yaml`
+ - Windows: `%USERPROFILE%\.continue\config.yaml`
+- In **Local Assistant**, click on **Reload config** or restart VS Code.
+
+Alternatively, you can use a `config.json` file with the following format. Note that this format is deprecated; we recommend using `config.yaml` instead.
+```json
+{
+ "models": [
+ {
+ "model": "devstral-small-2505",
+ "title": "Devstral - Scaleway",
+ "provider": "openai",
+ "apiKey": "###SCW_SECRET_KEY###"
+ }
+ ],
+ "embeddingsProvider": {
+ "model": "bge-multilingual-gemma2",
+ "provider": "openai",
+ "apiKey": "###SCW_SECRET_KEY###"
+ },
+ "tabAutocompleteModel": {
+ "model": "qwen2.5-coder-32b",
+ "title": "Autocomplete - Scaleway",
+ "provider": "openai",
+ "apiKey": "###SCW_SECRET_KEY###"
+ }
+}
+```
- For more details on configuring `config.json`, refer to the [official Continue documentation](https://docs.continue.dev/reference).
+ For more details on configuring `config.yaml`, refer to the [official Continue documentation](https://docs.continue.dev/reference).
If you want to limit access to a specific Scaleway Project, you should add the field `"apiBase": "https://api.scaleway.ai/###PROJECT_ID###/v1/"` for each model (ie. `models`, `embeddingsProvider` and `tabAutocompleteModel`) since the default URL `https://api.scaleway.ai/v1/` can only be used with the `default` project.
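+
+In the `config.yaml` format, scoping a model to a specific Project looks like the following sketch (replace `###PROJECT_ID###` with your Project ID; the other fields stay as configured above):
+
+```yaml
+models:
+  - name: Devstral - Scaleway
+    provider: openai
+    model: devstral-small-2505
+    apiBase: https://api.scaleway.ai/###PROJECT_ID###/v1/
+    apiKey: ###SCW_SECRET_KEY###
+```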
+
### Activate Continue in VS Code
After configuring the API, open VS Code and activate Continue:
@@ -99,15 +166,10 @@ After configuring the API, open VS Code and activate Continue:
### Going further
-You can add additional parameters to configure your model behaviour by editing `config.json`.
-For instance, you can add the following `systemMessage` value to modify LLM messages `"role":"system"` and/or `"role":"developer"` and provide less verbose answers:
-```json
-{
- "models": [
- {
- "model": "...",
- "systemMessage": "You are an expert software developer. You give concise responses."
- }
- ]
-}
+You can add more parameters to configure your model's behavior by editing `config.yaml`.
+For instance, you can set the following `chatOptions.baseSystemMessage` value to override the system prompt (the LLM messages with `"role":"system"` and/or `"role":"developer"`) and get less verbose answers:
+```yaml
+models:
+  - name: ...
+    chatOptions:
+      baseSystemMessage: "You are an expert developer. Only write concise answers."
```
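+
+You can tune sampling parameters per model in the same way. As a sketch (field names follow Continue's `config.yaml` reference; the elided model fields stay as configured earlier), lowering `temperature` makes completions more deterministic:
+
+```yaml
+models:
+  - name: Autocomplete - Scaleway
+    ...
+    defaultCompletionOptions:
+      temperature: 0.2
+      maxTokens: 8000
+```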