From 4762623549d4e44e7c8c8aa34edb0cd2ab7d12e4 Mon Sep 17 00:00:00 2001
From: Nate
Date: Wed, 14 Jan 2026 19:00:37 -0800
Subject: [PATCH] docs: fix title case in documentation headers
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Standardize header capitalization across documentation to follow title
case conventions (capitalize major words like nouns, verbs, adjectives;
keep minor words lowercase).

Changes include:

- Model provider docs: "Chat model" → "Chat Model", etc.
- Plan mode docs: "Understanding Plan mode" → "Understanding Plan Mode"
- CLI docs: "Quick start" → "Quick Start", "Next steps" → "Next Steps"
- Guide docs: various headers fixed for consistency
- Also fixed typo: "Availible models" → "Available Models"

Co-Authored-By: Claude Opus 4.5
---
 docs/autocomplete/how-to-use-it.mdx           |  8 ++---
 docs/cli/overview.mdx                         |  4 +--
 docs/customize/deep-dives/prompts.mdx         |  2 +-
 .../customize/model-providers/more/cohere.mdx |  8 ++---
 .../model-providers/more/function-network.mdx |  8 ++---
 docs/customize/model-providers/more/morph.mdx |  4 +--
 .../customize/model-providers/more/nebius.mdx |  6 ++--
 .../model-providers/more/ovhcloud.mdx         |  4 +--
 .../model-providers/more/scaleway.mdx         |  6 ++--
 .../model-providers/more/siliconflow.mdx      |  8 ++---
 docs/customize/model-providers/more/vllm.mdx  |  8 ++---
 docs/customize/model-roles/chat.mdx           |  2 +-
 docs/faqs.mdx                                 |  2 +-
 docs/guides/cli.mdx                           |  2 +-
 .../codebase-documentation-awareness.mdx      | 34 +++++++++----------
 docs/ide-extensions/plan/how-it-works.mdx     | 24 ++++++-------
 docs/ide-extensions/plan/quick-start.mdx      | 10 +++---
 docs/reference.mdx                            |  2 +-
 docs/reference/continue-mcp.mdx               |  2 +-
 docs/reference/json-reference.mdx             |  2 +-
 docs/reference/yaml-migration.mdx             |  6 ++--
 21 files changed, 76 insertions(+), 76 deletions(-)

diff --git a/docs/autocomplete/how-to-use-it.mdx b/docs/autocomplete/how-to-use-it.mdx
index 36a54ff6d19..93dbe6b75a8 100644
--- a/docs/autocomplete/how-to-use-it.mdx
+++ b/docs/autocomplete/how-to-use-it.mdx
@@ -13,18 +13,18 @@ description: "Learn how to use Continue's AI-powered code autocomplete feature w
 
 Autocomplete provides inline code suggestions as you type. To enable it, simply click the "Continue" button in the status bar at the bottom right of your IDE or ensure the "Enable Tab Autocomplete" option is checked in your IDE settings.
 
-### Accepting a full suggestion
+### Accepting a Full Suggestion
 
 Accept a full suggestion by pressing `Tab`
 
-### Rejecting a full suggestion
+### Rejecting a Full Suggestion
 
 Reject a full suggestion with `Esc`
 
-### Partially accepting a suggestion
+### Partially Accepting a Suggestion
 
 For more granular control, use `cmd/ctrl` + `→` to accept parts of the suggestion word-by-word.
 
-### Forcing a suggestion (VS Code)
+### Forcing a Suggestion (VS Code)
 
 If you want to trigger a suggestion immediately without waiting, or if you've dismissed a suggestion and want a new one, you can force it by using the keyboard shortcut **`cmd/ctrl` + `alt` + `space`**.
diff --git a/docs/cli/overview.mdx b/docs/cli/overview.mdx
index f2063b60587..e9f039961e7 100644
--- a/docs/cli/overview.mdx
+++ b/docs/cli/overview.mdx
@@ -15,7 +15,7 @@ sidebarTitle: "Overview"
 
 Automate tedious tasks.
 
-## Get started in 30 seconds
+## Get Started in 30 Seconds
 
 Prerequisites:
 
@@ -226,7 +226,7 @@ cn -p "Scan the codebase for potential security vulnerabilities"
 
 
 
-## Next steps
+## Next Steps
 
 
 
diff --git a/docs/customize/deep-dives/prompts.mdx b/docs/customize/deep-dives/prompts.mdx
index 0e9e03bbe8b..b260a42d18a 100644
--- a/docs/customize/deep-dives/prompts.mdx
+++ b/docs/customize/deep-dives/prompts.mdx
@@ -7,7 +7,7 @@ sidebarTitle: "Prompts"
 
 Prompts are included as user messages and are especially useful as instructions for repetitive and/or complex tasks.
 
-## Slash commands
+## Slash Commands
 
 By setting `invokable` to `true`, you make the markdown file a prompt, which will be available when you type / in Chat, Plan, and Agent mode.
 
diff --git a/docs/customize/model-providers/more/cohere.mdx b/docs/customize/model-providers/more/cohere.mdx
index d34bb4f9735..a46a0e25116 100644
--- a/docs/customize/model-providers/more/cohere.mdx
+++ b/docs/customize/model-providers/more/cohere.mdx
@@ -5,7 +5,7 @@ description: "Configure Cohere's AI models with Continue, including setup for Co
 
 Before using Cohere, visit the [Cohere dashboard](https://dashboard.cohere.com/api-keys) to create an API key.
 
-## Chat model
+## Chat Model
 
 We recommend configuring **Command A** as your chat model.
 
@@ -39,13 +39,13 @@ We recommend configuring **Command A** as your chat model.
 
 
 
-## Autocomplete model
+## Autocomplete Model
 
 Cohere currently does not offer any autocomplete models.
 
 [Click here](../../model-roles/autocomplete) to see a list of autocomplete model providers.
 
-## Embeddings model
+## Embeddings Model
 
 We recommend configuring **embed-v4.0** as your embeddings model.
 
@@ -78,7 +78,7 @@ We recommend configuring **embed-v4.0** as your embeddings model.
 
 
 
-## Reranking model
+## Reranking Model
 
 We recommend configuring **rerank-v3.5** as your reranking model.
diff --git a/docs/customize/model-providers/more/function-network.mdx b/docs/customize/model-providers/more/function-network.mdx
index 07ae655779b..f35aca0aefc 100644
--- a/docs/customize/model-providers/more/function-network.mdx
+++ b/docs/customize/model-providers/more/function-network.mdx
@@ -11,7 +11,7 @@ To get an API key, login to the Function Network Developer Platform. If you don'
 
 
 
-## Chat model
+## Chat Model
 
 Function Network supports a number of models for chat. We recommend using LLama 3.1 70b or Qwen2.5-Coder-32B-Instruct.
 
@@ -49,7 +49,7 @@ Function Network supports a number of models for chat. We recommend using LLama
 
 [Click here](https://docs.function.network/models-supported/chat-and-code-completion) to see a list of chat model providers.
 
-## Autocomplete model
+## Autocomplete Model
 
 Function Network supports a number of models for autocomplete. We recommend using Llama 3.1 8b or Qwen2.5-Coder-1.5B.
 
@@ -83,7 +83,7 @@ Function Network supports a number of models for autocomplete. We recommend usin
 
 
 
-## Embeddings model
+## Embeddings Model
 
 Function Network supports a number of models for embeddings. We recommend using baai/bge-base-en-v1.5.
 
@@ -118,6 +118,6 @@ Function Network supports a number of models for embeddings. We recommend using
 
 [Click here](https://docs.function.network/models-supported/embeddings) to see a list of embeddings model providers.
 
-## Reranking model
+## Reranking Model
 
 Function Network currently does not offer any reranking models.
diff --git a/docs/customize/model-providers/more/morph.mdx b/docs/customize/model-providers/more/morph.mdx
index b6bf8bb5749..8fc12f7b437 100644
--- a/docs/customize/model-providers/more/morph.mdx
+++ b/docs/customize/model-providers/more/morph.mdx
@@ -55,7 +55,7 @@ or
 
 
 
-## Embeddings model
+## Embeddings Model
 
 We recommend configuring **morph-embedding-v2** as your embeddings model.
 
@@ -90,7 +90,7 @@ We recommend configuring **morph-embedding-v2** as your embeddings model.
 
 
 
-## Reranking model
+## Reranking Model
 
 We recommend configuring **morph-rerank-v2** as your reranking model.
diff --git a/docs/customize/model-providers/more/nebius.mdx b/docs/customize/model-providers/more/nebius.mdx
index eb100bf5f71..a678c551246 100644
--- a/docs/customize/model-providers/more/nebius.mdx
+++ b/docs/customize/model-providers/more/nebius.mdx
@@ -5,11 +5,11 @@ description: "Configure Nebius AI Studio with Continue to access their language
 
 You can get an API key from the [Nebius AI Studio API keys page](https://studio.nebius.ai/settings/api-keys)
 
-## Availible models
+## Available Models
 
 Available models can be found on the [Nebius AI Studio models page](https://studio.nebius.ai/models/text2text)
 
-## Chat model
+## Chat Model
 
 
@@ -41,7 +41,7 @@
 
 
 
-## Embeddings model
+## Embeddings Model
 
 Available models can be found on the [Nebius AI Studio embeddings page](https://studio.nebius.ai/models/embedding)
diff --git a/docs/customize/model-providers/more/ovhcloud.mdx b/docs/customize/model-providers/more/ovhcloud.mdx
index 4335344344f..422568871b8 100644
--- a/docs/customize/model-providers/more/ovhcloud.mdx
+++ b/docs/customize/model-providers/more/ovhcloud.mdx
@@ -12,7 +12,7 @@ OVHcloud AI Endpoints is a serverless inference API that provides access to a cu
 page](https://www.ovhcloud.com/en/public-cloud/ai-endpoints/).
 
 
-## Chat model
+## Chat Model
 
 We recommend configuring **Qwen2.5-Coder-32B-Instruct** as your chat model.
 Check our [catalog](https://endpoints.ai.cloud.ovh.net/catalog) to see all of our models hsoted on AI Endpoints.
@@ -47,7 +47,7 @@ Check our [catalog](https://endpoints.ai.cloud.ovh.net/catalog) to see all of ou
 
 
 
-## Embeddings model
+## Embeddings Model
 
 We recommend configuring **bge-multilingual-gemma2** as your embeddings model.
diff --git a/docs/customize/model-providers/more/scaleway.mdx b/docs/customize/model-providers/more/scaleway.mdx
index e6c605d4d03..0e054ae6af4 100644
--- a/docs/customize/model-providers/more/scaleway.mdx
+++ b/docs/customize/model-providers/more/scaleway.mdx
@@ -12,7 +12,7 @@ description: "Configure Scaleway Generative APIs with Continue to access AI mode
 here](https://www.scaleway.com/en/docs/ai-data/generative-apis/quickstart/).
 
 
-## Chat model
+## Chat Model
 
 We recommend configuring **Qwen2.5-Coder-32B-Instruct** as your chat model.
 [Click here](https://www.scaleway.com/en/docs/ai-data/generative-apis/reference-content/supported-models/) to see the list of available chat models.
@@ -47,13 +47,13 @@ We recommend configuring **Qwen2.5-Coder-32B-Instruct** as your chat model.
 
 
 
-## Autocomplete model
+## Autocomplete Model
 
 Scaleway currently does not offer any autocomplete models.
 
 [Click here](../../model-roles/autocomplete) to see a list of autocomplete model providers.
 
-## Embeddings model
+## Embeddings Model
 
 We recommend configuring **BGE-Multilingual-Gemma2** as your embeddings model.
diff --git a/docs/customize/model-providers/more/siliconflow.mdx b/docs/customize/model-providers/more/siliconflow.mdx
index f75ce2a9c9a..952d46d6407 100644
--- a/docs/customize/model-providers/more/siliconflow.mdx
+++ b/docs/customize/model-providers/more/siliconflow.mdx
@@ -8,7 +8,7 @@ description: "Configure SiliconFlow with Continue to access their AI model platf
 Cloud](https://cloud.siliconflow.cn/account/ak).
 
 
-## Chat model
+## Chat Model
 
 We recommend configuring **Qwen/Qwen2.5-Coder-32B-Instruct** as your chat model.
 
@@ -44,7 +44,7 @@ We recommend configuring **Qwen/Qwen2.5-Coder-32B-Instruct** as your chat model.
 
 
 
-## Autocomplete model
+## Autocomplete Model
 
 We recommend configuring **Qwen/Qwen2.5-Coder-7B-Instruct** as your autocomplete model.
 
@@ -86,10 +86,10 @@ We recommend configuring **Qwen/Qwen2.5-Coder-7B-Instruct** as your autocomplete
 
 
 
-## Embeddings model
+## Embeddings Model
 
 SiliconFlow provide some embeddings models. [Click here](https://siliconflow.cn/models) to see a list of embeddings models.
 
-## Reranking model
+## Reranking Model
 
 SiliconFlow provide some reranking models. [Click here](https://siliconflow.cn/models) to see a list of reranking models.
diff --git a/docs/customize/model-providers/more/vllm.mdx b/docs/customize/model-providers/more/vllm.mdx
index 57c99dd3910..3d15982dbb7 100644
--- a/docs/customize/model-providers/more/vllm.mdx
+++ b/docs/customize/model-providers/more/vllm.mdx
@@ -9,7 +9,7 @@ vLLM is an open-source library for fast LLM inference which typically is used to
 vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct
 ```
 
-## Chat model
+## Chat Model
 
 We recommend configuring **Llama3.1 8B** as your chat model.
 
@@ -43,7 +43,7 @@ We recommend configuring **Llama3.1 8B** as your chat model.
 
 
 
-## Autocomplete model
+## Autocomplete Model
 
 We recommend configuring **Qwen2.5-Coder 1.5B** as your autocomplete model.
 
@@ -77,7 +77,7 @@ We recommend configuring **Qwen2.5-Coder 1.5B** as your autocomplete model.
 
 
 
-## Embeddings model
+## Embeddings Model
 
 We recommend configuring **Nomic Embed Text** as your embeddings model.
 
@@ -110,7 +110,7 @@ We recommend configuring **Nomic Embed Text** as your embeddings model.
 
 
 
-## Reranking model
+## Reranking Model
 
 Continue automatically handles vLLM's response format (which uses `results` instead of `data`).
diff --git a/docs/customize/model-roles/chat.mdx b/docs/customize/model-roles/chat.mdx
index ba45fee751a..5465791df9e 100644
--- a/docs/customize/model-roles/chat.mdx
+++ b/docs/customize/model-roles/chat.mdx
@@ -157,7 +157,7 @@ If you prefer to use a model from [Google](../model-providers/top-level/gemini),
 
 
 
-## Local, offline experience
+## Local, Offline Experience
 
 For the best local, offline Chat experience, you will want to use a model that is large but fast enough on your machine.
diff --git a/docs/faqs.mdx b/docs/faqs.mdx
index 23a3cf22349..df5daf55cf3 100644
--- a/docs/faqs.mdx
+++ b/docs/faqs.mdx
@@ -48,7 +48,7 @@ If you are using VS Code and require requests to be made through a proxy, you ar
 
 Continue can be used in [code-server](https://coder.com/), but if you are running across an error in the logs that includes "This is likely because the editor is not running in a secure context", please see [their documentation on securely exposing code-server](https://coder.com/docs/code-server/latest/guide#expose-code-server).
 
-## Changes to configs not showing in VS Code
+## Changes to Configs Not Showing in VS Code
 
 If you've made changes to a config (adding, modifying, or removing it) but the changes aren't appearing in the Continue extension in VS Code, try reloading the VS Code window:
diff --git a/docs/guides/cli.mdx b/docs/guides/cli.mdx
index 2d17481fca7..3e00ec6c0b3 100644
--- a/docs/guides/cli.mdx
+++ b/docs/guides/cli.mdx
@@ -10,7 +10,7 @@ It provides a battle-tested agent loop so you can simply plug in your model, rul
 
 ![cn](/images/cn-demo.gif)
 
-## Quick start
+## Quick Start
 
 Make sure you have [Node.js 18 or higher
diff --git a/docs/guides/codebase-documentation-awareness.mdx b/docs/guides/codebase-documentation-awareness.mdx
index 609970263dd..4a73a44f0f9 100644
--- a/docs/guides/codebase-documentation-awareness.mdx
+++ b/docs/guides/codebase-documentation-awareness.mdx
@@ -7,11 +7,11 @@ sidebarTitle: Codebase and Documentation Awareness
 
 Agent mode works best when it understands the context of your project. This guide shows you how to give agent mode access to codebases and documentation, making it more helpful and accurate.
 
-## Make agent mode aware of your open codebase
+## Make Agent Mode Aware of Your Open Codebase
 
 When agent mode understands your current codebase, it can provide more relevant suggestions and answers.
 
-### Let agent mode explore the codebase using tools
+### Let Agent Mode Explore the Codebase Using Tools
 
 Agent mode can use built-in tools to navigate and understand your code:
 
@@ -19,7 +19,7 @@ Agent mode can use built-in tools to navigate and understand your code:
 2. **Code search**: Use search to find relevant code snippets
 3. **Git integration**: Access commit history and understand code evolution
 
-### Create rules to help the agent understand your codebase
+### Create Rules to Help the Agent Understand Your Codebase
 
 Rules guide agent mode's behavior and understanding.
 Place markdown files in `.continue/rules` in your project to provide context:
@@ -46,15 +46,15 @@ This is a React application with:
 
 Learn more about [rules configuration](/customize/deep-dives/rules).
 
-## Make agent mode aware of other codebases
+## Make Agent Mode Aware of Other Codebases
 
 Sometimes you need agent mode to understand code beyond your current project.
 
-### Public codebases
+### Public Codebases
 
 For open-source projects and public repositories, you have several options:
 
-#### Rules with hyperlinks
+#### Rules with Hyperlinks
 
 Create rules that point to external codebases:
 
@@ -94,25 +94,25 @@ Once configured, agent mode can explore repositories like:
 
 - "Explore the React repository structure"
 - "Find how authentication is implemented in NextAuth.js"
 
-### Internal codebases
+### Internal Codebases
 
 For private and internal repositories, you need additional setup:
 
-#### Custom MCP servers
+#### Custom MCP Servers
 
 Create an MCP server that has access to your internal repositories.
 
-#### Custom code RAG
+#### Custom Code RAG
 
 For faster retrieval and lower costs with very large internal codebases, consider implementing a [custom code RAG](/guides/custom-code-rag) system. This is an advanced approach that requires more setup but can provide performance benefits at scale.
 
-## Make agent mode aware of relevant documentation
+## Make Agent Mode Aware of Relevant Documentation
 
 Documentation provides crucial context for agent mode to understand APIs, frameworks, and best practices.
 
-### Public documentation
+### Public Documentation
 
-#### Rules with documentation links
+#### Rules with Documentation Links
 
 Guide agent mode to relevant documentation:
 
@@ -137,11 +137,11 @@ Agent mode can then answer questions like:
 
 - "How do I use React hooks?"
 - "What's the syntax for Tailwind CSS animations?"
 
-### Internal documentation
+### Internal Documentation
 
 For private documentation and wikis:
 
-#### Rules with internal links
+#### Rules with Internal Links
 
 Create rules that reference internal resources:
 
@@ -157,11 +157,11 @@ Our team documentation is available at:
 
 Always follow our internal standards when suggesting code.
 ```
 
-#### Custom MCP servers for docs
+#### Custom MCP Servers for Docs
 
 Create an MCP server that accesses your internal documentation.
 
-## Migrating from deprecated context providers
+## Migrating from Deprecated Context Providers
 
 If you were previously using the `@Codebase` or `@Docs` context providers, here's how to migrate to the new approach:
 
@@ -183,7 +183,7 @@ The `@Docs` context provider has been deprecated. Instead:
 
 The new approach provides better integration with Continue's Agent mode features and more intelligent context selection.
 
-## Next steps
+## Next Steps
 
 - Learn more about [MCP servers](/reference/continue-mcp)
 - Explore [rules configuration](/customize/deep-dives/rules)
diff --git a/docs/ide-extensions/plan/how-it-works.mdx b/docs/ide-extensions/plan/how-it-works.mdx
index 8555e68ffb2..2691ae49ae4 100644
--- a/docs/ide-extensions/plan/how-it-works.mdx
+++ b/docs/ide-extensions/plan/how-it-works.mdx
@@ -3,11 +3,11 @@ title: "How Plan Works"
 description: "Plan mode provides a restricted environment with read-only tools, enabling safe exploration and planning without making changes to your codebase."
 ---
 
-## Understanding Plan mode
+## Understanding Plan Mode
 
 Plan mode is designed to help you understand code and construct plans before making changes. It sits between Chat mode (no tools) and Agent mode (all tools), providing a middle ground where you can explore and analyze without risk.
 
-### The key difference: Read-only tools
+### The Key Difference: Read-Only Tools
 
 While Agent mode has access to all tools including those that modify files, Plan mode restricts access to only read-only tools.
 This ensures that:
@@ -16,7 +16,7 @@ While Agent mode has access to all tools including those that modify files, Plan
 - All exploration is safe and non-destructive
 - You can confidently investigate without unintended consequences
 
-## How Plan mode works
+## How Plan Mode Works
 
 When you select Plan mode:
 
@@ -25,7 +25,7 @@
 3. The model can use these tools to explore and analyze your codebase
 4. When you're ready to implement changes, you switch to Agent mode
 
-### Available tools in Plan mode
+### Available Tools in Plan Mode
 
 Plan mode includes these read-only built-in tools:
 
@@ -41,7 +41,7 @@ Plan mode includes these read-only built-in tools:
 - **View subdirectory** (`view_subdirectory`): Get a detailed view of a specific directory
 - **Codebase tool** (`codebase_tool`): Advanced codebase analysis capabilities
 
-### MCP tools support
+### MCP Tools Support
 
 In addition to built-in read-only tools, Plan mode also supports all MCP (Model Context Protocol) tools. This allows integration with external services that provide additional context or analysis capabilities without modifying your local environment.
 
@@ -52,7 +52,7 @@
 MCP tools can perform before using them in Plan mode.
 
 
-## The planning workflow
+## The Planning Workflow
 
 A typical Plan mode workflow follows these steps:
 
@@ -62,7 +62,7 @@ A typical Plan mode workflow follows these steps:
 4. **Verification**: Review the plan and ensure it addresses all requirements
 5. **Execution**: Switch to Agent mode to implement the plan
 
-### Example: Planning a refactor
+### Example: Planning a Refactor
 
 ```
 User: Help me plan a refactor to extract the authentication logic into a separate module
@@ -75,7 +75,7 @@ Plan mode:
 5. Suggests: "Switch to Agent mode to implement this plan"
 ```
 
-## System message and behavior
+## System Message and Behavior
 
 Plan mode uses a dedicated system message that:
 
@@ -90,9 +90,9 @@ Plan mode uses a dedicated system message that:
 code](https://github.com/continuedev/continue/blob/main/core/llm/defaultSystemMessages.ts).
 
 
-## When to use Plan mode vs other modes
+## When to Use Plan Mode vs Other Modes
 
-### Use Plan mode when:
+### Use Plan Mode When:
 
 - Exploring unfamiliar codebases
 - Planning complex refactors or features
@@ -100,13 +100,13 @@ Plan mode uses a dedicated system message that:
 - Reviewing code architecture
 - Creating implementation strategies
 
-### Use Chat mode when:
+### Use Chat Mode When:
 
 - Having discussions without needing file access
 - Asking general programming questions
 - Getting explanations without exploring code
 
-### Use Agent mode when:
+### Use Agent Mode When:
 
 - Ready to implement changes
 - Need to create or modify files
diff --git a/docs/ide-extensions/plan/quick-start.mdx b/docs/ide-extensions/plan/quick-start.mdx
index 864a33cc942..0c42df8e75a 100644
--- a/docs/ide-extensions/plan/quick-start.mdx
+++ b/docs/ide-extensions/plan/quick-start.mdx
@@ -3,7 +3,7 @@ title: "Quick Start"
 description: "Get started with Continue's Plan mode to safely explore codebases, plan refactors, debug issues, and develop implementation strategies without modifying code before switching to execution"
 ---
 
-## How to use it
+## How to Use It
 
 Plan mode provides a safe environment for understanding and constructing plans without making changes. It equips the Chat model with read-only tools, allowing you to explore, analyze, and plan modifications before executing them.
 
@@ -23,7 +23,7 @@ You can switch to `Plan` in the mode selector below the chat input box.
 
 Plan mode lives within the same interface as [Chat mode](/ide-extensions/chat/how-it-works) and [Agent mode](/ide-extensions/agent/how-it-works), so the same [input](/ide-extensions/chat/quick-start#how-to-start-a-conversation) is used to send messages and you can still use the same manual methods of providing context, such as [`@` context providers](/ide-extensions/chat/quick-start#how-to-use--for-additional-context) or adding [highlighted code from the editor](/ide-extensions/chat/quick-start#how-to-include-code-context).
 
-#### What makes Plan different
+#### What Makes Plan Different
 
 Unlike Agent mode, Plan mode:
 
@@ -32,7 +32,7 @@ Unlike Agent mode, Plan mode:
 - Focuses on understanding and planning rather than execution
 - Provides a safe environment for exploration
 
-### Common use cases
+### Common Use Cases
 
 Plan mode is ideal for:
 
@@ -42,7 +42,7 @@ Plan mode is ideal for:
 - **Architecture review**: Understanding system design and dependencies
 - **Pre-implementation planning**: Thinking through changes before executing
 
-### Example workflow
+### Example Workflow
 
 1. **Start in Plan mode** to explore and understand the task
 2. **Develop a plan** with the model's help
 
@@ -54,7 +54,7 @@ For example, you might say:
 
 Plan mode will analyze the existing code, understand the current implementation, and help you create a detailed plan—all without making any changes.
 
-## Switching to execution
+## Switching to Execution
 
 When you're ready to implement your plan, simply switch to Agent mode using the mode selector or keyboard shortcut (`Cmd/Ctrl + .`). The conversation context carries over, so Agent mode can immediately start implementing the plan you developed.
diff --git a/docs/reference.mdx b/docs/reference.mdx
index 23a08cb4811..0e8cef73735 100644
--- a/docs/reference.mdx
+++ b/docs/reference.mdx
@@ -446,7 +446,7 @@ data:
 - chatInteraction
 ```
 
-## Using YAML anchors to avoid config duplication
+## Using YAML Anchors to Avoid Config Duplication
 
 You can also use node anchors to avoid duplication of properties. To do so, adding the YAML version header `%YAML 1.1` is needed, here's an example of a `config.yaml` configuration file using anchors:
diff --git a/docs/reference/continue-mcp.mdx b/docs/reference/continue-mcp.mdx
index 6481470d531..9832083a89d 100644
--- a/docs/reference/continue-mcp.mdx
+++ b/docs/reference/continue-mcp.mdx
@@ -6,7 +6,7 @@ keywords: [mcp, documentation, search, mintlify, reference]
 
 The Continue Documentation MCP Server allows you to search and retrieve information from the Continue documentation directly within your agent conversations.
 
-## Set up
+## Set Up
 
 ### Configure Continue
diff --git a/docs/reference/json-reference.mdx b/docs/reference/json-reference.mdx
index e490968ffc1..85ae97e4b42 100644
--- a/docs/reference/json-reference.mdx
+++ b/docs/reference/json-reference.mdx
@@ -508,7 +508,7 @@ config.json
 }
 ```
 
-## Fully deprecated settings
+## Fully Deprecated Settings
 
 Some deprecated `config.json` settings are no longer stored in config and have been moved to be editable through the user settings. If found in `config.json`, they will be auto-migrated to User Settings and removed from `config.json`.
diff --git a/docs/reference/yaml-migration.mdx b/docs/reference/yaml-migration.mdx
index 302237faf68..d7c5eae4762 100644
--- a/docs/reference/yaml-migration.mdx
+++ b/docs/reference/yaml-migration.mdx
@@ -8,7 +8,7 @@ See also
 
 - [Intro to YAML](https://yaml.org/)
 - [YAML Continue Config Reference](/reference)
 
-## Create YAML file
+## Create YAML File
 
 Create a `config.yaml` file in your Continue Global Directory (`~/.continue` on Mac, `%USERPROFILE%\.continue`) alongside your current config.json file. If a `config.yaml` file is present, it will be loaded instead of config.json.
 
@@ -323,7 +323,7 @@ mcpServers:
 
 ---
 
-## Deprecated configuration options
+## Deprecated Configuration Options
 
 Some deprecated config.json settings are no longer stored in config and have been moved to be editable through the user settings (Gear Icon). If found in config.json, they will be auto-migrated to User Settings and removed from config.json.
 
@@ -353,6 +353,6 @@ The following top-level fields from config.json have been deprecated. Most UI-re
 - `experimental`
 - `userToken`
 
-## New Configuration options
+## New Configuration Options
 
 The YAML configuration format offers new configuration options not available in the JSON format. See the [YAML Config Reference](/reference) for more information.
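
For reviewers who want to check the convention rather than each hunk by hand: the title-case rule described in the commit message (capitalize major words; keep minor words lowercase) can be sketched as a small script. This helper is hypothetical, not part of this patch or the Continue repository, and the minor-word list is an assumption:

```python
# Hypothetical sketch of the title-case rule from the commit message.
# Assumption: this minor-word list approximates the convention used;
# it is not taken from the Continue style guide.
MINOR_WORDS = {"a", "an", "the", "and", "or", "but", "for", "nor",
               "on", "at", "to", "from", "by", "of", "in", "vs", "with"}

def title_case(header: str) -> str:
    out = []
    for i, word in enumerate(header.split(" ")):
        # Leave words with internal capitals (e.g. "MCP", "YAML", "VS") alone.
        if any(c.isupper() for c in word[1:]):
            out.append(word)
        # Minor words stay lowercase unless they start the header.
        elif i != 0 and word.lower() in MINOR_WORDS:
            out.append(word.lower())
        else:
            out.append(word[:1].upper() + word[1:])
    return " ".join(out)
```

Applied to the old header text, this reproduces new headers from the hunks above, e.g. `title_case("Chat model")` gives `"Chat Model"` and `title_case("Next steps")` gives `"Next Steps"`.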