Merged
8 changes: 4 additions & 4 deletions docs/autocomplete/how-to-use-it.mdx
@@ -13,18 +13,18 @@ description: "Learn how to use Continue's AI-powered code autocomplete feature w

Autocomplete provides inline code suggestions as you type. To enable it, simply click the "Continue" button in the status bar at the bottom right of your IDE or ensure the "Enable Tab Autocomplete" option is checked in your IDE settings.

-### Accepting a full suggestion
+### Accepting a Full Suggestion

Accept a full suggestion by pressing `Tab`

-### Rejecting a full suggestion
+### Rejecting a Full Suggestion

Reject a full suggestion with `Esc`

-### Partially accepting a suggestion
+### Partially Accepting a Suggestion

For more granular control, use `cmd/ctrl` + `→` to accept parts of the suggestion word-by-word.

-### Forcing a suggestion (VS Code)
+### Forcing a Suggestion (VS Code)

If you want to trigger a suggestion immediately without waiting, or if you've dismissed a suggestion and want a new one, you can force it by using the keyboard shortcut **`cmd/ctrl` + `alt` + `space`**.
4 changes: 2 additions & 2 deletions docs/cli/overview.mdx
@@ -15,7 +15,7 @@ sidebarTitle: "Overview"
Automate tedious tasks.
</Tip>

-## Get started in 30 seconds
+## Get Started in 30 Seconds

Prerequisites:

@@ -226,7 +226,7 @@ cn -p "Scan the codebase for potential security vulnerabilities"

</CodeGroup>

-## Next steps
+## Next Steps

<CardGroup cols={3}>
<Card title="CLI Quickstart" href="/cli/quick-start">
2 changes: 1 addition & 1 deletion docs/customize/deep-dives/prompts.mdx
@@ -7,7 +7,7 @@ sidebarTitle: "Prompts"

Prompts are included as user messages and are especially useful as instructions for repetitive and/or complex tasks.

-## Slash commands
+## Slash Commands

By setting `invokable` to `true`, you make the markdown file a prompt, which will be available when you type <kbd>/</kbd> in Chat, Plan, and Agent mode.

8 changes: 4 additions & 4 deletions docs/customize/model-providers/more/cohere.mdx
@@ -5,7 +5,7 @@ description: "Configure Cohere's AI models with Continue, including setup for Co

Before using Cohere, visit the [Cohere dashboard](https://dashboard.cohere.com/api-keys) to create an API key.

-## Chat model
+## Chat Model

We recommend configuring **Command A** as your chat model.

@@ -39,13 +39,13 @@ We recommend configuring **Command A** as your chat model.
</Tab>
</Tabs>

-## Autocomplete model
+## Autocomplete Model

Cohere currently does not offer any autocomplete models.

[Click here](../../model-roles/autocomplete) to see a list of autocomplete model providers.

-## Embeddings model
+## Embeddings Model

We recommend configuring **embed-v4.0** as your embeddings model.

@@ -78,7 +78,7 @@ We recommend configuring **embed-v4.0** as your embeddings model.
</Tab>
</Tabs>

-## Reranking model
+## Reranking Model

We recommend configuring **rerank-v3.5** as your reranking model.

8 changes: 4 additions & 4 deletions docs/customize/model-providers/more/function-network.mdx
@@ -11,7 +11,7 @@ To get an API key, login to the Function Network Developer Platform. If you don'

</Info>

-## Chat model
+## Chat Model

Function Network supports a number of models for chat. We recommend using Llama 3.1 70b or Qwen2.5-Coder-32B-Instruct.

@@ -49,7 +49,7 @@ Function Network supports a number of models for chat. We recommend using LLama

[Click here](https://docs.function.network/models-supported/chat-and-code-completion) to see a list of chat model providers.

-## Autocomplete model
+## Autocomplete Model

Function Network supports a number of models for autocomplete. We recommend using Llama 3.1 8b or Qwen2.5-Coder-1.5B.

@@ -83,7 +83,7 @@ Function Network supports a number of models for autocomplete. We recommend usin
</Tab>
</Tabs>

-## Embeddings model
+## Embeddings Model

Function Network supports a number of models for embeddings. We recommend using baai/bge-base-en-v1.5.

@@ -118,6 +118,6 @@ Function Network supports a number of models for embeddings. We recommend using

[Click here](https://docs.function.network/models-supported/embeddings) to see a list of embeddings model providers.

-## Reranking model
+## Reranking Model

Function Network currently does not offer any reranking models.
4 changes: 2 additions & 2 deletions docs/customize/model-providers/more/morph.mdx
@@ -55,7 +55,7 @@ or
</Tab>
</Tabs>

-## Embeddings model
+## Embeddings Model

We recommend configuring **morph-embedding-v2** as your embeddings model.

@@ -90,7 +90,7 @@ We recommend configuring **morph-embedding-v2** as your embeddings model.
</Tab>
</Tabs>

-## Reranking model
+## Reranking Model

We recommend configuring **morph-rerank-v2** as your reranking model.

6 changes: 3 additions & 3 deletions docs/customize/model-providers/more/nebius.mdx
@@ -5,11 +5,11 @@ description: "Configure Nebius AI Studio with Continue to access their language

You can get an API key from the [Nebius AI Studio API keys page](https://studio.nebius.ai/settings/api-keys)

-## Availible models
+## Available Models

Available models can be found on the [Nebius AI Studio models page](https://studio.nebius.ai/models/text2text)

-## Chat model
+## Chat Model

<Tabs>
<Tab title="YAML">
@@ -41,7 +41,7 @@ Available models can be found on the [Nebius AI Studio models page](https://stud
</Tab>
</Tabs>

-## Embeddings model
+## Embeddings Model

Available models can be found on the [Nebius AI Studio embeddings page](https://studio.nebius.ai/models/embedding)

4 changes: 2 additions & 2 deletions docs/customize/model-providers/more/ovhcloud.mdx
@@ -12,7 +12,7 @@ OVHcloud AI Endpoints is a serverless inference API that provides access to a cu
page](https://www.ovhcloud.com/en/public-cloud/ai-endpoints/).
</Info>

-## Chat model
+## Chat Model

We recommend configuring **Qwen2.5-Coder-32B-Instruct** as your chat model.
Check our [catalog](https://endpoints.ai.cloud.ovh.net/catalog) to see all of our models hosted on AI Endpoints.
@@ -47,7 +47,7 @@ Check our [catalog](https://endpoints.ai.cloud.ovh.net/catalog) to see all of ou
</Tab>
</Tabs>

-## Embeddings model
+## Embeddings Model

We recommend configuring **bge-multilingual-gemma2** as your embeddings model.

6 changes: 3 additions & 3 deletions docs/customize/model-providers/more/scaleway.mdx
@@ -12,7 +12,7 @@ description: "Configure Scaleway Generative APIs with Continue to access AI mode
here](https://www.scaleway.com/en/docs/ai-data/generative-apis/quickstart/).
</Info>

-## Chat model
+## Chat Model

We recommend configuring **Qwen2.5-Coder-32B-Instruct** as your chat model.
[Click here](https://www.scaleway.com/en/docs/ai-data/generative-apis/reference-content/supported-models/) to see the list of available chat models.
@@ -47,13 +47,13 @@ We recommend configuring **Qwen2.5-Coder-32B-Instruct** as your chat model.
</Tab>
</Tabs>

-## Autocomplete model
+## Autocomplete Model

Scaleway currently does not offer any autocomplete models.

[Click here](../../model-roles/autocomplete) to see a list of autocomplete model providers.

-## Embeddings model
+## Embeddings Model

We recommend configuring **BGE-Multilingual-Gemma2** as your embeddings model.

8 changes: 4 additions & 4 deletions docs/customize/model-providers/more/siliconflow.mdx
@@ -8,7 +8,7 @@ description: "Configure SiliconFlow with Continue to access their AI model platf
Cloud](https://cloud.siliconflow.cn/account/ak).
</Info>

-## Chat model
+## Chat Model

We recommend configuring **Qwen/Qwen2.5-Coder-32B-Instruct** as your chat model.

@@ -44,7 +44,7 @@ We recommend configuring **Qwen/Qwen2.5-Coder-32B-Instruct** as your chat model.
</Tab>
</Tabs>

-## Autocomplete model
+## Autocomplete Model

We recommend configuring **Qwen/Qwen2.5-Coder-7B-Instruct** as your autocomplete model.

@@ -86,10 +86,10 @@ We recommend configuring **Qwen/Qwen2.5-Coder-7B-Instruct** as your autocomplete
</Tab>
</Tabs>

-## Embeddings model
+## Embeddings Model

SiliconFlow provides some embeddings models. [Click here](https://siliconflow.cn/models) to see a list of embeddings models.

-## Reranking model
+## Reranking Model

SiliconFlow provides some reranking models. [Click here](https://siliconflow.cn/models) to see a list of reranking models.
8 changes: 4 additions & 4 deletions docs/customize/model-providers/more/vllm.mdx
@@ -9,7 +9,7 @@ vLLM is an open-source library for fast LLM inference which typically is used to
vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct
```

-## Chat model
+## Chat Model

We recommend configuring **Llama3.1 8B** as your chat model.

@@ -43,7 +43,7 @@ We recommend configuring **Llama3.1 8B** as your chat model.
</Tab>
</Tabs>

-## Autocomplete model
+## Autocomplete Model

We recommend configuring **Qwen2.5-Coder 1.5B** as your autocomplete model.

@@ -77,7 +77,7 @@ We recommend configuring **Qwen2.5-Coder 1.5B** as your autocomplete model.
</Tab>
</Tabs>

-## Embeddings model
+## Embeddings Model

We recommend configuring **Nomic Embed Text** as your embeddings model.

@@ -110,7 +110,7 @@ We recommend configuring **Nomic Embed Text** as your embeddings model.
</Tab>
</Tabs>

-## Reranking model
+## Reranking Model

Continue automatically handles vLLM's response format (which uses `results` instead of `data`).

2 changes: 1 addition & 1 deletion docs/customize/model-roles/chat.mdx
@@ -157,7 +157,7 @@ If you prefer to use a model from [Google](../model-providers/top-level/gemini),
</Tab>
</Tabs>
-## Local, offline experience
+## Local, Offline Experience
For the best local, offline Chat experience, you will want to use a model that is large but fast enough on your machine.
2 changes: 1 addition & 1 deletion docs/faqs.mdx
@@ -48,7 +48,7 @@ If you are using VS Code and require requests to be made through a proxy, you ar

Continue can be used in [code-server](https://coder.com/), but if you are running across an error in the logs that includes "This is likely because the editor is not running in a secure context", please see [their documentation on securely exposing code-server](https://coder.com/docs/code-server/latest/guide#expose-code-server).

-## Changes to configs not showing in VS Code
+## Changes to Configs Not Showing in VS Code

If you've made changes to a config (adding, modifying, or removing it) but the changes aren't appearing in the Continue extension in VS Code, try reloading the VS Code window:

2 changes: 1 addition & 1 deletion docs/guides/cli.mdx
@@ -10,7 +10,7 @@ It provides a battle-tested agent loop so you can simply plug in your model, rul

![cn](/images/cn-demo.gif)

-## Quick start
+## Quick Start

<Info>
Make sure you have [Node.js 18 or higher