2 changes: 1 addition & 1 deletion docs/cody/capabilities/chat.mdx
@@ -66,7 +66,7 @@ Cody chat can run offline with Ollama. The offline mode does not require you to

![offline-cody-with-ollama](https://storage.googleapis.com/sourcegraph-assets/Docs/cody-offline-ollama.jpg)

You can still switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, Mixtral, etc.
You can still switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, etc.

## LLM selection

24 changes: 9 additions & 15 deletions docs/cody/capabilities/supported-models.mdx
@@ -8,25 +8,20 @@ Cody supports a variety of cutting-edge large language models for use in chat an

| **Provider** | **Model** | **Free** | **Pro** | **Enterprise** | | | | |
| :------------ | :-------------------------------------------------------------------------------------------------------------------------------------------- | :----------- | :----------- | :------------- | --- | --- | --- | --- |
| OpenAI | [gpt-3.5 turbo](https://platform.openai.com/docs/models/gpt-3-5-turbo) | ✅ | ✅ | ✅ | | | | |
| OpenAI | [gpt-4](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=to%20Apr%202023-,gpt%2D4,-Currently%20points%20to) | - | - | ✅ | | | | |
| OpenAI | [gpt-4 turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=TRAINING%20DATA-,gpt%2D4%2D0125%2Dpreview,-New%20GPT%2D4) | - | ✅ | ✅ | | | | |
| OpenAI | [gpt-4o](https://platform.openai.com/docs/models/gpt-4o) | - | ✅ | ✅ | | | | |
| Anthropic | [claude-3 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
| OpenAI | [gpt-4o](https://platform.openai.com/docs/models#gpt-4o) | - | ✅ | ✅ | | | | |
| OpenAI | [gpt-4o-mini](https://platform.openai.com/docs/models#gpt-4o-mini) | ✅ | ✅ | ✅ | | | | |
| OpenAI | [o3-mini-medium](https://openai.com/index/openai-o3-mini/) (experimental) | ✅ | ✅ | ✅ | | | | |
| OpenAI | [o3-mini-high](https://openai.com/index/openai-o3-mini/) (experimental) | - | - | ✅ | | | | |
| OpenAI | [o1](https://platform.openai.com/docs/models#o1) | - | ✅ | ✅ | | | | |
| Anthropic | [claude-3.5 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
| Anthropic | [claude-3 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
| Anthropic | [claude-3.5 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
| Anthropic | [claude-3.5 Sonnet (New)](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
| Anthropic | [claude-3 Opus](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | ✅ | ✅ | | | | |
| Mistral | [mixtral 8x7b](https://mistral.ai/technology/#models:~:text=of%20use%20cases.-,Mixtral%208x7B,-Currently%20the%20best) | ✅ | ✅ | - | | | | |
| Mistral | [mixtral 8x22b](https://mistral.ai/technology/#models:~:text=of%20use%20cases.-,Mixtral%208x7B,-Currently%20the%20best) | ✅ | ✅ | - | | | | |
| Ollama | [variety](https://ollama.com/) | experimental | experimental | - | | | | |
| Google Gemini | [1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (Beta) | | | | |
| Google Gemini | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ (Beta) | | | | |
| Google Gemini | [2.0 Flash Experimental](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ | | | | |
| | | | | | | | | |
| Google Gemini | [1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (beta) | | | | |
| Google Gemini | [2.0 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ | | | | |
| Google Gemini | [2.0 Flash-Lite Preview](https://deepmind.google/technologies/gemini/flash/) (experimental) | ✅ | ✅ | ✅ | | | | |

<Callout type="note">To use Claude 3 (Opus and Sonnets) models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version.</Callout>
<Callout type="note">To use Claude 3 Sonnet models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version.</Callout>

## Autocomplete

@@ -37,7 +32,6 @@ Cody uses a set of models for autocomplete which are suited for the low latency
| Fireworks.ai | [DeepSeek-Coder-V2](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) | ✅ | ✅ | ✅ | | | | |
| Fireworks.ai | [StarCoder](https://arxiv.org/abs/2305.06161) | - | - | ✅ | | | | |
| Anthropic | [claude Instant](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | - | ✅ | | | | |
| Google Gemini (Beta) | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/) | - | - | ✅ | | | | |
| Ollama (Experimental) | [variety](https://ollama.com/) | ✅ | ✅ | - | | | | |
| | | | | | | | | |

2 changes: 1 addition & 1 deletion docs/cody/clients/install-eclipse.mdx
@@ -50,7 +50,7 @@ The chat input field has a default `@-mention` [context chips](#context-retrieva

## LLM selection

Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, Google, and Mixtral. At the same time, Cody Pro and Enterprise users can access more extended models.
Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, and Google, while Cody Pro and Enterprise users can access an extended set of models.

Local models are also available through Ollama to Cody Free and Cody Pro users. To use a model in Cody chat, simply download it and run it in Ollama.
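
If you want to confirm that a local model is up before selecting it in Cody chat, a minimal sketch like the one below (assuming Ollama's default local endpoint, `http://localhost:11434`, and at least one model already pulled, e.g. with `ollama pull llama3`) lists the locally available models and sends a test request over Ollama's HTTP API:

```typescript
// Minimal sketch: check that a locally running Ollama instance can serve a
// model before selecting it in Cody chat. Assumes Ollama's default local
// endpoint (http://localhost:11434) and at least one model already pulled.
const OLLAMA_URL = "http://localhost:11434";

async function listLocalModels(): Promise<string[]> {
  // GET /api/tags lists the models that have been downloaded locally.
  const res = await fetch(`${OLLAMA_URL}/api/tags`);
  const data = (await res.json()) as { models: { name: string }[] };
  return data.models.map((m) => m.name);
}

async function testChat(model: string): Promise<string> {
  // POST /api/chat sends a single, non-streaming chat request to the model.
  const res = await fetch(`${OLLAMA_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: "Say hello in one short sentence." }],
      stream: false,
    }),
  });
  const data = (await res.json()) as { message: { content: string } };
  return data.message.content;
}

async function main() {
  const models = await listLocalModels();
  console.log("Locally available models:", models);
  if (models.length > 0) {
    console.log(await testChat(models[0]));
  }
}

main().catch(console.error);
```

Once a model responds locally like this, it can be picked for Cody chat as described above.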

2 changes: 1 addition & 1 deletion docs/cody/clients/install-visual-studio.mdx
@@ -43,7 +43,7 @@ The chat input field has a default `@-mention` [context chips](#context-retrieva

## LLM selection

Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, Google, and Mixtral. At the same time, Cody Pro and Enterprise users can access more extended models.
Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, and Google, while Cody Pro and Enterprise users can access an extended set of models.

Local models are also available through Ollama to Cody Free and Cody Pro users. To use a model in Cody chat, download it and run it in Ollama.

8 changes: 4 additions & 4 deletions docs/cody/clients/install-vscode.mdx
@@ -136,7 +136,7 @@ For Edit:

- On any file, select some code and right-click
- Select Cody->Edit Code (optionally, you can do this with Opt+K/Alt+K)
- Select the default model available (this is Claude 3 Opus)
- Select the default model available
- See the selection of models and click the model you want. That model will then be the default for any new edits.

### Selecting Context with @-mentions
@@ -273,11 +273,11 @@ Users on Cody **Free** and **Pro** can choose from a list of [supported LLM mode

![LLM-models-for-cody-free](https://storage.googleapis.com/sourcegraph-assets/Docs/llm-dropdown-options-2025.png)

Enterprise users get Claude 3 (Opus and Sonnet) as the default LLM models without extra cost. Moreover, Enterprise users can use Claude 3.5 models through Cody Gateway, Anthropic BYOK, Amazon Bedrock (limited availability), and GCP Vertex.
Enterprise users get Claude 3.5 Sonnet as the default LLM model at no extra cost. They can also use Claude 3.5 models through Cody Gateway, Anthropic BYOK, Amazon Bedrock (limited availability), and GCP Vertex.

<Callout type="info">For enterprise users on Amazon Bedrock: 3.5 Sonnet is unavailable in `us-west-2` but available in `us-east-1`. Check the current model availability on AWS and your customer's instance location before switching. Provisioned throughput via AWS is not supported for 3.5 Sonnet.</Callout>

You also get additional capabilities like BYOLLM (Bring Your Own LLM), supporting Single-Tenant and Self Hosted setups for flexible coding environments. Your site administrator determines the LLM, and cannot be changed within the editor. However, Cody Enterprise users when using Cody Gateway have the ability to [configure custom models](/cody/core-concepts/cody-gateway#configuring-custom-models) Anthropic (like Claude 2.0 and Claude Instant), OpenAI (GPT 3.5 and GPT 4) and Google Gemini 1.5 models (Flash and Pro).
You also get additional capabilities like BYOLLM (Bring Your Own LLM), supporting Single-Tenant and Self-Hosted setups for flexible coding environments. Your site administrator determines the LLM, which cannot be changed within the editor. However, Cody Enterprise users using Cody Gateway can [configure custom models](/cody/core-concepts/cody-gateway#configuring-custom-models) from Anthropic, OpenAI, and Google Gemini.
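
As a rough, hypothetical sketch of the kind of information such a custom model entry carries (the field names below are illustrative placeholders, not the actual Sourcegraph schema — see the linked guide for the real configuration):

```typescript
// Hypothetical sketch only: the shape of information a site administrator
// might supply when configuring a custom model through Cody Gateway.
// Field names are illustrative placeholders; the real schema is documented
// in the "configuring custom models" guide linked above.
interface CustomModelEntry {
  provider: "anthropic" | "openai" | "google"; // upstream LLM provider
  model: string;                               // provider-side model identifier
  displayName: string;                         // label shown in Cody's model dropdown
  capabilities: ("chat" | "edit" | "autocomplete")[];
}

const example: CustomModelEntry = {
  provider: "google",
  model: "gemini-1.5-pro",
  displayName: "Gemini 1.5 Pro",
  capabilities: ["chat", "edit"],
};

console.log(`Custom model configured: ${example.displayName} (${example.provider})`);
```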

<Callout type="note">Read more about all the supported LLM models [here](/cody/capabilities/supported-models)</Callout>

@@ -333,7 +333,7 @@ You can use Cody with or without an internet connection. The offline mode does n

![offline-cody-with-ollama](https://storage.googleapis.com/sourcegraph-assets/Docs/cody-offline-ollama.jpg)

You still have the option to switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, Mixtral, etc.
You still have the option to switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, etc.

## Experimental models
