28 changes: 19 additions & 9 deletions docs/cody/capabilities/autocomplete.mdx
@@ -1,14 +1,24 @@
# Autocomplete

<p className="subtitle">Learn how Cody helps you get contextually-aware autocompletions for your codebase.</p>

Cody provides intelligent **autocomplete** suggestions as you type using context from your code, such as your open files and file history. Cody autocompletes single lines or whole functions in any programming language, configuration file, or documentation. It’s powered by the latest instant LLMs for accuracy and performance.

Autocomplete supports any programming language because it uses LLMs trained on broad data. It works exceptionally well for Python, Go, JavaScript, and TypeScript.

<video width="1920" height="1080" loop playsInline controls style={{ width: '100%', height: 'auto' }}>
<source src="https://storage.googleapis.com/sourcegraph-assets/Docs/Media/cody-in-action.mp4" type="video/mp4" />
</video>

## Cody's autocomplete capabilities

Cody's autocompletion model has been designed to enhance speed, accuracy, and the overall user experience. Both Cody Free and Pro users can expect the following with Cody's autocomplete:

- **Increased speed and reduced latency**: The P75 latency is reduced by 350 ms, making the autocomplete function faster
- **Improved accuracy for multi-line completions**: Completions across multiple lines are more relevant and accurately aligned with the surrounding code context
- **Higher completion acceptance rates**: The average completion acceptance rate (CAR) is improved by more than 4%, providing a more intuitive user interaction

On the technical side, Cody's autocomplete is optimized for both server-side and client-side performance, ensuring seamless integration into your coding workflow. The **default** autocomplete model for Cody Free and Pro users is **[DeepSeek V2](https://huggingface.co/deepseek-ai/DeepSeek-V2)**, which significantly helps boost both the responsiveness and accuracy of autocomplete. Cody Enterprise users get **StarCoder** as the default autocomplete model.

## Prerequisites

@@ -32,17 +42,17 @@ By default, a fully configured Sourcegraph instance picks a default LLM to gener
- Here, edit the `completionModel` option inside the `completions` object, as in the sketch below
- Click the **Save** button to save the changes
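
For reference, here is a minimal sketch of what the relevant site configuration might look like. The exact `provider`, model identifier, and access token shown are illustrative assumptions; the values depend on your instance and LLM provider:

```json
{
  "completions": {
    "provider": "anthropic",
    "completionModel": "claude-instant-1",
    "accessToken": "<your-provider-access-token>"
  }
}
```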

<Callout type="note">Cody autocomplete works only with Anthropic's Claude Instant model. Support for other models will be coming later.</Callout>
<Callout type="note">Cody autocomplete works only with Anthropic's Claude Instant model. Support for other models will be coming later.</Callout>

<Callout type="info">Self-hosted customers must update to version 5.0.4 or more to use autocomplete.</Callout>
<Callout type="info">Self-hosted customers must update to version 5.0.4 or more to use autocomplete.</Callout>

Before configuring the autocomplete feature, it's recommended to read the [Enabling Cody on Sourcegraph Enterprise](/cody/clients/enable-cody-enterprise) guide.

Cody Autocomplete goes beyond basic suggestions. It understands your code context, offering tailored recommendations based on your current project, language, and coding patterns. Let's view a quick demo using the VS Code extension.

<video width="1920" height="1080" loop playsInline controls style={{ width: '100%', height: 'auto' }}>
<source src="https://storage.googleapis.com/sourcegraph-assets/Docs/Media/contexual-autocpmplete.mp4" type="video/mp4" />
</video>

Here, Cody provides suggestions based on your current project, language, and coding patterns. Initially, the `code.js` file is empty. Start writing a function for `bubbleSort`. As you type, Cody suggests the function name and the function parameters.
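
For illustration, an accepted completion might look something like the sketch below. The actual suggestion varies with your surrounding code and context:

```javascript
// code.js - after typing `function bubbleSort(`, Cody can suggest the
// parameters and the body; an accepted completion might look like this:
function bubbleSort(arr) {
  for (let i = 0; i < arr.length - 1; i++) {
    for (let j = 0; j < arr.length - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]]; // swap out-of-order neighbors
      }
    }
  }
  return arr;
}
```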

23 changes: 12 additions & 11 deletions docs/cody/capabilities/supported-models.mdx
@@ -19,8 +19,8 @@ Cody supports a variety of cutting-edge large language models for use in Chat an
| Mistral | [mixtral 8x7b](https://mistral.ai/technology/#models:~:text=of%20use%20cases.-,Mixtral%208x7B,-Currently%20the%20best) | ✅ | ✅ | - | | | | |
| Mistral | [mixtral 8x22b](https://mistral.ai/technology/#models:~:text=of%20use%20cases.-,Mixtral%208x7B,-Currently%20the%20best) | ✅ | ✅ | - | | | | |
| Ollama | [variety](https://ollama.com/) | experimental | experimental | - | | | | |
| Google Gemini | [1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (Beta) | | | | |
| Google Gemini | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ (Beta) | | | | |

<Callout type="note">To use Claude 3 (Opus and Sonnets) models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version.</Callout>
@@ -29,14 +29,15 @@ Cody supports a variety of cutting-edge large language models for use in Chat an

Cody uses a set of autocomplete models that are suited to the low-latency use case.

| **Provider**          | **Model**                                                                                  | **Free** | **Pro** | **Enterprise** |
| :-------------------- | :----------------------------------------------------------------------------------------- | :------- | :------ | :------------- |
| Fireworks.ai          | [StarCoder](https://arxiv.org/abs/2305.06161)                                              | ✅       | ✅      | ✅             |
| Anthropic             | [Claude Instant](https://docs.anthropic.com/claude/docs/models-overview#model-comparison)  | -        | -       | ✅             |
| Google Gemini (Beta)  | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/)                            | -        | -       | ✅             |
| Ollama (Experimental) | [variety](https://ollama.com/)                                                             | ✅       | ✅      | -              |
| **Provider**          | **Model**                                                                                  | **Free** | **Pro** | **Enterprise** |
| :-------------------- | :----------------------------------------------------------------------------------------- | :------- | :------ | :------------- |
| Fireworks.ai          | [DeepSeek-V2](https://huggingface.co/deepseek-ai/DeepSeek-V2)                              | ✅       | ✅      | -              |
| Fireworks.ai          | [StarCoder](https://arxiv.org/abs/2305.06161)                                              | -        | -       | ✅             |
| Anthropic             | [Claude Instant](https://docs.anthropic.com/claude/docs/models-overview#model-comparison)  | -        | -       | ✅             |
| Google Gemini (Beta)  | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/)                            | -        | -       | ✅             |
| Ollama (Experimental) | [variety](https://ollama.com/)                                                             | ✅       | ✅      | -              |

<Callout type="note">[See here for Ollama setup instructions](https://sourcegraph.com/docs/cody/clients/install-vscode#supported-local-ollama-models-with-cody)</Callout>
<Callout type="note">The default autocomplete model for Cody Free and Pro user is DeepSeek-V2. Enterprise users get StarCoder as the default model.</Callout>

For information on context token limits, see our [documentation here](/cody/core-concepts/token-limits).
See the [Ollama setup instructions](https://sourcegraph.com/docs/cody/clients/install-vscode#supported-local-ollama-models-with-cody). For information on context token limits, see our [documentation here](/cody/core-concepts/token-limits).
8 changes: 7 additions & 1 deletion docs/cody/clients/install-vscode.mdx
@@ -153,7 +153,6 @@ For Edit:
- Select the default model available (this is Claude 3 Opus)
- Browse the selection of models and click the one you want. This model will now be the default for any new edits going forward


### Selecting Context with @-mentions

Cody's chat allows you to add files and symbols as context in your messages.
@@ -272,7 +271,14 @@ For customization and advanced use cases, you can create **Custom Commands** tai

<Callout type="info">Learn more about Custom Commands [here](/cody/capabilities/commands#custom-commands)</Callout>

## Smart Apply code suggestions

Cody lets you dynamically insert code from chat into your files with **Smart Apply**. Every time Cody provides you with a code suggestion, you can click the **Apply** button. Cody will then analyze your open code file, find where that relevant code should live, and add a diff.

For chat messages where Cody provides multiple code suggestions, you can apply each in sequence to go from chat suggestions to written code.

## Keyboard shortcuts

Cody provides a set of powerful keyboard shortcuts to streamline your workflow and boost productivity. These shortcuts allow you to quickly access Cody's features without leaving your keyboard.

* `Opt+L` (macOS) or `Alt+L` (Windows/Linux): Toggles between the chat view and the last active text editor. If a chat view doesn't exist, it opens a new one. When used with an active selection in a text editor, it adds the selected code to the chat for context.