diff --git a/docs/cody/capabilities/autocomplete.mdx b/docs/cody/capabilities/autocomplete.mdx
index 8ae6311ce..831a61c51 100644
--- a/docs/cody/capabilities/autocomplete.mdx
+++ b/docs/cody/capabilities/autocomplete.mdx
@@ -1,14 +1,24 @@
 # Autocomplete
 
-Learn how Cody helps you get contextually-aware autocompletions for your codebase.
+Learn how Cody helps you get contextually-aware autocompletions for your codebase.
 
 Cody provides intelligent **autocomplete** suggestions as you type, using context from your code such as your open files and file history. Cody autocompletes single lines or whole functions in any programming language, configuration file, or documentation. It's powered by the latest instant LLM models for accuracy and performance. Autocomplete supports any programming language because it uses LLMs trained on broad data, and it works exceptionally well for Python, Go, JavaScript, and TypeScript.
 
+## Cody's autocomplete capabilities
+
+Cody's autocomplete model is designed to improve speed, accuracy, and the overall user experience. Both Cody Free and Pro users can expect the following from Cody's autocomplete:
+
+- **Increased speed and reduced latency**: The P75 latency is reduced by 350 ms, making autocomplete faster
+- **Improved accuracy for multi-line completions**: Completions that span multiple lines are more relevant and better aligned with the surrounding code context
+- **Higher completion acceptance rates**: The average completion acceptance rate (CAR) is improved by more than 4%, providing a more intuitive user interaction
+
+On the technical side, Cody's autocomplete is optimized for both server-side and client-side performance, ensuring seamless integration into your coding workflow. The **default** autocomplete model for Cody Free and Pro users is **[DeepSeek-V2](https://huggingface.co/deepseek-ai/DeepSeek-V2)**, which significantly boosts both the responsiveness and accuracy of autocomplete. Cody Enterprise users get **StarCoder** as the default autocomplete model.
+
 ## Prerequisites
@@ -32,17 +42,17 @@ By default, a fully configured Sourcegraph instance picks a default LLM to generate
 - Here, edit the `completionModel` option inside the `completions` object (see the configuration sketch after this diff)
 - Click the **Save** button to save the changes
 
-Cody autocomplete works only with Anthropic's Claude Instant model. Support for other models will be coming later.
+Cody autocomplete works only with Anthropic's Claude Instant model. Support for other models will be added later.
 
-Self-hosted customers must update to version 5.0.4 or more to use autocomplete.
+Self-hosted customers must update to version 5.0.4 or later to use autocomplete.
 
 Before configuring the autocomplete feature, it's recommended that you read the [Enabling Cody on Sourcegraph Enterprise](/cody/clients/enable-cody-enterprise) guide.
 
 Cody autocomplete goes beyond basic suggestions. It understands your code context, offering tailored recommendations based on your current project, language, and coding patterns. Let's view a quick demo using the VS Code extension.
 
 Here, Cody provides suggestions based on your current project, language, and coding patterns. Initially, the `code.js` file is empty. Start writing a function for `bubbleSort`. As you type, Cody suggests the function name and the function parameters.
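To make the demo concrete, the following is the kind of whole-function suggestion Cody can produce once you start typing `bubbleSort` in the empty `code.js` file. This is an illustrative sketch rather than captured output; actual suggestions vary with your project context and the model in use:

```javascript
// code.js — after typing "function bubbleSort", Cody may suggest the rest:
function bubbleSort(arr) {
  for (let i = 0; i < arr.length - 1; i++) {
    for (let j = 0; j < arr.length - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        // Swap adjacent elements that are out of order
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
      }
    }
  }
  return arr;
}

console.log(bubbleSort([5, 2, 9, 1])); // [1, 2, 5, 9]
```

Accepting the suggestion (`Tab` in VS Code) inserts it at the cursor.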
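For the Enterprise configuration steps above, the `completionModel` option sits inside the `completions` object of the site configuration. A minimal sketch follows; the provider and model identifiers are assumptions for illustration (Anthropic's Claude Instant, per the note above) — use the values that match your instance and Sourcegraph version:

```json
{
  "completions": {
    "provider": "anthropic",
    "chatModel": "claude-2",
    "completionModel": "claude-instant-1"
  }
}
```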
diff --git a/docs/cody/capabilities/supported-models.mdx b/docs/cody/capabilities/supported-models.mdx
index c0c88d4d6..9108bb344 100644
--- a/docs/cody/capabilities/supported-models.mdx
+++ b/docs/cody/capabilities/supported-models.mdx
@@ -19,8 +19,8 @@ Cody supports a variety of cutting-edge large language models for use in Chat and
 | Mistral | [mixtral 8x7b](https://mistral.ai/technology/#models:~:text=of%20use%20cases.-,Mixtral%208x7B,-Currently%20the%20best) | ✅ | ✅ | - |
 | Mistral | [mixtral 8x22b](https://mistral.ai/technology/#models:~:text=of%20use%20cases.-,Mixtral%208x7B,-Currently%20the%20best) | ✅ | ✅ | - |
 | Ollama | [variety](https://ollama.com/) | experimental | experimental | - |
-| Google Gemini | [1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (Beta) |
-| Google Gemini | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ (Beta) |
+| Google Gemini | [1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (Beta) |
+| Google Gemini | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ (Beta) |
 
 To use Claude 3 (Opus and Sonnet) models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version.
@@ -29,14 +29,15 @@ Cody supports a variety of cutting-edge large language models for use in Chat and
 
 Cody uses a set of models for autocomplete which are suited to the low-latency use case.
 
-| **Provider** | **Model** | **Free** | **Pro** | **Enterprise** |
-| :----------- | :---------------------------------------------------------------------------------------- | :------------- | :------------- | :------------- |
-| Fireworks.ai | [StarCoder](https://arxiv.org/abs/2305.06161) | ✅ | ✅ | ✅ |
-| Anthropic | [Claude Instant](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | - | ✅ |
-| Google Gemini (Beta) | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/) | - | - | ✅ |
-| Ollama (Experimental) | [variety](https://ollama.com/) | ✅ | ✅ | - |
+| **Provider** | **Model** | **Free** | **Pro** | **Enterprise** |
+| :-------------------- | :---------------------------------------------------------------------------------------- | :------- | :------ | :------------- |
+| Fireworks.ai | [DeepSeek-V2](https://huggingface.co/deepseek-ai/DeepSeek-V2) | ✅ | ✅ | - |
+| Fireworks.ai | [StarCoder](https://arxiv.org/abs/2305.06161) | - | - | ✅ |
+| Anthropic | [Claude Instant](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | - | ✅ |
+| Google Gemini (Beta) | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/) | - | - | ✅ |
+| Ollama (Experimental) | [variety](https://ollama.com/) | ✅ | ✅ | - |
 
-[See here for Ollama setup instructions](https://sourcegraph.com/docs/cody/clients/install-vscode#supported-local-ollama-models-with-cody)
+The default autocomplete model for Cody Free and Pro users is DeepSeek-V2; Enterprise users get StarCoder as the default model.
 
-For information on context token limits, see our [documentation here](/cody/core-concepts/token-limits).
+See the [Ollama setup instructions](https://sourcegraph.com/docs/cody/clients/install-vscode#supported-local-ollama-models-with-cody) for local models. For information on context token limits, see our [documentation here](/cody/core-concepts/token-limits).
diff --git a/docs/cody/clients/install-vscode.mdx b/docs/cody/clients/install-vscode.mdx
index 6d13526d0..bfde9ffc0 100644
--- a/docs/cody/clients/install-vscode.mdx
+++ b/docs/cody/clients/install-vscode.mdx
@@ -153,7 +153,6 @@ For Edit:
 
 - Select the default model available (this is Claude 3 Opus)
 - Browse the selection of models and click the one you want. This model will now be the default for any new edits
-
 ### Selecting Context with @-mentions
 
 Cody's chat allows you to add files and symbols as context in your messages.
@@ -272,7 +271,14 @@ For customization and advanced use cases, you can create **Custom Commands** tailored
 
 Learn more about Custom Commands [here](/cody/capabilities/commands#custom-commands).
 
+## Smart Apply code suggestions
+
+Cody lets you dynamically insert code from chat into your files with **Smart Apply**. Every time Cody provides you with a code suggestion, you can click the **Apply** button. Cody will then analyze your open code file, find where that code should live, and propose a diff.
+
+For chat messages where Cody provides multiple code suggestions, you can apply each one in sequence to go from chat suggestions to written code.
+
 ## Keyboard shortcuts
+
 Cody provides a set of powerful keyboard shortcuts to streamline your workflow and boost productivity. These shortcuts let you quickly access Cody's features without leaving your keyboard.
 
 * `Opt+L` (macOS) or `Alt+L` (Windows/Linux): Toggles between the chat view and the last active text editor. If a chat view doesn't exist, it opens a new one. When used with an active selection in a text editor, it adds the selected code to the chat for context.
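If these defaults conflict with other tools, they can be rebound through VS Code's standard `keybindings.json`. The sketch below shows the mechanism only; `cody.example.toggleChat` is a hypothetical placeholder, not Cody's actual command ID — look up the real command by searching for "Cody" in **Preferences: Open Keyboard Shortcuts**:

```jsonc
// keybindings.json — rebinding a Cody shortcut (sketch).
// NOTE: "cody.example.toggleChat" is a HYPOTHETICAL command ID used for
// illustration; substitute the actual command from the Keyboard Shortcuts UI.
[
  {
    "key": "ctrl+alt+c",
    "command": "cody.example.toggleChat",
    "when": "editorTextFocus"
  }
]
```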