Commit: based on tag v0.0.39-jetbrains
Showing 85 changed files with 5,122 additions and 1 deletion.
`docs/i18n/zh-CN/docusaurus-plugin-content-blog/options.json` (14 additions, 0 deletions)
@@ -0,0 +1,14 @@
{
  "title": {
    "message": "Blog",
    "description": "The title for the blog used in SEO"
  },
  "description": {
    "message": "Blog",
    "description": "The description for the blog used in SEO"
  },
  "sidebar.title": {
    "message": "Recent posts",
    "description": "The label for the left sidebar"
  }
}
`docs/i18n/zh-CN/docusaurus-plugin-content-docs/current.json` (26 additions, 0 deletions)
@@ -0,0 +1,26 @@
{
  "version.label": {
    "message": "Next",
    "description": "The label for version current"
  },
  "sidebar.docsSidebar.category.🌉 Model setup": {
    "message": "🌉 模型设置",
    "description": "The label for category 🌉 Model setup in sidebar docsSidebar"
  },
  "sidebar.docsSidebar.category.🎨 Customization": {
    "message": "🎨 自定义",
    "description": "The label for category 🎨 Customization in sidebar docsSidebar"
  },
  "sidebar.docsSidebar.category.🚶 Walkthroughs": {
    "message": "🚶 演练",
    "description": "The label for category 🚶 Walkthroughs in sidebar docsSidebar"
  },
  "sidebar.docsSidebar.category.📖 Reference": {
    "message": "📖 参考",
    "description": "The label for category 📖 Reference in sidebar docsSidebar"
  },
  "sidebar.docsSidebar.category.Model Providers": {
    "message": "模型提供者",
    "description": "The label for category Model Providers in sidebar docsSidebar"
  }
}
`docs/i18n/zh-CN/docusaurus-plugin-content-docs/current/config-file-migration.md` (280 additions, 0 deletions)
@@ -0,0 +1,280 @@
---
title: Config File Migration
description: Migrating from config.py to config.json
keywords: [json, config, configuration, migration]
---

# Migration to `config.json`

On November 20, 2023, we migrated to JSON as the primary config file format. If you previously used Continue, we will have attempted to translate your existing config.py into a config.json file automatically. If this fails, we fall back to a default config.json. Your previous config.py is kept, but moved to config.py.old for reference. Below is a list of the changes that were made, in case you need to migrate your config manually, as well as examples of proper config.json files.

The JSON format provides stronger guardrails, making it easier to write a valid config, while still allowing Intellisense in VS Code.

If you need any help migrating, please reach out to us on Discord.
## Configuration as Code

> Continue has moved to using TypeScript configuration. To learn about this, please see [Configuration as Code](./customization/code-config.md).

For configuration that requires code, we now provide a simpler interface that works alongside config.json. In the same folder, `~/.continue`, create a file named `config.py` (the same name as before) and add a function called `modify_config`. This function should take a [`ContinueConfig`](https://github.com/continuedev/continue/blob/main/server/continuedev/core/config.py) object as its only argument and return a `ContinueConfig` object. This object is essentially the same as the one previously defined in `config.py`, and it allows you to modify the initial configuration object defined in your `config.json`. Here's an example that cuts the temperature in half:

```python
from continuedev.core.config import ContinueConfig

def modify_config(config: ContinueConfig) -> ContinueConfig:
    # Halve whatever temperature config.json provided
    config.completion_options.temperature /= 2
    return config
```
To summarize, these are the steps taken to load your configuration:

1. Load `~/.continue/config.json`
2. Convert this into a `ContinueConfig` object
3. If `~/.continue/config.py` exists and has defined `modify_config` correctly, call `modify_config` with the `ContinueConfig` object to generate the final configuration
## List of Changes

### `completion_options`

The properties `top_p`, `top_k`, `temperature`, `presence_penalty`, and `frequency_penalty` have been moved into a single object called `completion_options`. It can be specified at the top level of the config or within a `models` object.

### `request_options`

The properties `timeout`, `verify_ssl`, `ca_bundle_path`, `proxy`, and `headers` have been moved into a single object called `request_options`, which can be specified for each `models` object.
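For illustration, the two objects might be placed like this; the model entry, header, and option values here are placeholders, not defaults:

```json
{
  "models": [
    {
      "title": "My Model",
      "provider": "openai",
      "model": "gpt-4",
      "request_options": {
        "timeout": 600,
        "headers": { "X-Example": "value" }
      },
      "completion_options": { "temperature": 0.2 }
    }
  ],
  "completion_options": { "temperature": 0.5 }
}
```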
### The `model` property

Instead of writing something like `Ollama(model="phind-codellama:34b", ...)`, where the `model` property was different depending on the provider and had to be exactly correct, we now offer a default set of models, including the following:

```python
# OpenAI
"gpt-3.5-turbo",
"gpt-3.5-turbo-16k",
"gpt-4",
"gpt-3.5-turbo-0613",
"gpt-4-32k",
"gpt-4-turbo-preview",
# Open-Source
"mistral-7b",
"llama2-7b",
"llama2-13b",
"codellama-7b",
"codellama-13b",
"codellama-34b",
"phind-codellama-34b",
"wizardcoder-7b",
"wizardcoder-13b",
"wizardcoder-34b",
"zephyr-7b",
"codeup-13b",
"deepseek-1b",
"deepseek-7b",
"deepseek-33b",
"neural-chat-7b",
# Anthropic
"claude-2",
# Google PaLM
"chat-bison-001",
```

If you want to use a model not listed here, you can still do that by specifying whichever value of `model` you need. But if there's something you think we should add as a default, let us know!

### Prompt template auto-detection

Based on the `model` property, we now attempt to [autodetect](https://github.com/continuedev/continue/blob/108e00c7db9cad110c5df53bdd0436b286b92466/server/continuedev/core/config_utils/shared.py#L38) the prompt template. If you want to be explicit, you can select one of our prompt template types (`"llama2", "alpaca", "zephyr", "phind", "anthropic", "chatml", "deepseek", "neural-chat"`) or write a custom prompt template in `config.py`.
### `PromptTemplate`

If you were previously using the `PromptTemplate` class in your `config.py` to write a custom template, we have moved it from `continuedev.libs.llm.base` to `continuedev.models.llm`.

## Examples of `config.json`

After the "Full example," the examples below show only the relevant portion of the config file.

### Full example, with Free Trial Models
```json
{
  "models": [
    {
      "title": "GPT-4",
      "provider": "free-trial",
      "model": "gpt-4"
    },
    {
      "title": "GPT-3.5-Turbo",
      "provider": "free-trial",
      "model": "gpt-3.5-turbo"
    }
  ],
  "system_message": "Always be kind",
  "completion_options": {
    "temperature": 0.5
  },
  "model_roles": {
    "default": "GPT-4",
    "summarize": "GPT-3.5-Turbo"
  },
  "slash_commands": [
    {
      "name": "edit",
      "description": "Edit highlighted code",
      "step": "EditHighlightedCodeStep"
    },
    {
      "name": "config",
      "description": "Customize Continue",
      "step": "OpenConfigStep"
    },
    {
      "name": "comment",
      "description": "Write comments for the highlighted code",
      "step": "CommentCodeStep"
    },
    {
      "name": "share",
      "description": "Download and share this session",
      "step": "ShareSessionStep"
    },
    {
      "name": "cmd",
      "description": "Generate a shell command",
      "step": "GenerateShellCommandStep"
    }
  ],
  "custom_commands": [
    {
      "name": "test",
      "prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ],
  "context_providers": [{ "name": "terminal" }, { "name": "diff" }]
}
```

### Ollama with CodeLlama 13B

```json
{
  "models": [
    {
      "title": "Ollama",
      "provider": "ollama",
      "model": "codellama-13b"
    }
  ]
}
```

### Claude 2

```json
{
  "models": [
    {
      "title": "Claude-2",
      "provider": "anthropic",
      "model": "claude-2",
      "api_key": "sk-ant-api03-REST_OF_API_KEY",
      "context_length": 100000
    }
  ]
}
```

### LM Studio with Phind CodeLlama 34B

```json
{
  "models": [
    {
      "title": "LM Studio",
      "provider": "lmstudio",
      "model": "phind-codellama-34b"
    }
  ]
}
```

### OpenAI-compatible API

This is an example of serving a model using an OpenAI-compatible API on `http://localhost:8000`.

```json
{
  "models": [
    {
      "title": "OpenAI-compatible API",
      "provider": "openai",
      "model": "codellama-13b",
      "api_base": "http://localhost:8000"
    }
  ]
}
```

### Azure OpenAI

```json
{
  "models": [
    {
      "title": "Azure OpenAI",
      "provider": "openai",
      "model": "gpt-3.5-turbo",
      "api_key": "my-api-key",
      "api_base": "https://my-azure-openai-instance.openai.azure.com/",
      "engine": "my-azure-openai-deployment",
      "api_version": "2023-07-01-preview",
      "api_type": "azure"
    }
  ]
}
```

### TogetherAI

```json
{
  "models": [
    {
      "title": "Phind CodeLlama",
      "provider": "together",
      "model": "phind-codellama-34b",
      "api_key": "<your-api-key>"
    }
  ]
}
```

### Temperature, top_p, etc.

The `completion_options` for each model override the top-level `completion_options`. For example, the "GPT-4" model here will have a temperature of 0.8, while the "GPT-3.5-Turbo" model will have a temperature of 0.5.

```json
{
  "models": [
    {
      "title": "GPT-4",
      "provider": "free-trial",
      "model": "gpt-4",
      "completion_options": {
        "top_p": 0.9,
        "top_k": 40,
        "temperature": 0.8
      }
    },
    {
      "title": "GPT-3.5-Turbo",
      "provider": "free-trial",
      "model": "gpt-3.5-turbo"
    }
  ],
  "completion_options": {
    "temperature": 0.5,
    "presence_penalty": 0.5,
    "frequency_penalty": 0.5
  }
}
```
`.../i18n/zh-CN/docusaurus-plugin-content-docs/current/customization/code-config.md` (22 additions, 0 deletions)
@@ -0,0 +1,22 @@
# Code Configuration

To allow added flexibility and eventually support an entire plugin ecosystem, Continue can be configured programmatically in a TypeScript file, `~/.continue/config.ts`.

Whenever Continue loads, it carries out the following steps:

1. Load `~/.continue/config.json`
2. Convert this into a `Config` object
3. If `~/.continue/config.ts` exists and has defined `modifyConfig` correctly, call `modifyConfig` with the `Config` object to generate the final configuration

Defining a `modifyConfig` function allows you to make any final modifications to your initial `config.json`. Here's an example that sets the temperature to a random number and maxTokens to 1024:

```typescript title="~/.continue/config.ts"
export function modifyConfig(config: Config): Config {
  config.completionOptions = {
    ...config.completionOptions,
    temperature: Math.random(),
    maxTokens: 1024,
  };
  return config;
}
```