docs changes
sestinj committed Jun 23, 2024
1 parent 447ef47 commit c102261
Showing 9 changed files with 86 additions and 106 deletions.
15 changes: 15 additions & 0 deletions CONTRIBUTING.md
@@ -110,8 +110,23 @@ Pre-requisite: You should use the IntelliJ IDE, which can be downloaded [here](h

a. If you change code from the `core` or `binary` directories, make sure to run `npm run build` from the `binary` directory to create a new binary.

b. If you change code from the `gui` directory, make sure to run `npm run build` from the `gui` directory to create a new bundle.

c. Any changes to the Kotlin code in the `extensions/intellij` directory will be automatically included when you run "Build Plugin".

##### Debugging

Continue's JetBrains extension shares much of its code with the VS Code extension by using shared code in the `core` directory and packaging it as a binary in the `binary` directory. The IntelliJ extension (written in Kotlin) then communicates with this binary over stdin/stdout, as set up in the [CoreMessenger.kt](./extensions/intellij/src/main/kotlin/com/github/continuedev/continueintellijextension/continue/CoreMessenger.kt) file.

For the sake of rapid development, it is also possible to configure this communication to happen over local TCP sockets:

1. In [CoreMessenger.kt](./extensions/intellij/src/main/kotlin/com/github/continuedev/continueintellijextension/continue/CoreMessenger.kt), change the `useTcp` variable to `true`.
2. Open a VS Code window (we recommend this for a preconfigured TypeScript debugging experience) with the `continue` repository. Select the "Core Binary" debug configuration and press play.
3. Run the "Run Plugin" Gradle configuration.
4. You can now set breakpoints in any of the TypeScript files in VS Code. If you make changes to the code, restart the "Core Binary" debug configuration and reload the _Host IntelliJ_ window.

If you make changes to Kotlin code, they can often be hot-reloaded with "Run -> Debugging Actions -> Reload Changed Classes".

### Formatting

Continue uses [Prettier](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode) to format JavaScript/TypeScript. Please install the Prettier extension in VS Code and enable "Format on Save" in your settings.
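
If you prefer to configure this directly, Format on Save can be enabled in VS Code's `settings.json`. This is a minimal sketch; both keys are standard VS Code settings, and the formatter id is the Prettier extension's identifier:

```json
{
  // Run the default formatter every time a file is saved
  "editor.formatOnSave": true,
  // Use the Prettier extension as the default formatter
  "editor.defaultFormatter": "esbenp.prettier-vscode"
}
```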
36 changes: 2 additions & 34 deletions README.md
@@ -8,7 +8,7 @@

<div align="center">

**[Continue](https://docs.continue.dev) is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension)**

</div>

@@ -68,45 +68,13 @@ JetBrains: `@docs` (MacOS) / `@docs` (Windows)

</div>

## Task and tab autocomplete

### Answer coding questions

Highlight + select sections of code and ask Continue for another perspective

- “what does this forRoot() static function do in nestjs?”
- “why is the first left join in this query necessary here?”
- “how do I run a performance benchmark on this rust binary?”

### Edit in natural language

Highlight + select a section of code and instruct Continue to refactor it

- “/edit rewrite this to return a flattened list from a 3x3 matrix”
- “/edit refactor these into an angular flex layout on one line”
- “/edit define a type here for a list of lists of dictionaries”

### Generate files from scratch

Open a blank file and let Continue start new Python scripts, React components, etc.

- “/edit get me started with a basic supabase edge function”
- “/edit implement a c++ shortest path algo in a concise way”
- “/edit create a docker compose file with php and mysql server”

### And much more!

- Try out [experimental support for local tab autocomplete](https://docs.continue.dev/walkthroughs/tab-autocomplete) in VS Code
- Use [built-in context providers](https://docs.continue.dev/customization/context-providers#built-in-context-providers) or create your own [custom context providers](https://docs.continue.dev/customization/context-providers#building-your-own-context-provider)
- Use [built-in slash commands](https://arc.net/l/quote/zbhwfjmp) or create your own [custom slash commands](https://docs.continue.dev/customization/slash-commands#custom-slash-commands)

## Getting Started

### Download for [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension)

You can try out Continue with our free trial models before configuring your setup.

Once you're ready to use your own API key or a different model / provider, press the `+` button in the bottom left to add a new model to your `config.json`. Learn more about the models and providers [here](https://continue.dev/docs/setup/overview).
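
For reference, a sketch of what a new entry in the `models` array of `~/.continue/config.json` can look like (the title, provider, model name, and key below are illustrative placeholders, not recommendations):

```json
{
  "models": [
    {
      "title": "My Chat Model",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "[API_KEY]"
    }
  ]
}
```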

## Contributing

20 changes: 17 additions & 3 deletions core/autocomplete/README.md
@@ -38,21 +38,35 @@ Example:

## Setting up a custom model

All of the configuration options available for chat models can also be used for tab-autocomplete. For example, if you wanted to use a remote vLLM instance, you would edit your `config.json` like this (note that the entry is not inside the models array), filling in the correct model name and vLLM endpoint:

```json title=~/.continue/config.json
{
"tabAutocompleteModel": {
"title": "Tab Autocomplete Model",
"provider": "openai",
"model": "<MODEL_NAME>",
"apiBase": "<VLLM_ENDPOINT_URL>"
},
...
}
```

As another example, say you want to use a different model, `deepseek-coder:6.7b-base`, with Ollama:

```json title=~/.continue/config.json
{
"tabAutocompleteModel": {
"title": "Tab Autocomplete Model",
"provider": "ollama",
"model": "deepseek-coder:6.7b-base",
"apiBase": "https://<my endpoint>"
},
...
}
```

If you aren't yet familiar with the available options, you can learn more in our [overview](../setup/overview.md).

### What model should I use?

2 changes: 1 addition & 1 deletion docs/docs/setup/select-model.md
@@ -55,7 +55,7 @@ You likely want to use a model that is 1-15B parameters for autocomplete.

#### Codestral from Mistral

Our current recommendation for autocomplete, if you are able to choose any model, is `codestral-latest` from [Mistral's API](../walkthroughs/set-up-codestral.md).

### Open-source LLMs

10 changes: 6 additions & 4 deletions docs/docs/setup/select-provider.md
@@ -1,6 +1,6 @@
---
title: Select providers
description: Configure LLM providers
keywords: [openai, anthropic, gemini, ollama, ggml]
---

@@ -40,7 +40,8 @@ You can deploy a model in your [AWS](https://github.com/continuedev/deploy-os-co
## SaaS

You can access both open-source and commercial LLMs via:

- [OpenRouter](../reference/Model%20Providers/openrouter.md)

### Open-source models

@@ -60,8 +61,9 @@ You can run open-source LLMs with cloud services like:
You can use commercial LLMs via APIs using:

- [Anthropic API](../reference/Model%20Providers/anthropicllm.md)
- [OpenAI API](../reference/Model%20Providers/openai.md)
- [Azure OpenAI Service](../reference/Model%20Providers/openai.md)
- [Google Gemini API](../reference/Model%20Providers/geminiapi.md)
- [OpenAI free trial](../reference/Model%20Providers/freetrial.md)
- [Mistral API](../reference/Model%20Providers/mistral.md)
- [Voyage AI API](../walkthroughs/codebase-embeddings.md#openai)
- [Cohere API](../reference/Model%20Providers/cohere.md)
37 changes: 34 additions & 3 deletions docs/docs/walkthroughs/set-up-codestral.md
@@ -23,18 +23,49 @@ keywords: [codestral, mistral, model setup]
"models": [
{
"title": "Codestral",
"provider": "mistral",
"model": "codestral-latest",
"apiKey": "[API_KEY]"
}
],
"tabAutocompleteModel": {
"title": "Codestral",
"provider": "mistral",
"model": "codestral-latest",
"apiKey": "[API_KEY]"
}
}
```

5. If you run into any issues or have any questions, please join our Discord and post in the `#help` channel [here](https://discord.gg/EfJEfdFnDQ)

## Troubleshooting

### Temporary workaround for JetBrains

Mistral AI recently changed the API endpoint from `api.mistral.ai` to `codestral.mistral.ai`, and our updated JetBrains extension is awaiting approval from the store. In the meantime, you will need to set `apiBase` to `https://codestral.mistral.ai/v1/` in your `config.json`, like this:

```json title="~/.continue/config.json"
{
"models": [
{
"title": "Codestral",
"provider": "mistral",
"model": "codestral-latest",
"apiKey": "[API_KEY]",
"apiBase": "https://codestral.mistral.ai/v1/"
}
],
"tabAutocompleteModel": {
"title": "Codestral",
"provider": "mistral",
"model": "codestral-latest",
"apiKey": "[API_KEY]",
"apiBase": "https://codestral.mistral.ai/v1/"
}
}
```

### Ask for help on Discord

Please join our Discord and post in the `#help` channel [here](https://discord.gg/EfJEfdFnDQ) if you are having problems using Codestral.
2 changes: 1 addition & 1 deletion docs/docs/walkthroughs/tab-autocomplete.md
@@ -102,7 +102,7 @@ This object allows you to customize the behavior of tab-autocomplete. The availa
- `multilineCompletions`: Whether to enable multiline completions ("always", "never", or "auto"). Defaults to "auto".
- `useCache`: Whether to cache and reuse completions when the prompt is the same as a previous one. May be useful to disable for testing purposes.
- `useOtherFiles`: Whether to include context from files outside of the current one. Turning this off is likely to reduce the accuracy of completions, but might be useful for testing.
- `disable`: Disable autocomplete (can also be done from IDE settings)
- `disableInFiles`: A list of glob patterns for files in which you want to disable tab autocomplete.
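
As a sketch, disabling completions in Markdown files and a vendored directory might look like this (the glob patterns are illustrative):

```json
{
  "tabAutocompleteOptions": {
    "disableInFiles": ["*.md", "vendor/**"]
  }
}
```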

### Full example

8 changes: 6 additions & 2 deletions extensions/intellij/README.md
@@ -1,6 +1,6 @@
<!-- Plugin description -->

**[Continue](https://docs.continue.dev) is the open-source autopilot for software development—an extension that brings the power of ChatGPT to your IDE**
![readme](../../media/readme.png)

<h1 align="center">Continue</h1>

@@ -42,6 +42,10 @@

You can try out Continue with our free trial models before configuring your setup.

Once you're ready to use your own API key or a different model / provider, press the `+` button in the bottom left to add a new model to your `config.json`. Learn more about the models and providers [here](https://continue.dev/docs/setup/overview).

## License

[Apache 2.0 © 2023 Continue Dev, Inc.](./LICENSE)

<!-- Plugin description end -->
62 changes: 4 additions & 58 deletions extensions/vscode/README.md
@@ -1,26 +1,16 @@
> 🎉 **Tab autocomplete now available in pre-release (experimental)**
![readme](media/readme.png)

<h1 align="center">Continue</h1>

<div align="center">

**[Continue](https://docs.continue.dev) is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension)**

</div>

<div align="center">

<a target="_blank" href="https://opensource.org/licenses/Apache-2.0" style="background:none">
<img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" style="height: 20px;" />
</a>
<a target="_blank" href="https://docs.continue.dev" style="background:none">
<img src="https://img.shields.io/badge/continue_docs-%23BE1B55" style="height: 20px;" />
</a>
<a target="_blank" href="https://discord.gg/vapESyrFmJ" style="background:none">
<img src="https://img.shields.io/badge/discord-join-continue.svg?labelColor=191937&color=6F6FF7&logo=discord" style="height: 20px;" />
</a>
## Easily understand code sections

![understand](docs/static/img/understand.gif)

@@ -56,51 +46,7 @@

You can try out Continue with our free trial models before configuring your setup.

![Editing With Continue](media/readme.gif)

Highlight + select code sections and ask a question to get another perspective

- “how can I set up a Prisma schema that cascades deletes?”
- “where in the page should I be making this request to the backend?”
- “how can I communicate between these iframes?”

# 🪄 Reference context inline

![Reference context inline](media/image.gif)

- "@diff check my commit for mistakes"
- "how does @server.py stream responses?"
- "how do i fix this error in the @terminal?"

# ⚡️ Get completions directly in your editor

![Continue autocomplete](media/autocomplete.gif)

# ✏️ Edit in natural language

Highlight + select a section of code and instruct Continue to refactor it

- “/edit migrate this digital ocean terraform file into one that works for GCP”
- “/edit change this plot into a bar chart in this dashboard component”
- “/edit rewrite this function to be async”

![Above line edit](media/above-line-edit.gif)

# 🚀 Generate files from scratch

![Generate files from scratch](media/scratch.gif)

Open a blank file, press <kbd>Cmd/Ctrl</kbd> + <kbd>Shift</kbd> + <kbd>L</kbd>, and let Continue start new Python scripts, React components, etc.

- “here is a connector for postgres, now write one for kafka”
- “make an IAM policy that creates a user with read-only access to S3”
- “use this schema to write me a SQL query that gets recently churned users”

# Getting Started

You can try out Continue for free using a proxy server that securely makes calls with our API key to models like GPT-4, Gemini Pro, and Phind CodeLlama via OpenAI, Google, and Together respectively.

Once you're ready to use your own API key or a different model / provider, press the `+` button in the bottom left to add a new model to your `config.json`. Learn more about the models and providers [here](https://continue.dev/docs/setup/overview).

## License

