3 changes: 2 additions & 1 deletion core/llm/llms/CometAPI.ts
@@ -1,5 +1,5 @@
-import { LLMOptions } from "../../index.js";
import { allModelProviders } from "@continuedev/llm-info";
+import { LLMOptions } from "../../index.js";
import OpenAI from "./OpenAI.js";

/**
@@ -183,6 +183,7 @@ class CometAPI extends OpenAI {
*/
private static RECOMMENDED_MODELS = [
// GPT series
"gpt-5.1",
"gpt-5-chat-latest",
"chatgpt-4o-latest",
"gpt-5-mini",
2 changes: 1 addition & 1 deletion core/llm/toolSupport.ts
@@ -392,7 +392,7 @@ export function isRecommendedAgentModel(modelName: string): boolean {
[/o[134]/],
[/deepseek/, /r1|reasoner/],
[/gemini/, /2\.5/, /pro/],
-[/gpt-5/],
+[/gpt/, /-5|5\.1/],
Collaborator comment on the new pattern: this may match funky dates, but should be fine since a 2-digit month/day won't start with 5.

[/claude/, /sonnet/, /3\.7|3-7|-4/],
[/claude/, /opus/, /-4/],
[/grok-code/],
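A minimal sketch of how these pattern groups are presumably consumed, based purely on their shape: a model name would count as recommended when every regex in at least one inner array matches it. The helper below is illustrative, not the repository's actual `isRecommendedAgentModel` implementation.

```ts
// Illustrative sketch only — assumes each inner array is a conjunction of patterns.
const RECOMMENDED_PATTERNS: RegExp[][] = [
  [/gpt/, /-5|5\.1/],
  [/claude/, /sonnet/, /3\.7|3-7|-4/],
];

function matchesRecommendedPattern(modelName: string): boolean {
  const name = modelName.toLowerCase();
  return RECOMMENDED_PATTERNS.some((group) =>
    group.every((pattern) => pattern.test(name)),
  );
}

// "gpt-5.1" and "gpt-5-mini" both satisfy [/gpt/, /-5|5\.1/]. A dated id such as
// "gpt-4o-2024-08-06" does not, because zero-padded months and days never produce
// a "-5" substring — which is the point of the collaborator comment above.
console.log(matchesRecommendedPattern("gpt-5.1")); // true
console.log(matchesRecommendedPattern("gpt-4o-2024-08-06")); // false
```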
4 changes: 2 additions & 2 deletions docs/customize/deep-dives/autocomplete.mdx
@@ -163,9 +163,9 @@ The `config.json` configuration format offers configuration options through `tab

## Autocomplete FAQs and Troubleshooting in Continue

-### I want better completions, should I use GPT-4?
+### I want better completions, should I use GPT-5?

-Perhaps surprisingly, the answer is no. The models that we suggest for autocomplete are trained with a highly specific prompt format, which allows them to respond to requests for completing code (see examples of these prompts [here](https://github.com/continuedev/continue/blob/main/core/autocomplete/templating/AutocompleteTemplate.ts)). Some of the best commercial models like GPT-4 or Claude are not trained with this prompt format, which means that they won't generate useful completions. Luckily, a huge model is not required for great autocomplete. Most of the state-of-the-art autocomplete models are no more than 10b parameters, and increasing beyond this does not significantly improve performance.
+Perhaps surprisingly, the answer is no. The models that we suggest for autocomplete are trained with a highly specific prompt format, which allows them to respond to requests for completing code (see examples of these prompts [here](https://github.com/continuedev/continue/blob/main/core/autocomplete/templating/AutocompleteTemplate.ts)). Some of the best commercial models like GPT-5 or Claude are not trained with this prompt format, which means that they won't generate useful completions. Luckily, a huge model is not required for great autocomplete. Most of the state-of-the-art autocomplete models are no more than 10b parameters, and increasing beyond this does not significantly improve performance.

### Autocomplete Not Working – How to Fix It

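As a brief illustration of the paragraph above: autocomplete-tuned models are prompted in a fill-in-the-middle (FIM) style rather than with chat messages. The sentinel tokens below show the general shape only — the exact strings differ per model family, and this is not the template from AutocompleteTemplate.ts.

```ts
// Rough FIM-style prompt shape (sentinel tokens vary by model family).
// An autocomplete-tuned model completes the text that belongs between prefix and
// suffix; a chat-tuned model like GPT-5 expects conversational messages instead.
function buildFimPrompt(prefix: string, suffix: string): string {
  return `<fim_prefix>${prefix}<fim_suffix>${suffix}<fim_middle>`;
}

const prompt = buildFimPrompt(
  "function add(a: number, b: number) {\n  return ",
  ";\n}",
);
console.log(prompt);
```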
2 changes: 2 additions & 0 deletions docs/customize/deep-dives/model-capabilities.mdx
@@ -171,6 +171,8 @@ This matrix shows which models support tool use and image input capabilities. Co

| Model | Tool Use | Image Input | Context Window |
| :------------ | -------- | ----------- | -------------- |
| GPT-5.1 | Yes | No | 400k |
| GPT-5 | Yes | No | 400k |
| o3 | Yes | No | 128k |
| o3-mini | Yes | No | 128k |
| GPT-4o | Yes | Yes | 128k |
10 changes: 5 additions & 5 deletions docs/customize/model-roles/chat.mdx
@@ -76,20 +76,20 @@ If you prefer to use an open-weight model, then the Gemma family of Models from
</Tab>
</Tabs>

-### GPT-4o from OpenAI
+### GPT-5.1 from OpenAI

-If you prefer to use a model from [OpenAI](../model-providers/top-level/openai), then we recommend GPT-4o.
+If you prefer to use a model from [OpenAI](../model-providers/top-level/openai), then we recommend GPT-5.1.

<Tabs>
<Tab title="Hub">
-Add the [OpenAI GPT-4o block](https://hub.continue.dev/openai/gpt-4o) from the hub
+Add the [OpenAI GPT-5.1 block](https://hub.continue.dev/openai/gpt-5.1) from the hub
</Tab>
<Tab title="YAML">
```yaml title="config.yaml"
models:
-  - name: GPT-4o
+  - name: GPT-5.1
    provider: openai
-    model: ''
+    model: gpt-5.1
    apiKey: <YOUR_OPENAI_API_KEY>
```
</Tab>
12 changes: 12 additions & 0 deletions gui/src/pages/AddNewModel/configs/models.ts
@@ -1121,6 +1121,18 @@ export const models: { [key: string]: ModelPackage } = {
icon: "openai.png",
isOpenSource: false,
},
gpt5_1: {
title: "GPT-5.1",
description: "OpenAI's GPT-5.1 model for advanced reasoning and chat",
params: {
model: "gpt-5.1",
contextLength: 400_000,
title: "GPT-5.1",
},
providerOptions: ["openai"],
icon: "openai.png",
isOpenSource: false,
},
gpt5Codex: {
title: "GPT-5 Codex",
description:
5 changes: 3 additions & 2 deletions gui/src/pages/AddNewModel/configs/providers.ts
@@ -111,13 +111,14 @@ export const providers: Partial<Record<string, ProviderInfo>> = {
openai: {
title: "OpenAI",
provider: "openai",
description: "Use gpt-5, gpt-4, or any other OpenAI model",
description: "Use gpt-5.1, gpt-5, gpt-4, or any other OpenAI model",
longDescription:
"Use gpt-5, gpt-4, or any other OpenAI model. See [here](https://openai.com/product#made-for-developers) to obtain an API key.",
"Use gpt-5.1, gpt-5, gpt-4, or any other OpenAI model. See [here](https://openai.com/product#made-for-developers) to obtain an API key.",
icon: "openai.png",
tags: [ModelProviderTags.RequiresApiKey],
packages: [
models.gpt5,
+models.gpt5_1,
models.gpt5Codex,
models.gpt4o,
models.gpt4omini,
8 changes: 8 additions & 0 deletions packages/llm-info/src/providers/openai.ts
@@ -92,6 +92,14 @@ export const OpenAi: ModelProvider = {
regex: /gpt-5-codex/,
recommendedFor: ["chat"],
},
{
model: "gpt-5.1",
displayName: "GPT-5.1",
contextLength: 400000,
maxCompletionTokens: 128000,
regex: /^gpt-5\.1$/,
recommendedFor: ["chat"],
},
// gpt-4o
{
model: "gpt-4o",
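One note on the anchored regex in this entry: unlike the looser `/gpt-5-codex/` pattern above it, `/^gpt-5\.1$/` matches only the exact id. A minimal sketch of how such a lookup presumably behaves — the helper, the inlined list, and the context-length values are illustrative assumptions, not the package's actual API:

```ts
// Illustrative sketch — values and helper are assumptions, not the real llm-info API.
interface ModelInfo {
  model: string;
  regex: RegExp;
  contextLength: number;
}

const openAiModels: ModelInfo[] = [
  { model: "gpt-5-codex", regex: /gpt-5-codex/, contextLength: 400000 },
  { model: "gpt-5.1", regex: /^gpt-5\.1$/, contextLength: 400000 },
];

function findModelInfo(modelId: string): ModelInfo | undefined {
  return openAiModels.find((m) => m.regex.test(modelId));
}

console.log(findModelInfo("gpt-5.1")?.model); // "gpt-5.1"
console.log(findModelInfo("gpt-5.1-anything-longer")); // undefined — the anchors reject longer ids
```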