
fix(smart-apply): error when applying code to an empty file #7439

Open
wants to merge 5 commits into base: main
Conversation

ichim-david
Collaborator

@ichim-david ichim-david commented Mar 15, 2025

Refs CODY-5396
Fixed a bug where inserting code into an empty file triggered an error that did not abort the smart apply action, leaving the user to think it was just very slow.

This started after the work in #6768, because startStreamingEditTask calls the fixup startDecorator:
https://github.com/sourcegraph/cody/pull/6768/files#diff-aab0c624f666dd0c22f4415b5d9cf6e2b1a9d3ceb14526e2714896beade885dfR291

We also bump the end line by one even when the file is empty:
https://github.com/sourcegraph/cody/blob/main/vscode/src/non-stop/FixupTask.ts#L155
As a result, trimEmptyLinesFromRange ended up with a line index higher than the number of lines in the document, which crashed here:
https://github.com/sourcegraph/cody/pull/7439/files#diff-e0078048d45c5a1b5f5eea5af56c3919ecf75b0fe610dfee87578e87d0944ae0R87

Also used the correct model name to ensure we get the 15k input window instead of the default 7k fallback when a large file requires edits.
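
To make the line-index failure concrete, here is a minimal sketch of a clamping guard, assuming a vscode-like `TextDocument` API; apart from `lineAt`/`lineCount`, every name below is illustrative and not the exact code from this PR.

```ts
// Sketch only: a vscode-like TextDocument shape, reduced to what matters here.
interface TextLine {
    text: string
}
interface TextDocument {
    lineCount: number
    lineAt(line: number): TextLine
}

// An empty file still reports lineCount === 1 (a single empty line 0), so a
// bumped end line of 1 is already out of range and lineAt(1) would throw.
function safeEndLineIndex(document: TextDocument, bumpedEndLine: number): number {
    return Math.min(Math.max(bumpedEndLine, 0), document.lineCount - 1)
}
```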

Test plan

Apply code in an empty file and observe that it works.

- Adds a test case to handle insertion into an empty document (a sketch of the idea follows this list).
- Fixes a bug where, due to the end-range bump in `getDefaultSelectionRange`, `document.lineAt(endLineIndex)` triggered an error because that line number was higher than the number of lines in an empty file.
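
A minimal sketch of what such a test could look like, reusing the `safeEndLineIndex` sketch above; the real test in the PR uses Cody's own helpers, so the names and structure here are assumptions.

```ts
import { describe, expect, it } from 'vitest'

describe('smart apply selection on an empty document', () => {
    it('clamps the bumped end line back into range', () => {
        // An empty document: a single empty line at index 0.
        const emptyDoc = {
            lineCount: 1,
            lineAt: (_line: number) => ({ text: '' }),
        }
        expect(safeEndLineIndex(emptyDoc, 1)).toBe(0)
    })
})
```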
…de-3-5-sonnet-latest to get full input and output of given model

Updates the default model used for smart apply to `anthropic::2024-10-22::claude-3-5-sonnet-latest`. This change allows the full 15k input for smart apply operations rather than the default 7k.
@ichim-david ichim-david requested a review from umpox March 15, 2025 20:43
This commit updates the `getModelByID` and `getModelInfo` functions to correctly parse model IDs that use either `/` or `::` as separators. This allows for more flexible model ID formats, such as "openai::v1::gpt-4o-mini" or "openai/gpt-4o-mini".

The `getModelByID` function now splits the model ID on both `/` and `::` to extract the model identifier. The `getModelInfo` function has also been updated to handle both separators.

This fixes two cases:
1. You want to choose the context window for a model.
2. You want to select a provider when we have a task intent.

In both cases you want the system to choose the right model regardless of whether the dev chose the new ModelWithRefID style or the legacy model name (which is still used by the BYOK models). The separator handling is illustrated in the sketch immediately below.
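
A small illustration of the separator handling described above; the regex matches the one in the PR diff, while `extractModelId` and the logging harness are illustrative names, not code from the PR.

```ts
// Split on "::" or "/" and keep the last segment as the bare model id.
const extractModelId = (modelID: string): string =>
    modelID.split(/[:\/]{2}|\//).at(-1) || ''

console.log(extractModelId('openai::v1::gpt-4o-mini')) // "gpt-4o-mini"
console.log(extractModelId('openai/gpt-4o-mini'))      // "gpt-4o-mini"
```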

```ts
const endLine = document.lineAt(startLineIndex)
if (range.end.character === 0 || endLine.text.length === 0) {
const endLine = document.lineAt(endLineIndex)
```

array-index-bug: Ensure `endLineIndex >= 0` before accessing `document.lineAt()`. The current check `endLineIndex > 0` could still allow -1 values.

This commit updates the `getContextWindowByID` function to pass the `models` array to the `getModelByID` function. This change allows for the use of a custom model list when retrieving context window information. This is useful in scenarios where a different set of models needs to be considered than the default set.
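
A minimal sketch of threading a caller-supplied model list through the lookup; the actual `Model` and context-window types in Cody are richer, so everything below is an assumption apart from the 7k fallback mentioned in the description.

```ts
interface ContextWindow {
    input: number
}
interface Model {
    id: string
    contextWindow: ContextWindow
}

// Resolve a context window from an explicit model list instead of only the
// default registry; fall back to the 7k input window when nothing matches.
function getContextWindowByID(
    modelID: string,
    models: Model[],
    getModelByID: (id: string, models: Model[]) => Model | undefined
): ContextWindow {
    return getModelByID(modelID, models)?.contextWindow ?? { input: 7000 }
}
```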
…tion

This commit adds error handling to the execution of edit tasks. Specifically, it wraps the `startStreamingEdit` call in a try-catch block. If an error occurs during the edit task, the fixup controller cancels the task and an error message is displayed to the user. This prevents the user from being left in an inconsistent state when the edit fails, where they would otherwise see only "applying" with no error. The error message prompts the user to check the logs for more details.
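
A hedged sketch of the error-handling shape described above; the real controller and task APIs in Cody differ, so every name here is illustrative.

```ts
async function runEditTask(
    startStreamingEdit: () => Promise<void>,
    cancelTask: () => void,
    showErrorMessage: (message: string) => void
): Promise<void> {
    try {
        await startStreamingEdit()
    } catch (error) {
        // Cancel the fixup task so the UI does not stay stuck on "Applying…",
        // then surface the failure instead of leaving the user guessing.
        cancelTask()
        const detail = error instanceof Error ? error.message : String(error)
        showErrorMessage(`Edit failed: ${detail}. Check the logs for more details.`)
    }
}
```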
Comment on lines +594 to 602
```diff
 public getModelByID(modelID: string, models = this.models): Model | undefined {
     // TODO(sqs)#observe: remove synchronous access here, return an Observable<Model|undefined> instead
-    return this.models.find(m => m.id === modelID)
+    // split on :: or / to get the model id
+    // e.g. "openai::v1::gpt-4o-mini" or "openai/gpt-4o-mini"
+    const modelParameters = modelID.split(/[:\/]{2}|\//)
+    const modelId = modelParameters.at(-1) || ''
+
+    return models.find(m => m.id.endsWith(modelId))
 }
```
Contributor


Can we consider breaking this logic out into a different PR? It's worrying to me that this logic seems to have been broken before; do we have the incorrect context window for other models too?

It'd also be useful to get some tests, as splitting via a regex is going to be harder to maintain for future models than just matching an id.
