
fix: use selectionRange in edits when available #1429

Merged 2 commits into main from bee/fix-edit-range on Oct 19, 2023
Conversation

abeatrix
Contributor

Fix regression caused by changes in #1317

The commit fixes an issue where getSmartSelection was always recalculating a selection even when a valid selectionRange was already provided.

Currently, the /edit command does not respect the user's selection when one is available:

broken_edit.mov

This becomes an issue because folding-range-based selection is still experimental. It also causes the code lens to jump around (unless that's intended; I haven't read through the PR 😅).

This PR adds a check to use selectionRange when it is available and not an empty range.

This provides a fallback for cases where "smart" selection is not working for users.
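For illustration, a minimal sketch of that check (the helper name and the getSmartSelection signature here are illustrative, not the exact code in this diff):

import * as vscode from 'vscode'

// Assumed helper from the codebase; the real signature may differ.
declare function getSmartSelection(uri: vscode.Uri, line: number): Promise<vscode.Range>

// Prefer an explicit, non-empty user selection and only fall back to the
// folding-range-based "smart" selection when nothing is selected.
async function resolveEditRange(uri: vscode.Uri, selectionRange: vscode.Range): Promise<vscode.Range> {
    if (!selectionRange.isEmpty) {
        // The user selected something: respect that range as-is.
        return selectionRange
    }
    // Empty selection: expand it with smart selection instead.
    return getSmartSelection(uri, selectionRange.start.line)
}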

Test plan

Select code in your editor and run the /edit command; the range should not be expanded for you.

@abeatrix abeatrix requested a review from a team October 19, 2023 00:17
Contributor

@dominiccooney dominiccooney left a comment


Looks good.

CC @arafatkatze so we can work out the intent... Do we want to broaden selections when there's an explicit selection? It is a bit surprising when it changes.

@abeatrix abeatrix enabled auto-merge (squash) October 19, 2023 00:31
@abeatrix abeatrix merged commit f8fc7e5 into main Oct 19, 2023
13 checks passed
@abeatrix abeatrix deleted the bee/fix-edit-range branch October 19, 2023 00:33
@arafatkatze
Contributor

arafatkatze commented Oct 19, 2023

@dominiccooney @abeatrix Thanks for the message and the PR. A couple of thoughts:

A. Original Problem & Intention:

The primary intention behind "Smart Selection" was to tackle some prevalent edge cases, as described in issues #585 and #223. The main challenge was to ensure that the user's selection encompassed all of the code regions that required a fixup, so the changes would be comprehensive.

Some of the pertinent edge cases included (from my original PR, which used an LLM-based selection algorithm):
[image: screenshots of the edge cases from the original PR]

B. I understand that the new PR aims to respect user selection, especially when it's explicit. However, it seems like the new approach, in trying to rectify one problem, effectively negates the core functionality of the "Smart Selection" feature: if a user selection is made, smart selection is avoided, and if a user selection is NOT made, the add recipe is used.

public async getTaskIntent(taskId: string, intentDetector: IntentDetector): Promise<FixupIntent> {
    const task = this.tasks.get(taskId)
    if (!task) {
        throw new Error('Select some code to fixup.')
    }
    const document = await vscode.workspace.openTextDocument(task.fixupFile.uri)
    const selectedText = document.getText(task.selectionRange)
    if (truncateText(selectedText, MAX_CURRENT_FILE_TOKENS) !== selectedText) {
        const msg = "The amount of text selected exceeds Cody's current capacity."
        throw new Error(msg)
    }
    if (selectedText.trim().length === 0) {
        // Nothing selected, assume this is always 'add'.
        return 'add'
    }
    // ...

This could inadvertently reintroduce the original edge cases we were striving to eliminate, because the smart selection code is basically never run for the fixup recipe.

Possible Solutions:

Given the above, I'd like to propose a couple of solutions to ensure we achieve a balanced outcome:

  1. Remove the add recipe and just do smart selection in that case -> seems fairly easy and straightforward. Happy to spin up a PR for it.
  2. Intermediate Step: Consider an intermediate LLM call to determine if expanding a selection would be beneficial. This ensures that user selections are respected, but also optimizes for cases where an expanded selection could address prevalent edge cases. I have done a very similar PR for this before and I am happy to do it again.
  3. Feature Flags & A/B Testing: Putting smart selection behind a feature flag and then A/B testing the acceptance data could be a good idea.

While I completely understand and respect the rationale behind the regression fix, and it makes a lot of sense why it was done, it would be nice to consider the original intent of smart selection.

@abeatrix
Contributor Author

@arafatkatze thanks for the detailed response and for clarifying, really appreciate it.

I think the original problem was fixup with "Empty Selections don't work", and your PR addressed that issue perfectly by using Smart Selection when the selection is empty. However, using the user's selection was never the issue, iirc.

As for the second issue you linked, it was referring to the edit action through a code action, which has already been addressed by a separate PR released in the last stable version.

I like your second point though, maybe something we could add to the new "regenerate" code lens? Wdyt?

@arafatkatze
Contributor

@abeatrix Thanks for the pointers, these are some good questions. I will try my best to be concise here.

Empty Selections don't work", and your PR addressed that issue perfectly by using Smart Selection when the selection is empty

Right now, in the case of an empty selection, the intent of the fixup changes to add, and therefore we don't do the smart selection; but it should be an easy fix and I can add it. This would remove the add intent code, but hopefully that's okay. If you want the behaviour of Empty Selection -> Smart Selection, then I am happy to spin up a PR for that case.

if (selectedText.trim().length === 0) {
    // Nothing selected, assume this is always 'add'.
    return 'add'
}

You can see that in the case of an empty selection, the 'add' intent is returned.
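A rough sketch of what that fix could look like, building on the snippet above (getSmartSelection and the 'edit' fallback intent are placeholders here, not the actual patch):

if (selectedText.trim().length === 0) {
    // Hypothetical change: expand the empty selection with smart selection
    // instead of immediately assuming the 'add' intent.
    const expanded = await getSmartSelection(task.fixupFile.uri, task.selectionRange.start.line)
    if (!expanded.isEmpty) {
        task.selectionRange = expanded
        return 'edit' // placeholder intent; the real code may classify further
    }
    // Nothing useful to expand to: keep the current behaviour.
    return 'add'
}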

As for the second issue you linked, it was referring to the edit action through a code action, which has already been addressed by a separate PR released in the last stable version.

Makes sense

I like your second point though, maybe something we could add to the new "regenerate" code lens? Wdyt?

#1383 is a very interesting PR. I really liked the narrative behind it, but I can understand why, for now, you folks decided not to support error inclusion.

My understanding is that when users opt for a regenerate, it's predominantly due to dissatisfaction with the initial result: they often expect error resolutions, or the result just didn't "understand" their intent. The causes for that could be:

  1. Errors not resolved
  2. Refactors/variable renaming -> If you only selected the return value, then any refactor would miss other parts of the function where changes need to be applied
  3. Bad selection boundaries completely mess up the indentation in languages like Python

Personal opinion: I find it unlikely that users just regenerate for no reason and expect the LLM to give a different, "better" response the next time around (see comment).

In these cases, it might make a lot of sense to expand the selection. We could introduce a mechanism that quickly consults a faster LLM, like Claude Instant, providing just the error and its context, or the user instruction plus the selected context when there are no errors. This could determine whether an expansion would help address the error. This approach ensures we're not merely resending the same request but adapting and enhancing it for a better LLM output. That fast LLM check can potentially be used for other things as well, not just range expansion.
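As a rough illustration of that idea (the FastCompletionClient interface, completeFast method, and model id below are placeholders, not an existing Cody API):

// Placeholder client interface standing in for whatever fast-completion
// client the extension would actually use.
interface FastCompletionClient {
    completeFast(params: { model: string; prompt: string; maxTokens: number }): Promise<string>
}

// Ask a fast model whether the user's selection should be expanded before
// regenerating, given the instruction, the selected code, and any errors.
async function shouldExpandSelection(
    client: FastCompletionClient,
    instruction: string,
    selectedCode: string,
    diagnostics: string[]
): Promise<boolean> {
    const prompt = [
        'Decide whether this code selection is too narrow for the requested edit.',
        `Instruction: ${instruction}`,
        `Selected code:\n${selectedCode}`,
        diagnostics.length > 0 ? `Errors:\n${diagnostics.join('\n')}` : 'Errors: none',
        'Answer with exactly "expand" or "keep".',
    ].join('\n\n')

    const answer = await client.completeFast({ model: 'claude-instant-1', prompt, maxTokens: 5 })
    return answer.trim().toLowerCase().startsWith('expand')
}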

@dominiccooney @abeatrix Is that something you folks would like to see?

@toolmantim
Contributor

toolmantim commented Oct 20, 2023

I think there's another cause, which is that the prompt needs to be rewritten to clarify intent or to cater for a result from the LLM that the user hadn't anticipated.

I'd suggest we investigate that in #1407 along with auto-apply first, and then investigate the idea of allowing users to include code errors from the file, or expanding the range.

Or we look at solving the above causalities scoped only to "Ask Cody to Fix", and not to all fixups.

@arafatkatze
Contributor

@toolmantim

I think there's another cause, which is that the prompt needs to be rewritten to clarify intent or to cater for a result from the LLM that the user hadn't anticipated.

Makes sense, that's a very common way to solve this problem.

I'd suggest we investigate that in #1407 along with auto-apply first, and then investigate the idea of allowing users to include code errors from the file, or expanding the range.

Makes sense. Auto Apply would be a very useful feature, because the "friction" of pressing a button again AFTER having requested a fixup seems like an unwanted extra step. I would rather just get a result and undo it than press an extra button.

I do have a question: once auto-apply fixup is working and people are using it, what needs to happen for you folks to consider the possibility of code error inclusion or range expansion? That task seems unrelated to the fixup code lens changes discussed in #1407; it's more about the UI experience than the quality of the results, and the quality of results depends on the context included in the prompt (code expansion/error inclusion matter for that).

Or we look at solving the above causalities scoped only to "Ask Cody to Fix", and not to all fixups.

Very fair point. "Ask Cody to Fix" already does error inclusion and range expansion to solve the problem, so it's already taking all those cases into account.
