Autocomplete: Bring back completion synthesization from prior requests and retest all inflight requests #559
Conversation
I think this happens because now, multiple completion requests can be merged into a new completion array, but we do not keep track of which exact completion in that array was the last visible one. I think we'll have to add this, because we only ever want this logic to trigger if the exact same completion is visible again. I know we need to store all of the returned items so the hover tooltip works, but for the comparison we only need to look at one. Edit: This seems to work: Screen.Recording.2023-08-03.at.15.30.55.mov
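A minimal sketch of the idea described above (hypothetical names, not the actual Cody implementation): keep all returned items around for the hover tooltip, but also remember which single item was actually visible, and only trigger the reuse logic when that exact item would be shown again.

```typescript
// Hypothetical sketch: track which exact item of a merged completion
// array was last visible, so the reuse logic only fires for that item.
interface CompletionItem {
    insertText: string
}

interface LastShown {
    // All returned items are kept so the hover tooltip still works...
    items: CompletionItem[]
    // ...but only the one item that was actually visible is compared.
    visibleItem: CompletionItem
}

let lastShown: LastShown | undefined

function recordShown(items: CompletionItem[], visibleIndex: number): void {
    lastShown = { items, visibleItem: items[visibleIndex] }
}

function isSameCompletionVisibleAgain(candidate: CompletionItem): boolean {
    return (
        lastShown !== undefined &&
        lastShown.visibleItem.insertText === candidate.insertText
    )
}
```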
Force-pushed from c814b5d to 3267de2
Thank you! I added some comments that IMO should be addressed before merging.
Take my input as that from a random dev out there--you make the final decision here.
I do worry about the network cache becoming too complex and bug-prone as it:
- Models a subset of document contents, document edit behavior, and editor view state.
- Interacts with the way that ghost text is and should be shown (i.e. the LastCandidate "cache"). (My concern here is eliminated if the "Opt-out from the last candidate logic when a previously invisible ghost text would be used" commit is reverted, since the LastCandidate cache then takes a more primary, differentiated role and there is less overlap.)
I also totally understand the value of cache entry synthesis from already in-flight requests that fortunately predicted the user's subsequent typing.
The "// " glitch (cac-slashslashspace-glitch.mp4, posted below) is an example of what I am worried we will encounter a lot with the network cache. Maybe it is just that one issue and then it's good! I don't know.
I don't know the right solution or trade-off, and I trust you to make the call.
@sqs Thank you for the detailed feedback! I'm now thinking that we can probably just reuse the "last candidate" logic here instead of using a full-blown cache like this. So when a previously in-flight request finishes and we do the "retest cache" logic, we can also pretend it is the last candidate and see if it would still apply.
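A rough sketch of what "pretend it is the last candidate and see if it would still apply" could look like (hypothetical names and shapes, assuming a simple line-prefix model of the document):

```typescript
// Hypothetical sketch: when a previously in-flight request resolves,
// treat its result like a "last candidate" and check whether it would
// still apply at the user's current cursor position.
interface Candidate {
    uri: string
    line: number
    // Text of the line, up to the cursor, when the completion was produced.
    linePrefix: string
    insertText: string
}

function stillApplies(candidate: Candidate, currentLinePrefix: string): boolean {
    // The user may have typed ahead; the completion still applies if the
    // newly typed characters are a prefix of what we were going to insert.
    if (!currentLinePrefix.startsWith(candidate.linePrefix)) {
        return false
    }
    const typedSince = currentLinePrefix.slice(candidate.linePrefix.length)
    return candidate.insertText.startsWith(typedSince)
}
```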
Force-pushed from 3267de2 to b30d537
Hm, not happy with this approach either:
I'll need to think about this a bit more (and also really start writing my reviews now 😐).
Memo to self: Check what happens if we return a completion result even though a new inline completion call was triggered. Expectation: VS Code will disregard the completion (unless it matches the new prefix, perhaps?). Idea: If that's the case, we might return all completion results instead of cancelling them and rely entirely on the "last candidate" logic. We just need to make sure these are not logged as being visible.
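One way the memo's idea could be sketched (hypothetical, not the actual implementation): track a request counter, return superseded results anyway so the editor can decide what to show, but flag them so they are never logged as visible.

```typescript
// Hypothetical sketch: instead of cancelling superseded requests,
// return their results and let the editor discard them; just make
// sure superseded results are not logged as visible by us.
let latestRequestId = 0

interface CompletionResult {
    insertText: string
    // Superseded results must not be counted in visibility logging.
    logAsVisible: boolean
}

async function provideCompletion(
    fetch: () => Promise<string>
): Promise<CompletionResult> {
    const requestId = ++latestRequestId
    const insertText = await fetch()
    // If another request started while we were waiting, this result
    // is superseded and must be excluded from visibility logging.
    const isSuperseded = requestId !== latestRequestId
    return { insertText, logAsVisible: !isSuperseded }
}
```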
Ohh this works great!
Todo
Force-pushed from 48ad7b6 to 994c63f
…st text would be used
…ible ghost text would be used" This reverts commit 0200c05.
Force-pushed from 994c63f to 075f90a
…he completion to contain all characters from the replaced section
This PR brings back a completion cache that is able to synthesize results even if the request has changed. It's much simpler than the old request cache and only keeps a maximum of 50 completions around.
When the cache is queried, we compare the document position with all cached completions for the same file and, if there's something salvageable, we synthesize a new completion based on the previous one.
The data structures are also much simpler now. Instead of a map of multiple arrays of completions, we flattened the structure into a single array of completions. In-flight request logging was likewise changed from a per-file map to a flat set.
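The description above can be sketched roughly as follows (hypothetical names and a simplified prefix-only document model; the real cache also tracks positions and more metadata):

```typescript
// Hypothetical sketch of the flattened cache: a single array of cached
// completions capped at 50 entries, instead of a per-file map of arrays.
interface CachedCompletion {
    uri: string
    // Document text up to the position the completion was requested at.
    prefix: string
    insertText: string
}

const MAX_ENTRIES = 50
const cache: CachedCompletion[] = []

function add(entry: CachedCompletion): void {
    cache.push(entry)
    if (cache.length > MAX_ENTRIES) {
        cache.shift() // drop the oldest entry
    }
}

// If the user kept typing characters a cached completion predicted,
// synthesize a shorter completion based off the cached one.
function query(uri: string, prefix: string): string | undefined {
    for (const entry of cache) {
        if (entry.uri !== uri || !prefix.startsWith(entry.prefix)) {
            continue
        }
        const typed = prefix.slice(entry.prefix.length)
        if (entry.insertText.startsWith(typed) && entry.insertText !== typed) {
            return entry.insertText.slice(typed.length)
        }
    }
    return undefined
}
```

For example, if `log("hi")` was cached at the position right after `console.` and the user has since typed `log(`, the query synthesizes the remainder `"hi")` instead of issuing a new request.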
Todo
I noticed what appears to be a regression that I'm still investigating. Sometimes the last-candidate response is not the actual last completion that was visible. You can see this happening at ~26 sec in this video:
Screen.Recording.2023-08-03.at.15.11.40.mov
Test plan
Type a console.log statement, leaving only a tiny pause after one character. Observe that the response is going to be marked as CacheAfterRequestStarted: