
x/tools/gopls: limit number of returned completions (maybe just unimported) #36001

Closed · heschik opened this issue Dec 5, 2019 · 12 comments

@heschik heschik (Contributor) commented Dec 5, 2019

With unimported completions on, triggering completions with no prefix is very slow. Depressingly, the main problem is generating the diffs for all the imports that could be added -- after https://golang.org/cl/205678, we need to do some fairly expensive parsing and formatting for each one. Profile:

[profile image: profile001]

There's really no point in returning every single package in the module cache as a completion candidate. I think we should definitely limit those.

I think @muirdm has also said that Emacs may have trouble handling large numbers of completions regardless. So maybe we should just put a cap on the entire result set?

@gopherbot gopherbot added this to the Unreleased milestone Dec 5, 2019
@heschik heschik modified the milestones: Unreleased, gopls v1.0 Dec 5, 2019
@muirdm muirdm commented Dec 5, 2019

If I remember correctly, I was referring to a massive number of candidates slowing gopls down rather than client-side issues. (To be fair, Emacs is probably the worst at handling giant responses, too.)

I feel like at this point we have enough per-candidate work that it might make sense to switch to the "resolveCompletion" approach, where we defer work until the user selects the candidate. That would include deferring documentation lookup and the computation of any edits. It also opens the door to the "I'm feeling lucky" improvement, where placeholder snippets come pre-populated with the top completion candidate(s).

For this particular case we could also cache the edits per unimported package rather than computing them for every candidate (computing them separately is wasteful if you have 100 candidates from the same unimported package).

Regardless of performance issues, though, I also think there are too many unimported package completions to be useful (I'm talking about the package name completions, not the package member completions). We could use stricter matching, such as requiring an exact prefix match for the first N characters of the package name. If the package list is sorted usefully beyond just the location of the package (e.g. by import frequency), then dropping all but the top N unimported packages may also be a good option.

Arbitrarily limiting the overall completion response may also be good; I don't think I have ever scrolled through 50 candidates just to select the 51st. The key is limiting candidates before we have done all the work to create them. Note that this only works when we are doing server-side filtering: if the user disables gopls's fuzzy matching and deep completions, we still need to return all the candidates, since the editor performs further filtering.

@stamblerre stamblerre commented Dec 5, 2019

I agree with everything above, though I am nervous about using completionItem/resolve. I should dig up an old CL of mine that implemented it; I remember finding that it made completion "feel" slower, because resolve always gets triggered for the first completion candidate. A single textDocument/completion request then becomes a textDocument/completion plus a completionItem/resolve. I'm not sure how to avoid this, though.

Edit: I seem to recall discussing this a while ago, and maybe @muirdm suggesting a third alternative of something like completionItem/select, but I can't remember if this was mentioned on an issue to the LSP authors or just on the Go tracker.

@muirdm muirdm commented Dec 5, 2019

> So then, a single textDocument/completion request becomes a textDocument/completion + completionItem/resolve

Can we pre-resolve the top N candidates so it is only slow if the user scrolls? Maybe put in a special data cookie to make the "resolve" action a no-op.

@stamblerre stamblerre commented Dec 5, 2019

That sounds like a good start. We should test whether that approach still "feels" slower because of the extra round trip. Let me find that CL so we can start working on this... I think it must have been when I was adding completion documentation.

@stamblerre stamblerre commented Dec 5, 2019

I'm not sure if this is still useful, but here was my implementation of completionItem/resolve from July 😄. It's only on patchset 5 because after that I changed my mind (CL 184721).

@heschik heschik commented Dec 5, 2019

@muirdm I'm not too familiar with this area and I don't want to spend a ton of time on it right now. Do you want to do something soon? If not, I'll probably just cap completions at 50 or whatever and we can do something more sophisticated later.

@zikaeroh zikaeroh commented Dec 5, 2019

Out of curiosity, are you referring to capping the final completion list, or just some aspects of it?

I know that completions (at least with my experience with VS Code + LSP) will do things like "send one completion request after the ., then filter it on what the user continues to type without re-requesting the completion". Does gopls do some trick to prevent that behavior and always be able to provide a new completion list at the cost of needing more calls?

@muirdm muirdm commented Dec 5, 2019

Something simple for now sounds good to me.

Unsolicited thoughts: capping the unimported packages will be simplest, assuming you can just take the first N matching packages and then stop.

To cap all candidates, you need to take the top N highest scorers, so you would probably want a heap or similar data structure to efficiently track the top N scoring candidates as we find them. Or maybe just defer the per-candidate work until after sorting and capping, but that probably isn't too easy either.

@muirdm muirdm commented Dec 5, 2019

> Does gopls do some trick to prevent that behavior and always be able to provide a new completion list at the cost of needing more calls?

Yes, we set the "incomplete" flag in the completion results, which makes the editor re-request completions after every keystroke. This behavior is required for server-side fuzzy matching and deep completions.

@gopherbot gopherbot commented Dec 5, 2019

Change https://golang.org/cl/210198 mentions this issue: internal/lsp/source: cap number of unimported completions

@gopherbot gopherbot commented Dec 6, 2019

Change https://golang.org/cl/210200 mentions this issue: internal/lsp/source: optimize computeFixEdits

gopherbot pushed a commit to golang/tools that referenced this issue Dec 6, 2019
Building unimported completions requires re-parsing and formatting at least
some of the file for each one, which adds up. Limit it to 20; I expect
people will just type more rather than scroll through a giant list.

Updates golang/go#36001.

Change-Id: Ib41232b91c327d4b824e6176e30306abf356f5b4
Reviewed-on: https://go-review.googlesource.com/c/tools/+/210198
Run-TryBot: Heschi Kreinick <heschi@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rebecca Stambler <rstambler@golang.org>
gopherbot pushed a commit to golang/tools that referenced this issue Dec 6, 2019
In the common case that a file has imports, we're going to diff just the
import block. That means that ApplyFixes doesn't need to produce the
whole formatted file, which is a huge speedup. We will do more work twice
for files with no imports, but those are presumably pretty short.

Hat tip to Muir for pointing towards this in
https://go-review.googlesource.com/c/tools/+/209579/2/internal/imports/imports.go#87
even if I didn't catch it.

Updates golang/go#36001.

Change-Id: Ibbeb4d88c6505eac26a36994de514813606c8c79
Reviewed-on: https://go-review.googlesource.com/c/tools/+/210200
Run-TryBot: Heschi Kreinick <heschi@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rebecca Stambler <rstambler@golang.org>
@stamblerre stamblerre modified the milestones: gopls v1.0, gopls/v0.3.0 Jan 15, 2020
@heschik heschik commented Jan 15, 2020

This was done in the commits above.

@heschik heschik closed this Jan 15, 2020