

Large amount of CompletionItems returned by CompletionItemProvider causes significant lag #18682

Closed
Janne252 opened this issue Jan 17, 2017 · 13 comments
Labels: bug (Issue identified by VS Code Team member as probable bug), perf, suggest (IntelliSense, Auto Complete)

Comments


Janne252 commented Jan 17, 2017

If a CompletionItemProvider implementation returns a large number of CompletionItems, it causes the text editor to lag while typing. What can be done to reduce the lag?

  • VSCode Version: 1.8.1
  • OS Version: Windows 10 Pro, 1607

Steps to Reproduce:

The "issue" can be reproduced with this test extension: Janne252/vscode-test-completionItemProvider (see the readme for additional steps)

Some background:

I'm working on an extension for a video game's internal scripting language. This language has a rather large API which results in a total of 6313 completionItems from various kinds of sources, which I have dumped to a single .json file for demonstration.
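For illustration, one way to keep per-request work low in a setup like this is to convert the JSON dump into CompletionItems once and reuse the same array for every request, so each keystroke only pays for the editor-side filtering. This is a hypothetical sketch, not the actual extension's code: the `apiDump` entries and names like `Player_GetSquadCount` are invented stand-ins, and a plain local interface stands in for the real `vscode.CompletionItem`.

```typescript
// Minimal stand-in for vscode.CompletionItem (illustrative only).
interface CompletionItem { label: string; kind: number }

// Stand-in for the parsed .json API dump described above (invented entries).
const apiDump: { name: string; kind: number }[] = [
  { name: "Player_GetSquadCount", kind: 2 },
  { name: "World_GetPlayerCount", kind: 2 },
];

let cachedItems: CompletionItem[] | undefined;

// Build the item array once; every later call returns the same instance,
// so provideCompletionItems never re-parses or re-allocates per keystroke.
function getCompletionItems(): CompletionItem[] {
  if (!cachedItems) {
    cachedItems = apiDump.map(e => ({ label: e.name, kind: e.kind }));
  }
  return cachedItems;
}
```

With thousands of items, avoiding re-allocation per request keeps the extension-host side cheap; the remaining cost is transferring and filtering the list, which is what the rest of this thread is about.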

@jrieken jrieken added the info-needed Issue requires more information from poster label Jan 18, 2017

jrieken commented Jan 18, 2017

@Janne252 Can you be a little more precise, please? What lag do you mean? I see a short 'Loading...' period, which is basically the time it takes to call all providers and build the completion model, but from then on I don't see any lag when typing.

[animated GIF: jan-18-2017 10-52-48]

Please open the dev tools and run a CPU profile for the scenario that is slow for you. That should allow us to understand this better.


Janne252 commented Jan 18, 2017

@jrieken Absolutely. Here are two recordings that should demonstrate the issue:

Without the massive CompletionItemProvider:

With the massive CompletionItemProvider:

The lag is most noticeable while typing multiple short words, like when passing local variables to a function call. Perhaps I'm exaggerating the issue a bit.

Edit: Since the GIFs seem to automatically play and loop, I suppose it's best to open each individual .gif in a new tab and watch it from the beginning.

@jrieken jrieken added suggest IntelliSense, Auto Complete and removed info-needed Issue requires more information from poster labels Jan 18, 2017

jrieken commented Jan 18, 2017

OK, I can reproduce it, but not as easily as you can. Do you have the editor.quickSuggestionsDelay setting set?

I'll see how this can be made faster on our side; basically, we are still filtering the list of 6000+ items when the next character is about to be inserted. We should be faster (or cancel earlier), but on your side you could also return fewer suggestions if that is possible.
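One hedged option along those lines: the VS Code API lets a provider return a CompletionList marked `isIncomplete`, so the extension can hand back only items matching the current word prefix and let the editor re-query the provider as typing continues, rather than filtering the full 6000+ items editor-side. The sketch below uses minimal local stand-in types instead of the real `vscode` module, and `MAX_ITEMS` is an illustrative cap, not a documented value:

```typescript
// Local stand-ins for vscode.CompletionItem / vscode.CompletionList.
interface CompletionItem { label: string }
interface CompletionList { items: CompletionItem[]; isIncomplete: boolean }

const MAX_ITEMS = 200; // illustrative cap on items returned per request

function provideFiltered(all: CompletionItem[], prefix: string): CompletionList {
  const p = prefix.toLowerCase();
  const items = all
    .filter(i => i.label.toLowerCase().startsWith(p))
    .slice(0, MAX_ITEMS);
  // Marking the list incomplete asks the editor to call the provider again
  // on further typing instead of filtering a stale, oversized list itself.
  return { items, isIncomplete: items.length === MAX_ITEMS };
}
```

The trade-off is more round trips to the extension host in exchange for far fewer items crossing the IPC boundary on each one.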


Janne252 commented Jan 18, 2017

No, I don't have editor.quickSuggestionsDelay set. It would be great if the process could be made faster. It's a bit problematic: the public API is ridiculously large, and with standard Lua everything is exposed in the global namespace, so reducing the number of returned suggestions is not really possible.


jrieken commented Jan 18, 2017

Yeah, I will make it faster and have already found one or two things. To validate my theories, it would be helpful if you could create a profile and share it with me. Do the following:

  • hit 'F1 > Developer Tools'
  • in there select 'Profiles' and start a 'CPU profile'
  • type with suggestions and lag for a few seconds
  • stop the profiler, save the profile, and share it with me

Thanks

@Janne252

CPU-20170118T182734.zip

For reference, I typed the following text and then waited for about 5 seconds before ending the recording:
This is a test and the test goes on and on and on. Now pausing.


jrieken commented Jan 18, 2017

Things we should look into

  • debounce filtering while typing
  • first filter, then sort (when retrieving results)
  • run object track logic inside deserialise logic (when retrieving results)
  • improve IPC communication performance (500ms are spent sending/receiving items)


jrieken commented Jan 23, 2017

@Janne252 So far I have made the communication between the main side and the extension host a lot faster. You should feel this when suggestions are freshly requested for a new word. I have also made small improvements in how often and how we filter. That should make a small difference when we filter the 6000+ suggestions down against the word you are typing... The changes will be in tomorrow's Insiders build, and I'd be happy to get some early feedback.

@jrieken jrieken modified the milestones: February 2017, January 2017 Jan 23, 2017
@Janne252

@jrieken How do I know when a new Insiders build is out?
The date suggests that I have today's version.


jrieken commented Jan 25, 2017

That's good. There is an update every day, and you should already have some performance improvements. It would be helpful if you could do another profiler run.

@Janne252

@jrieken Here you go: CPU-20170125T172131.zip

jrieken added a commit that referenced this issue Jan 31, 2017
@jrieken jrieken modified the milestones: February 2017, March 2017 Feb 20, 2017
@jrieken jrieken removed this from the March 2017 milestone Mar 6, 2017

jrieken commented Mar 20, 2017

The match/score performance will be tackled in #22153

@jrieken jrieken added the bug Issue identified by VS Code Team member as probable bug label Apr 21, 2017

jrieken commented Sep 7, 2017

Closing this, as scoring, sending/receiving items, and finally garbage collection spying have been improved. This issue has served as a great vehicle for tackling various problems, and I hope things are better now. I will keep the sample extensions for the future and will continue with performance work here and there.

@jrieken jrieken closed this as completed Sep 7, 2017
@vscodebot vscodebot bot locked and limited conversation to collaborators Nov 17, 2017