
Add streaming support in vscode #105870

Open
heejaechang opened this issue Sep 1, 2020 · 13 comments
Labels
editor-contrib Editor collection of extras feature-request Request for new features or functionality quick-open Quick-open issues (search, commands)

@heejaechang

heejaechang commented Sep 1, 2020

This is conceptually a duplicate of #20010. I am creating a new issue since the previous one is closed, and that one used custom streaming support rather than the official streaming (partial results) support added to LSP.

The Language Server Protocol has added partial results (streaming) support (https://microsoft.github.io/language-server-protocol/specifications/specification-current/#partialResults), and VS has already added support for it.

As far as I know, the support was added to LSP because it was one of the top complaints from VS users, especially from people with large codebases.

For example, finding symbols in the workspace (workspace/symbol) can easily take several seconds on a big codebase and return a lot of results. Streaming takes the same amount of time to produce the full results, but most of the time users get what they want before the full results arrive. So rather than making users sit through the worst-case wait every time, streaming reduces the time users have to wait.
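The partial-results flow described above works by the client sending a `partialResultToken` with the request and the server streaming chunks back via `$/progress` notifications, with the final response then carrying no further results. A minimal, framework-free Python sketch of the message shapes; the chunk size and symbol payloads are invented for illustration:

```python
def stream_partial_results(symbols, partial_result_token, chunk_size=50):
    """Yield LSP `$/progress` notifications that deliver workspace/symbol
    results in chunks, per the partial-result protocol. Once all chunks are
    sent, the final response to the request itself carries no new results."""
    for start in range(0, len(symbols), chunk_size):
        yield {
            "jsonrpc": "2.0",
            "method": "$/progress",
            "params": {
                "token": partial_result_token,
                "value": symbols[start:start + chunk_size],
            },
        }

# Hypothetical payloads; a real server sends WorkspaceSymbol objects.
symbols = [{"name": f"sym{i}"} for i in range(120)]
messages = list(stream_partial_results(symbols, "tok-1", chunk_size=50))
```

Each notification is keyed by the token the client supplied, so the client can route chunks to the right in-flight request.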

VS has FULL SUPPORT for streaming
(Find All References, Document Symbols, Document Highlights, pull-model diagnostics, Workspace Symbols, and Completion).

It even supports partial results (streaming) for document-level features such as "Document Symbol" and "Document Highlights".

We (the Python LS for VS Code, Pylance) are not asking for that much, but we would at least like support for workspace-wide features such as "Workspace Symbol", "Find All References", and "Call Hierarchy".

We do understand that streaming (partial results) requires incremental updates of the UI, and that is not easy (flickering, ordering, sizing, grouping, etc.). I believe VS had those issues as well when it started adding streaming support (even before LSP), but once the work was done, I believe it got a lot better at supporting large codebases.

@heejaechang
Author

Tagging @milopezc, who did the VS LSP streaming support, and @dbaeumer as FYI.

@heejaechang
Author

heejaechang commented Sep 1, 2020

Tagging @gundermanc, who did the VS-side work of updating the UI incrementally for VS Search. He can provide more detail, but it does things like:

  1. wait for the user to stop typing before updating the result list
  2. update the results at a fixed interval (sorting, merging)
  3. freeze items around the user-selected item, etc.

That has given VS a very nice experience so far. You can try it with "Ctrl+Q" in VS.
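Step 1 in the list above (waiting for the user to stop typing) is a classic debounce. A minimal sketch in Python, with timestamps passed in explicitly so the behavior is deterministic; the 200 ms quiet period is an assumed value, not VS's actual one:

```python
class Debouncer:
    """Hold back the query until the user has been idle for `quiet_ms`
    milliseconds; only then is the result list updated."""

    def __init__(self, quiet_ms=200):
        self.quiet_ms = quiet_ms
        self.last_keystroke_ms = None
        self.pending = None

    def on_keystroke(self, query, now_ms):
        # Each keystroke restarts the quiet period with the latest query.
        self.last_keystroke_ms = now_ms
        self.pending = query

    def poll(self, now_ms):
        """Return the query to run if the quiet period has elapsed, else None."""
        if self.pending is not None and now_ms - self.last_keystroke_ms >= self.quiet_ms:
            query, self.pending = self.pending, None
            return query
        return None
```

In a real editor the poll would be driven by a timer; here it is explicit so the logic is easy to verify.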

@gundermanc
Member

For workspace/symbol specifically, Visual Studio actually supports two slightly different versions of a streaming interaction with slightly different behaviors and guarantees, but both:

  • Attempt to more or less maintain the user's selection as new items are found.
  • Allow the addition of items for several seconds after the results list is visible.

Roslyn, for example, uses this guarantee to do a long-running brute-force search through all symbols in the project, evaluating in order of likelihood of a match (recent projects, close-by projects, etc.). This means that the majority of the time the symbol is found instantly, but in some cases you may have to wait a few seconds for it to be found.
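The likelihood-ordered brute-force search described above can be sketched as a generator that walks pre-prioritized sources and yields matches as they are found, so the common case surfaces almost immediately. The source contents and the substring-matching rule are invented for illustration:

```python
def prioritized_search(query, sources):
    """Search symbol sources most-likely first (e.g. recently used
    projects before distant ones), yielding matches as they are found."""
    for source in sources:  # `sources` is pre-ordered by likelihood
        for symbol in source:
            if query.lower() in symbol.lower():
                yield symbol

# Hypothetical sources: recently used symbols first, distant ones last.
recent = ["FooService", "FooController"]
faraway = ["BarFoo", "Baz"]
hits = list(prioritized_search("foo", [recent, faraway]))
```

Because it is a generator, a UI could display each hit as it is yielded rather than waiting for the full scan.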

GoTo:

The symbol- and file-navigation-only box in VS.

  • Collects items one by one, sorts them after 100 items are in queue, or 100 ms.
  • Lets items get added above or below the user's selection. If above, the viewport is adjusted to keep the same items in view.
  • Changing your search query clears the results list and runs a new search.
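The "sort after 100 items or 100 ms" policy in the first bullet can be sketched as a small batcher. This is a guess at the shape, not VS's implementation; timestamps are passed in explicitly to keep it deterministic:

```python
class SortedBatcher:
    """Collect items one by one and flush a sorted batch once `max_items`
    are queued or `max_ms` milliseconds have passed since the first
    queued item, mirroring the batching policy described above."""

    def __init__(self, max_items=100, max_ms=100):
        self.max_items = max_items
        self.max_ms = max_ms
        self.queue = []
        self.first_ms = None

    def add(self, item, now_ms):
        """Queue an item; return a sorted batch if a flush triggered, else None."""
        if self.first_ms is None:
            self.first_ms = now_ms
        self.queue.append(item)
        if len(self.queue) >= self.max_items or now_ms - self.first_ms >= self.max_ms:
            return self.flush()
        return None

    def flush(self):
        batch, self.queue, self.first_ms = sorted(self.queue), [], None
        return batch
```

Batching like this bounds both list churn (at most one re-sort per flush) and latency (no item waits longer than the time threshold).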


Ctrl+Q/VS Search:

Search aggregation 'global search' box in VS for menu commands, options, templates, files, and symbols.

  • Searches many more types of items, and is therefore much more complex than GoTo's model.
  • Lets items get added above or below the user's selection. ~2 seconds after the user stops typing, everything at or above the selected item in the list is 'frozen', and all newly found items are added below the frozen section, ensuring that the desired item isn't moved out from under the mouse or selection.
  • Maintains a queue for each source of items that collects items one by one, sorts them after 100 items are in the queue (or after 100 ms), and merges them into the visible list.
  • Maintains a cache of previous search results, which are re-sorted while the next search is in progress, enabling instantaneous feedback if the search query is changed after the results list is visible.
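The frozen-section behavior in the second bullet (rows at or above the selection keep their place; newly found items only appear below) might look like the sketch below. Sorting the unfrozen tail is an assumption on my part:

```python
def merge_with_frozen_section(visible, selected_index, new_items):
    """Once the list is 'frozen', rows at or above the selection keep
    their positions; new items are merged only into the region below,
    so the selected row never moves out from under the cursor."""
    frozen = visible[: selected_index + 1]
    below = sorted(visible[selected_index + 1:] + new_items)
    return frozen + below

# Selection on index 1 ("a"); "c" arrives later and lands below the frozen rows.
merged = merge_with_frozen_section(["b", "a", "d"], 1, ["c"])
```

The trade-off is that late-arriving items can never outrank the frozen section, which is exactly the stability guarantee the user experience needs.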

Doc with more detailed aggregator architecture for Microsoft internal viewers: https://microsoft-my.sharepoint.com/:w:/p/chgund/EWPw43pfMghFj9LCcj2K8gYBTJKGoz5ETKWKIRraTeaqnA?e=1awZO6

[Screen capture: Ctrl+Q search in VS]

@jrieken jrieken added editor-contrib Editor collection of extras feature-request Request for new features or functionality quick-pick Quick-pick widget issues labels Sep 2, 2020
@jrieken jrieken added this to the Backlog milestone Sep 2, 2020
@heejaechang
Author

heejaechang commented Oct 12, 2020

TypeScript has the same cancellation/partial-results issue. Once users invoke "Find All References", they have to wait until it is done, and all results show up at once.

[Screen capture: TypeScript Find All References]

@heejaechang
Author

Another ask from users related to streaming: microsoft/pylance-release#2236

Basically, the user has a multi-language workspace and does not want one LS to slow down the whole "go to symbols" experience. Streaming support will improve the experience since VS Code won't be blocked by any single LS.
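The multi-server scenario can be illustrated with a small merge that orders results purely by arrival time, so a slow server's results simply show up later instead of holding everything back. The server names and timings here are hypothetical:

```python
def interleave_by_arrival(server_batches):
    """Merge results from several language servers in arrival order.
    `server_batches` maps a server name to (arrival_ms, symbol) pairs.
    No server's results wait on another server finishing."""
    arrivals = [
        (arrival_ms, server, symbol)
        for server, batch in server_batches.items()
        for arrival_ms, symbol in batch
    ]
    return [(server, symbol) for _, server, symbol in sorted(arrivals)]

# A fast Python server and a hypothetical slow Java server.
results = interleave_by_arrival({
    "pylance": [(5, "foo"), (10, "bar")],
    "jdtls": [(2000, "Baz")],
})
```

With a blocking aggregator, nothing would render until the 2-second server finished; with arrival-order streaming, the fast server's symbols are displayable immediately.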

@luabud
Member

luabud commented Jan 21, 2022

@jrieken while providing full streaming support is a super complex task, would it be feasible to consider it for workspace symbols only, at least as a "first priority"? 🤔

@heejaechang
Author

I think workspace symbols is the only feature that is currently workspace-wide but has no semantic scoping. For example, other workspace-wide features such as Find All References, Rename, Peek References, and Rename Files only work on files with semantic dependencies (in other words, Find All References on a Python symbol won't search Java, TS, etc. files even if they are in the same workspace), but Workspace Symbol will search all of them.

So, if we need to choose one, Workspace Symbol is the feature that requires streaming, at the least.

@nickzhums
Member

nickzhums commented Jan 25, 2022

Sharing some thoughts from the Java team. Streaming support is extremely useful in several cases:
1. Code completion: completion responsiveness and performance are always among the top asks from developers, and streaming support would greatly enhance this aspect. Supporting incremental loading of completion suggestions would have a huge impact on developer satisfaction.
2. Reference views (Find All References, workspace symbols, etc.): this is often brought up by Java developers with complex codebases, which are becoming increasingly common (since more professional Java developers are adopting VS Code). With those big projects, we find asks in this area quite frequent in our surveys. This is aligned with the Python team's thoughts.
3. Supporting streaming in general will improve UX in many areas.
Would love to see how we can support this :)

perrinjerome added a commit to perrinjerome/pygls that referenced this issue Jul 10, 2022
It seems there was confusion between WorkDoneProgressParams and
WorkDoneProgressOptions.

[WorkDoneProgressParams](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#workDoneProgressParams)
is the type used for the token in methods and notifications; it is the
token itself, so it has type str | int. This type was correct; this is
mentioned just for context.

[WorkDoneProgressOptions](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#workDoneProgressOptions)
is the type used at registration; it indicates whether work-done
progress is supported, so it is a boolean. This type was wrong.

It seems that, as of today, vscode does include workDoneProgress in requests:
 - microsoft/vscode-languageserver-node#528 (comment)
 - microsoft/vscode#105870
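The distinction the commit message draws can be shown as two Python dataclasses. The snake_case field names are my rendering in the style of pygls-like bindings; on the LSP wire the names are `workDoneToken` and `workDoneProgress`:

```python
from dataclasses import dataclass
from typing import Optional, Union

# In request params, the progress token is the token itself: int | str.
ProgressToken = Union[int, str]

@dataclass
class WorkDoneProgressParams:
    """Mixed into request params; carries the token used to report progress."""
    work_done_token: Optional[ProgressToken] = None

@dataclass
class WorkDoneProgressOptions:
    """Mixed into server capabilities at registration; a plain boolean flag
    saying whether the provider supports work-done progress."""
    work_done_progress: Optional[bool] = None
```

So the params type holds a token value, while the options type is only a capability flag; conflating the two is exactly the bug the commit fixes.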
@ejgallego

ejgallego commented Feb 16, 2023

In our case coq-lsp, support for streaming textDocument/diagnostic would be very useful.

@ljw1004

ljw1004 commented Jul 10, 2023

What is the current status of this request, please?

For the Hack LSP, we wish to add streaming support to textDocument/references. For some of our large projects, we can compute the first few references within milliseconds, but it takes two minutes at P95 until we've computed the final references. We would love to display the first few references quickly.

@findleyr

findleyr commented Aug 7, 2023

In gopls there's a tension in our completion logic between latency and search depth. Being able to stream results would eliminate that tension.

@DanTup
Contributor

DanTup commented Aug 8, 2023

In gopls there's a tension in our completion logic between latency and search depth. Being able to stream results would eliminate that tension.

Dart has this dilemma too. Users expect code completion to include all symbols, including those that have not yet been imported into the current file (they are auto-imported when selected), but this full list is much more expensive to compute. The result is that we assign a time budget and stop building completions when it is exhausted (because otherwise completion could appear very slow), but this means completion results can appear inconsistent (particularly on slower machines).

What I'd really like to do is send all of the locally imported items first (which can be computed very quickly), because most likely the user wants something from that list, but still be able to provide the larger list. Using isIncomplete=true doesn't help much, because truncating the list based on which items were discovered first (rather than by rank) doesn't give good results, and truncating the list at all without computing the entire list could exclude exact matches (and the user cannot type any additional characters to trigger further searching).
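The budget-plus-isIncomplete workaround described above can be sketched as below, using an item count in place of a wall-clock budget to keep the example deterministic; the function name and parameters are hypothetical:

```python
def budgeted_completions(local_items, global_items, budget_items):
    """Return a completion response under an item budget. Local
    (already-imported) items go first; if the budget runs out before the
    global (not-yet-imported) list is exhausted, the response is marked
    isIncomplete so the client re-queries as the user types."""
    items = list(local_items)
    remaining = budget_items - len(items)
    items += global_items[:max(remaining, 0)]
    is_incomplete = remaining < len(global_items)
    return {"isIncomplete": is_incomplete, "items": items}
```

This reproduces the inconsistency the comment complains about: which global items survive depends on the budget, not on rank, and an exact match can be cut off.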

@JamyDev

JamyDev commented Feb 6, 2024

For our internal LSP at Uber, we would love to have this as well, to avoid delays from the native LSPs in favor of our much faster cached results. cc @isidorn
