This repository has been archived by the owner on Jun 1, 2023. It is now read-only.

Speedup by going full async and manage tasks intelligently #163

Closed
bew opened this issue Sep 23, 2019 · 1 comment

Comments

@bew
Contributor

bew commented Sep 23, 2019

Scry is awfully slow when using neovim with LanguageClient (an LSP client) and deoplete (a completion framework).
Diagnostics are almost always out of date or simply not there at all (still processing other things I guess, or launching build after build...), and completion is buggy and does not work (not sure why yet).

Reading through the logs from LanguageClient, it looks like Scry is always behind the requests sent by the LSP client, because currently Scry processes each event sequentially (yes... I know... ><)

FYI: Link to the logs https://gist.github.com/bew/7f5a274a59b85277bdf4c5e7b91f7655

I've been reading about the RLS (Rust Language Server); it's really cool what they did ❤️
Overview: https://www.ncameron.org/blog/how-the-rls-works/
With more details: https://github.com/rust-lang/rls/blob/master/architecture.md


thinking in progress...

[for now it's pretty raw, I'll probably rewrite this over time]

We really need to separate request reading from request processing.

For example, for completion, neovim will send ~30 requests before stopping
=> not sure exactly why it stops (yet!), maybe Scry finally replied to the first request?
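
Something like this could be the starting point (a minimal Crystal sketch with made-up names, nothing from Scry's actual code; it assumes one JSON message per line for brevity, real LSP framing uses Content-Length headers):

```crystal
require "json"

# Sketch only: one fiber reads requests and pushes them onto a channel,
# another processes them, so reading never waits on processing.
requests = Channel(JSON::Any).new(64)

# Reader fiber: keeps draining the client even while a build is running.
spawn do
  while line = STDIN.gets
    requests.send(JSON.parse(line))
  end
  requests.close
end

# Worker (here: the main fiber).
pending : JSON::Any? = nil
loop do
  request = pending || requests.receive? || break
  pending = nil

  # Peek ahead: a burst of requests with the same method (e.g. ~30
  # textDocument/completion requests) collapses into the newest one.
  loop do
    select
    when newer = requests.receive?
      break if newer.nil?
      if newer["method"]? == request["method"]?
        request = newer      # the older request is stale, drop it
      else
        pending = newer      # different method: handle it on the next turn
        break
      end
    else
      break # queue is empty right now
    end
  end

  STDERR.puts "would dispatch #{request["method"]?} here"
end
```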

Will need a graph of (async) tasks, with dependencies and interruptions when input data is out of date.
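
Roughly what I have in mind for a task (hypothetical names, nothing implemented): each task knows its dependencies and checks a cancellation flag between steps, so the manager can interrupt it when its input changes.

```crystal
# Hypothetical sketch: a task that knows its dependencies and can be
# interrupted between steps when its input becomes outdated.
class Task
  getter name : String
  getter deps : Array(Task)
  property? cancelled = false

  def initialize(@name, @deps = [] of Task, &@work : ->)
  end

  def run
    return if cancelled?
    deps.each &.run        # naive: run dependencies first
    return if cancelled?   # the input may have changed while deps ran
    @work.call
  end
end

build = Task.new("build") { puts "building..." }
diagnostics = Task.new("diagnostics", [build]) { puts "publishing diagnostics" }

diagnostics.run              # runs build, then diagnostics
diagnostics.cancelled = true # e.g. the file changed again
diagnostics.run              # now a no-op; a real manager would re-schedule it
```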

For completion, IIRC we just parse the plain code and walk the AST nodes directly,
which means we don't have all the types / method names (e.g. those generated by getter, record, ...)

  • We should first run the TopLevelVisitor everywhere and cache its results for the stdlib, the other shards, and the other files (?)
    (is this what @asterite wanted to do?)
  • then parse / read (through an RPC API between Scry and the compiler?) the resulting AST nodes, or directly the types, constants, methods, macros (before & after expansion)
  • and merge type DBs when another file adds things to a cached type (e.g. Array), see the sketch below.
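
To illustrate the caching + merging idea only (these data structures are invented, they are not the compiler's): each source (stdlib, shard, project file) gets its own "type DB", and merging lets a file that reopens Array contribute only its extra methods.

```crystal
# Invented structures just to illustrate the merge, not the compiler's API:
# a "type DB" maps a type name to the method names a given source defines on it.
alias TypeDB = Hash(String, Array(String))

# Merge `extra` (e.g. a project file) on top of `base` (e.g. the cached stdlib DB).
def merge_type_dbs(base : TypeDB, extra : TypeDB) : TypeDB
  result = base.clone
  extra.each do |type_name, methods|
    (result[type_name] ||= [] of String).concat(methods).uniq!
  end
  result
end

stdlib_db = {"Array" => ["size", "push", "map"]}
file_db   = {"Array" => ["my_custom_each"], "MyType" => ["foo"]}

p merge_type_dbs(stdlib_db, file_db)["Array"]
# => ["size", "push", "map", "my_custom_each"]
```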

Other thoughts posted on Gitter:

With a separation between client request handling (the LSP protocol, etc.) and the actual tasks we have to do, we'll move to a more generic LSP server implementation, where Scry only needs to implement hooks for the various LSP events sent by the client.
Then we'll send tasks via channels to a task manager, which will manage the dependencies between tasks, stop running tasks (if possible) when their input is outdated (e.g. by a file change), etc...
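
A rough sketch of that split (all names invented, this is not existing code): a generic layer dispatches LSP events to hooks, the Scry-specific hooks enqueue jobs through a channel, and the task manager drops jobs whose file version is already stale.

```crystal
# All names here are invented, just to sketch the idea.
record Job, uri : String, version : Int32, action : Proc(Nil)

class TaskManager
  @latest_version = Hash(String, Int32).new(0)
  @jobs = Channel(Job).new(128)

  def file_changed(uri : String, version : Int32)
    @latest_version[uri] = version
  end

  def enqueue(job : Job)
    @jobs.send(job)
  end

  def run
    spawn do
      while job = @jobs.receive?
        # Drop jobs whose source file changed since they were created.
        next if job.version < @latest_version[job.uri]
        job.action.call
      end
    end
  end
end

# Generic layer: one hook per LSP event; Scry only implements these.
abstract class LSPHooks
  abstract def on_did_change(uri : String, version : Int32)
  abstract def on_completion(uri : String, version : Int32)
end

class ScryHooks < LSPHooks
  def initialize(@tasks : TaskManager)
  end

  def on_did_change(uri : String, version : Int32)
    @tasks.file_changed(uri, version)
    @tasks.enqueue Job.new(uri, version, ->{ puts "build + diagnostics for #{uri}" })
  end

  def on_completion(uri : String, version : Int32)
    @tasks.enqueue Job.new(uri, version, ->{ puts "completion for #{uri}" })
  end
end
```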

@bew
Contributor Author

bew commented Sep 22, 2020

I'm not doing Crystal anymore, I won't work on this anytime soon.

@bcardiff bcardiff closed this as completed Jun 1, 2023