dynamic completions and signature helps #1507
Conversation
Yeah, we should definitely do something like this! Without having reviewed the PR in detail, here are some generic thoughts:
It's useful for all values the runtime knows about (i.e. globals), even in a local scope. I think I'd prefer to always go through the LS, but I feel like we already talked about that at some point and Zac wasn't super happy with that.
Here is one scenario why I think it would be really good to go through the LS: say a user has the following file

```julia
struct Foo
    a::Int
    b::String
end
x = Foo(2, "test")
```

and then executes that in the REPL. So now we can use the information that `x` is a `Foo` with fields `a` and `b`. Now say the user edits the file and updates the definition of `Foo` without re-executing it: the runtime information no longer matches the document.
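To make that concrete, here is a minimal sketch of the kind of runtime query that would back such completions, assuming the snippet above has been executed in the current session (`fieldnames` just stands in for whatever query the PR actually uses):

```julia
# Assumes the `struct Foo` / `x = Foo(2, "test")` code above has already been
# executed in this session.
fieldnames(typeof(x))   # (:a, :b), which is what runtime-backed completion would offer

# If the user now edits the file so `Foo` has different fields but does not
# re-execute it, the session still answers (:a, :b): the runtime information
# has become stale relative to the document.
```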
So I think one of the main worries was that it would complicate the whole communication story a lot. I think that is true, unless we have something like my cancellation framework. If I get that into the shape that I imagine it should have, then I think it would make this kind of thing not trivial, but much, much simpler. For example, it should make it fairly simple to implement, say, the request handler for hovers in such a way that at the beginning of the request handler two async tasks are started: one that does the existing static analysis in the LS, and a second one that sends a message to the REPL asking for its information. In this cancellation framework we would be able to add a timeout to the request to the REPL, so that we essentially have a very simple way of making sure that the REPL request doesn't block us: if it comes back in time, great, we merge the information, but if it doesn't (maybe because the REPL crashed, or is blocked, or any other reason), then we'll just return the information from the static analysis side.
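Sketching that shape in a few lines (purely illustrative; `static_hover`, `repl_hover`, and `hover_with_runtime` are hypothetical stand-ins, not the extension's actual functions):

```julia
# Minimal sketch of "race the REPL against static analysis with a timeout".
static_hover() = "static docs for `foo`"                 # LS static analysis (hypothetical)
repl_hover()   = (sleep(0.2); "runtime docs for `foo`")  # request to the REPL (may hang)

function hover_with_runtime(; timeout = 0.5)
    static_task = Threads.@spawn static_hover()
    repl_value  = Ref{Union{Nothing,String}}(nothing)
    repl_task   = Threads.@spawn (repl_value[] = repl_hover())

    static = fetch(static_task)
    # Wait up to `timeout` seconds for the REPL; otherwise fall back to static info.
    if timedwait(() -> istaskdone(repl_task), timeout) === :ok && repl_value[] !== nothing
        return string(static, "\n", repl_value[])   # both came back in time: merge
    else
        return static                               # REPL blocked, crashed, or slow: static only
    end
end
```

A real cancellation-token version would additionally signal the REPL task to stop working, but the fallback behaviour is the same.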
Hiya, I've only read through (but was unable to run/experiment with) the PR so please excuse me if I've missed some thing(s). My main concern relates to how consistency is maintained between what is within a document and what is executed in the REPL, to ensure that any given completion is relevant. I don't immediately see how dynamic information for an arbitrary variable can be kept consistent with the document as it is edited. These seem like possibly resolvable though fairly complicated issues, but I do feel fairly strongly that the information we provide for a file (or the project as a whole) should not be state dependent (i.e. on the REPL session).
Do we need full cancellation token support before we can merge this? I'm worried that the REPL process in general is a very unreliable message processor, and in that case it seems like we should have things like timeouts on requests and a general cancel token across the RPC boundary to make sure this doesn't provide a weird experience when the REPL process is blocked. I have ongoing work in https://github.com/davidanthoff/CancellationTokens.jl and julia-vscode/JSONRPC.jl#58 that will make that quite simple, but it will still take a while to finish. Just another point: I think this will be especially helpful for notebook kernels, as one tends to have a lot of global code in there. Finally, I could also still imagine a design where the LS process sends messages to the kernel (REPL or notebook) and then merges things before sending them back to the client. I think once we have cancel token support with timeouts that could also work.
No. Requests to the REPL time out after 0.5s, which makes for a pretty interactive experience.
Agreed. For now this is REPL-only, though.
Yes, but that comes with a lot of additional complexity, so I'd be in favor of getting this in as-is for now.

In this PR I would like to show my ideas for making more use of dynamic information, so that we can offer enhanced features that the LS would find hard to provide because of its static nature.
For now I have implemented dynamic completions and signature help, which hopefully either don't conflict with the existing LS features or gracefully replace them.
completions
we can offer property/field/dict completions using information from the running session.
For example, once the relevant code has been loaded into the session, we can get completions for the fields and properties of a live object and for the keys of a live `Dict`, as in the sketch below.
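A hypothetical illustration (not the PR's actual example or implementation) of the kind of loaded code and the runtime queries behind such completions:

```julia
# Hypothetical example: once this has been executed in the session, the
# runtime can answer questions that static analysis of globals usually cannot.
struct Point
    x::Float64
    y::Float64
end

p = Point(1.0, 2.0)
d = Dict("alpha" => 1, "beta" => 2)

fieldnames(typeof(p))   # (:x, :y)        -> field completions after `p.`
propertynames(p)        # (:x, :y)        -> respects a custom `getproperty`
keys(d)                 # "alpha", "beta" -> key completions inside `d["`
```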
They won't conflict with the existing completions from the LS, and as far as I understand, field/dict completions are impossible to implement in the LS, and property completions are hard to offer in a way that works for most cases (like scripting with lots of globals).
signature help
signature help can be enhanced using user code and type inference: we can dynamically infer the return type of a call and narrow down the possible applicable methods.
(Note that the `Array{Float64, 3}` return type gets inferred, and only 2 method signatures are offered.)
The signature help built from dynamic information is more "correct" than what the LS offers whenever the user code is loaded and type inference succeeds, and in that case we can eagerly replace the LS signature help with the dynamic one.
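For reference, a hedged sketch (illustrative only; the PR's actual mechanism may differ) of the runtime queries that make this kind of signature help possible:

```julia
# With the user's code loaded, the session can use Julia's own reflection and
# inference to answer signature-help queries from concrete argument types.
v = rand(24)   # a live value in the session

# Infer the return type of the call being typed, e.g. `reshape(v, 2, 3, 4)`;
# this should infer Array{Float64, 3}:
Base.return_types(reshape, (typeof(v), Int, Int, Int))

# Narrow the candidates to the methods applicable to these argument types,
# instead of listing every method of `reshape`:
methods(reshape, (typeof(v), Int, Int, Int))
```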
This PR is obviously "after JuliaCon" stuff.
I also welcome general discussion about this style of using the running session.
I hope these dynamic features can live nicely with the LS, and that we can do yet more interesting things, like helping linting with dynamic information. But I may be missing some points. If we like them, I may also want to mention this kind of enhancement idea at JuliaCon.