Reload part of the server configuration without restarts #3666

Merged
merged 14 commits into from
Mar 30, 2020

Conversation

@SomeoneToIgnore (Contributor) commented Mar 20, 2020

Partially addresses #2857

Closes #3751

Reloads all server configuration that's not related to VFS without restarts.

The VFS-related parameters are not considered, since VFS is planned to be rewritten or replaced in the future, and I suspect that with the current code, swapping the VFS and the file watchers on the fly would cause serious trouble.

I have to store and process the config request id separately, since the workspace/configuration response returns any[] (https://microsoft.github.io/language-server-protocol/specifications/specification-current/#workspace_configuration). If there's a better way to handle those responses, let me know.
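To illustrate the approach described above, here is a minimal sketch (the ConfigRequestTracker type and its method names are hypothetical, not the actual rust-analyzer code): because the workspace/configuration response payload is untyped, the only way to recognize such a response is to remember the id of the request we sent.

```rust
// Hypothetical sketch: track the id of the in-flight
// workspace/configuration request so its untyped `any[]` response
// can be recognized when it comes back.
struct ConfigRequestTracker {
    next_id: u64,
    pending: Option<u64>, // id of the in-flight config request, if any
}

impl ConfigRequestTracker {
    fn new() -> Self {
        ConfigRequestTracker { next_id: 0, pending: None }
    }

    /// Allocate an id for an outgoing workspace/configuration request
    /// and remember it so the response can be matched later.
    fn send_request(&mut self) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.pending = Some(id);
        id
    }

    /// Returns true if `id` belongs to the pending config request;
    /// clears the pending state on a match.
    fn is_config_response(&mut self, id: u64) -> bool {
        if self.pending == Some(id) {
            self.pending = None;
            true
        } else {
            false
        }
    }
}
```

The point of the sketch is only the bookkeeping: any response whose id does not match the remembered one is dispatched through the normal typed-response path.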

@SomeoneToIgnore SomeoneToIgnore changed the title Reload configuration Reload part of the server configuration without restarts Mar 20, 2020
@Veetaha (Contributor) left a comment

LGTM, though the protocol for config reloading is somewhat imperfect...

Review threads (now outdated and resolved) were opened on:

  • crates/rust-analyzer/src/main_loop.rs (3 threads)
  • editors/code/src/client.ts
  • editors/code/src/inlay_hints.ts (2 threads)
  • editors/code/src/main.ts
@SomeoneToIgnore force-pushed the reload-configuration branch 2 times, most recently from a327c27 to 3918ef3, on March 22, 2020 16:26
@SomeoneToIgnore (Contributor, Author)

I would really appreciate it if somebody could review and merge this before #3696 lands :)

Maybe @matklad ?

@Veetaha (Contributor) commented Mar 23, 2020

Ah, sorry, there will be a conflict ;)
[UPD] Hmm, I think there won't be one, since we modified different lines of code.

@edwin0cheng (Member) commented Mar 24, 2020

I just tested; at least "rust-analyzer.highlightingOn" does not work (no reload notification, and the highlighting does not change).

@matklad (Member) commented Mar 24, 2020 via email

@edwin0cheng (Member) commented Mar 24, 2020

@matklad Agreed.

OTOH, I also tested:

  • inlay hints and parameter hints, which both work correctly.
  • rust-analyzer.cargo-watch.enable works, but if I disable it during a cargo check, the progress bar displayed in VSCode does not clear.

@SomeoneToIgnore (Contributor, Author)

Thanks a lot for testing, nice catches.

I've moved the whole highlighting section under the reload check, so the prompt now appears for all of those settings.
If I got that right, with the semantic tokens feature on, we highlight the documents unconditionally, so that setting is part of the old highlighting mechanism.
I wonder, should we provide something similar for the new one too?

As for the check interruption, this gets interesting: it looks like we try to do a graceful shutdown in the Drop impl for the CheckWatcher.
I have no clue why it does not finish correctly though, since I don't know how it interacts with VS Code.
Might it be that we also need to send some response to VS Code on termination?

@eriktrom

@SomeoneToIgnore - I've never used the vscode framework, but give this a shot: in editors/code/src/config.ts, line 77, return Promise.resolve() so that when you pass the callback to the constructor's first event listener (in that file), this.refreshConfig() is always scheduled in the microtask queue, where promises resolve. You should then also be able to remove this.refreshConfig(); from the second line of the constructor, thus forcing it to be called in the same event-loop stack frame.

I've noticed this bug, so if you need help, holler; I don't mind helping out.

(And of course, I could be way wrong, and the scheduling could very well be benign.)

@Veetaha (Contributor) commented Mar 25, 2020

@eriktrom what bug are you talking about? What do you mean by passing the callback to the event listener? Maybe you meant registering the event listener callback, which is this line:
https://github.com/rust-analyzer/rust-analyzer/blob/6ad1a0711631d8017791a6dfe85bbe205d6c7414/editors/code/src/config.ts#L32
And I don't see any bugs here, nor how microtasks would be connected to them.

@SomeoneToIgnore if you feel that something on the frontend is broken and doesn't support the reloading, you can pass this task to me. Anyway, my guess is that the server doesn't send the end event when it interrupts the cargo watcher, so this line here ends with an error:
https://github.com/rust-analyzer/rust-analyzer/blob/6ad1a0711631d8017791a6dfe85bbe205d6c7414/crates/ra_cargo_watch/src/lib.rs#L381

because the communication channel was closed abruptly in the WatchThread::drop()
https://github.com/rust-analyzer/rust-analyzer/blob/6ad1a0711631d8017791a6dfe85bbe205d6c7414/crates/ra_cargo_watch/src/lib.rs#L398

From my perspective, it looks like there are two solutions:

  1. Add a Shutdown command here:
    https://github.com/rust-analyzer/rust-analyzer/blob/6ad1a0711631d8017791a6dfe85bbe205d6c7414/crates/ra_cargo_watch/src/lib.rs#L98-L101

And properly handle it here by sending the WorkDoneProgressEnd task:
https://github.com/rust-analyzer/rust-analyzer/blob/6ad1a0711631d8017791a6dfe85bbe205d6c7414/crates/ra_cargo_watch/src/lib.rs#L123-L129

And send the Shutdown command in drop().

  2. Don't drop the cargo watcher; instead, stop the current check process by sending a new Update command, which will also restart cargo with the new config. However, we need to add some API to update the watcher config. Maybe the Update command should carry the options themselves (i.e. make Update a tuple enum variant that includes CheckOptions)?

The second way seems more intuitive to me. What do you think, @kiljacken @matklad @SomeoneToIgnore?

@eriktrom commented Mar 25, 2020

Sorry, let me clarify succinctly:

fe379be#diff-519236f72123ff858d99b50e506bf6ccL56-L57

fe379be#diff-519236f72123ff858d99b50e506bf6ccL70-L87

In the constructor, the callback this.onConfigChange, registered on line 56 by vscode.workspace.onDidChangeConfiguration(this.onConfigChange, this, ctx.subscriptions), is async. But when it runs, being an async callback, we're already in a different turn of the event loop. So line 57 is called immediately, while this.onConfigChange is non-deterministically called by the event handler registered on line 56. I'd need to debug with a breakpoint, but a few things can go wrong: the instance could be destroyed in that time period; we might then re-register that async callback if it's a hot path, preventing the normal sync stack from flushing (because promises block while their queue has any microtasks), leading to a timeout that then requires a host extension restart.

I'm not sure how vscode does its internal scheduling, so I just commented offhand as though it didn't; take it with a grain of salt.

BTW - I love rust-analyzer :) Thanks so much for your work.

@kiljacken (Contributor)

Anyway, my guess is that the server doesn't send the end event when it interrupts the cargo watcher. So this line here ends with an error

https://github.com/rust-analyzer/rust-analyzer/blob/6ad1a0711631d8017791a6dfe85bbe205d6c7414/crates/ra_cargo_watch/src/lib.rs#L381

because the communication channel was closed abruptly in the WatchThread::drop()

https://github.com/rust-analyzer/rust-analyzer/blob/6ad1a0711631d8017791a6dfe85bbe205d6c7414/crates/ra_cargo_watch/src/lib.rs#L398

I believe your analysis is correct. I didn't really consider cancelling the check thread without starting a new run immediately afterwards, which masks the underlying issue. I believe we are robust against multiple end messages in the extension, so just making sure we send it in a shutdown command or something similar should be fine.

@edwin0cheng (Member)

And properly handle it here by sending the WorkDoneProgressEnd task:

@Veetaha @SomeoneToIgnore One thing I want to clarify here is that the Cargo progress bar is a custom progress bar, not the default VSCode progress bar.

@@ -93,6 +93,12 @@ export async function activate(context: vscode.ExtensionContext) {
activateHighlighting(ctx);
}
activateInlayHints(ctx);

vscode.workspace.onDidChangeConfiguration(
_ => ctx?.client?.sendNotification('workspace/didChangeConfiguration', { settings: "" }),
A Member commented:

Hm, is there perhaps some way to make the client library we use do this automatically?

@@ -20,6 +20,7 @@ export class Ctx {
const res = new Ctx(config, extCtx, client, serverPath);
res.pushCleanup(client.start());
await client.onReady();
client.onRequest('workspace/configuration', _ => [configToServerOptions(config)]);
A Member commented:

Similarly, is there some kind of built-in handler here, which just fetches the relevant section from the config?

@@ -624,7 +661,17 @@ fn on_notification(
Err(not) => not,
};
let not = match notification_cast::<req::DidChangeConfiguration>(not) {
Ok(_params) => {
A Member commented:

Hm, yeah, given that we already have a no-op handler to avoid errors, I think clients do send this notification already?

The Contributor (Author) replied:

It did not trigger for me when I tried to change the settings on master, so it looks like they don't.

But I also find this surprising; I would have expected it to work by default.

@SomeoneToIgnore (Contributor, Author)

I've finally found some time to dig and fix the progress bar issue.

We cannot use the approach discussed at #3666 (comment) due to the way the main loop works: it uses the task_recv from the current check watcher.
This means that we cannot send some message from the old watcher and just drop it: the message won't be received by anybody, hence no progress-bar cleanup will ever be done.

Instead, I clean up all diagnostics and stop the progress bar in CheckWatcherThread::run, which is done once when the new watcher starts.
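A minimal sketch of that fix, with hypothetical names (LspOutMessage and watcher_run are illustrative, not the actual types): since the old watcher's own cleanup messages can never reach the main loop, the new watcher begins its run by clearing whatever its predecessor left behind.

```rust
// Hypothetical message type standing in for what the watcher
// would forward towards the LSP client.
#[derive(Debug, PartialEq)]
enum LspOutMessage {
    ClearDiagnostics,
    ProgressEnd,
    ProgressBegin,
}

// Sketch of the start of the new watcher's run: first undo any
// leftover state from the previous (dropped) watcher, whose own
// end-of-progress message could not be received because the main
// loop only reads from the *current* watcher's channel.
fn watcher_run(out: &mut Vec<LspOutMessage>) {
    out.push(LspOutMessage::ClearDiagnostics);
    out.push(LspOutMessage::ProgressEnd);
    // ...then begin the fresh check run as usual.
    out.push(LspOutMessage::ProgressBegin);
}
```

The design choice here is to make startup of the new watcher idempotent cleanup, which sidesteps the whole question of delivering messages from a watcher that is being dropped.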

With this fixed, I've addressed all feedback up to here.

The last remaining piece of feedback is the one from @matklad about the automatic config request/response handling, the one that I outlined in the PR header as

if there's a better way to handle those responses, let me know.

I still have no idea if there's a better way, and I will gladly put a FIXME and fix it later instead of dealing with it now and holding up the PR for some uncertain amount of time.

@matklad (Member) commented Mar 30, 2020

bors r+

@bors bot commented Mar 30, 2020

bors bot merged commit 671926a into rust-lang:master Mar 30, 2020
@SomeoneToIgnore deleted the reload-configuration branch March 30, 2020 12:53
Development

Successfully merging this pull request may close these issues.

Way to toggle inlay hints without reload
6 participants