
Make the debounce adaptive for validation job #1973

Merged
merged 3 commits into from
Dec 15, 2021

Conversation

jdneo (Contributor) commented Dec 13, 2021

TL;DR

This PR makes the debounce time of performValidation() adaptive. The change doesn't affect the latency of any single operation, such as the time to compute a completion list, but it can significantly boost the overall throughput of the JDT Language Server; the more powerful the machine, the larger the gain.

400ms debounce is too large

The current 400ms debounce time looks too large for the validation job. I added some logs to measure how long the performValidation() job takes.

Randomly writing some code in a Java file with 4000+ lines and 400+ methods:

| Windows, i7@2.9GHz, 32GB Mem | macOS, i5@2.9GHz, 8GB Mem |
| --- | --- |
| 7.04 ms | 48.97 ms |

If we check the time spent in JDTLanguageServer.waitForLifeCycleJobs(), you can see that threads spend a lot of time just waiting for the document lifecycle jobs.

Windows

(screenshot: thread wait times in waitForLifeCycleJobs())

macOS

(screenshot: thread wait times in waitForLifeCycleJobs())

Use Adaptive Debounce

I used a moving average window to make the debounce time adaptive, while keeping 400ms as the maximum debounce time.
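The idea above can be sketched as follows. This is a minimal, illustrative implementation, not the actual jdt.ls code: the class and method names are made up, and the window size of 10 is an assumption; only the 400ms cap comes from the PR.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Sketch of an adaptive debounce driven by a moving average window.
 *  Names and window size are illustrative assumptions, not the PR's code. */
class AdaptiveDebounce {
    private static final long MAX_DEBOUNCE_MS = 400; // upper bound kept from the PR
    private static final int WINDOW_SIZE = 10;       // assumed window size

    private final Deque<Long> window = new ArrayDeque<>();
    private long sum = 0;

    /** Record how long the last performValidation() run took. */
    void record(long durationMs) {
        window.addLast(durationMs);
        sum += durationMs;
        if (window.size() > WINDOW_SIZE) {
            sum -= window.removeFirst();
        }
    }

    /** Next debounce delay: the moving average, capped at 400 ms. */
    long debounceMs() {
        if (window.isEmpty()) {
            return MAX_DEBOUNCE_MS; // first round still uses the 400 ms default
        }
        return Math.min(sum / window.size(), MAX_DEBOUNCE_MS);
    }
}
```

With this shape, a machine where validation takes ~7 ms quickly converges to a ~7 ms debounce, while a slow run can never push the delay past 400 ms.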

Average Time Cost per LSP Request

To check the impact of this change, let's look at the average time cost to resolve each LSP request. The time for each request can be calculated from the trace:

(screenshot: LSP request trace)

Windows (unit: ms)

| | Cut Top & Bottom 5% | Cut Top & Bottom 10% |
| --- | --- | --- |
| 400ms Debounce | 229.36 | 176.05 |
| Adaptive Debounce | 125.53 | 99.98 |

macOS (unit: ms)

| | Cut Top & Bottom 5% | Cut Top & Bottom 10% |
| --- | --- | --- |
| 400ms Debounce | 1206.7 | 1143.27 |
| Adaptive Debounce | 1214.51 | 1073.7 |

Note: while coding, some LSP requests take a long time to compute (e.g., completing types starting with S for the first time), and they delay the following requests because only one request is dispatched at a time. So for the data in the tables above, I sorted all the timing samples and cut the lowest and highest ones, treating them as outliers.
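The trimming described above amounts to a trimmed mean: sort the samples, drop a fixed fraction at each end, and average the rest. A minimal sketch (the class and method names are illustrative, not the script used for the measurements):

```java
import java.util.Arrays;

/** Trimmed mean: drop the top and bottom `fraction` of sorted samples,
 *  then average what remains. Illustrative helper, not the PR's script. */
class TrimmedMean {
    static double trimmedMean(double[] samples, double fraction) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int cut = (int) (sorted.length * fraction); // samples dropped at each end
        double sum = 0;
        int count = 0;
        for (int i = cut; i < sorted.length - cut; i++) {
            sum += sorted[i];
            count++;
        }
        return sum / count;
    }
}
```

For example, `trimmedMean(new double[]{1, 2, 3, 4, 100}, 0.2)` drops 1 and 100 and averages 2, 3, 4, so a single pathological request no longer skews the result.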

Throughput

We can convert the tables above to throughput; the unit is the number of LSP requests handled per second.

Windows

| | Cut Top & Bottom 5% | Cut Top & Bottom 10% |
| --- | --- | --- |
| 400ms Debounce | 4.36 | 5.68 |
| Adaptive Debounce | 7.97 | 10 |

macOS

| | Cut Top & Bottom 5% | Cut Top & Bottom 10% |
| --- | --- | --- |
| 400ms Debounce | 0.83 | 0.87 |
| Adaptive Debounce | 0.82 | 0.93 |
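The conversion behind the throughput tables above is simply the reciprocal of the average latency, scaled to seconds (the helper name below is illustrative):

```java
/** Convert an average per-request latency in milliseconds into
 *  requests handled per second (illustrative helper). */
class Throughput {
    static double requestsPerSecond(double avgLatencyMs) {
        return 1000.0 / avgLatencyMs;
    }
}
```

For instance, the Windows 5% cells: 1000 / 229.36 ≈ 4.36 req/s for the 400ms debounce versus 1000 / 125.53 ≈ 7.97 req/s for the adaptive debounce.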

Below are two videos illustrating the impact of higher throughput; please note the time it takes to semantically highlight the variable aaa. Higher throughput makes the semantic highlighting faster (and other kinds of requests as well 😄).

400ms debounce

400_debounce_2.mp4

Adaptive debounce

The first round still uses 400ms as the initial debounce time; after several rounds the moving average becomes smaller and smaller, and you can see the highlighting get faster and faster.

adaptive_debounce_2.mp4

Signed-off-by: Sheng Chen <sheche@microsoft.com>
rgrunber (Contributor) left a comment

Definitely improves the responsiveness on typing. I can't see any issues with this so I would be fine with including it.
