Suggested configuration to speed-up the analysis #6905

Open
EdmundsEcho opened this issue Dec 16, 2020 · 21 comments
Labels
C-support Category: support questions S-unactionable Issue requires feedback, design decisions or is blocked on other work

Comments

@EdmundsEcho

Great tool. I would not have made as much progress on this "first real" Rust project as quickly as I did without it.

I'm running into issues where the analysis interrupts the screen refresh; e.g., I have to imagine what I'm typing for 2-3 words, pause, and repeat. I'm using a 6-core desktop Mac Pro with 6 x 8 GB of RAM. Are there any configuration settings I might toggle when I want to prioritize accordingly? That is, is there a specific feature that is particularly "greedy" right now?

PS: the analyzer version should be the latest: dbd0cfb (I'm using coc-rust-analyzer to keep it updated).

@bjorn3
Member

bjorn3 commented Dec 16, 2020

Rust-analyzer should never block the editor, not even when it is stuck in an infinite loop. The Language Server Protocol is an asynchronous protocol: the editor submits requests and the language server responds at a later moment, possibly several seconds later when, for example, searching a whole project. How much CPU and memory does the rust-analyzer-mac process use while the editor is blocked?

@EdmundsEcho
Author

... up to 10% CPU; more often, less than that (~3-4%). Memory is 950 MB, with 19 threads. For what it's worth, only 2 of the 6 cores fire up.

@bjorn3
Member

bjorn3 commented Dec 16, 2020

950 MB shouldn't be problematic at all, especially with how much RAM you have. I hadn't noticed previously that you were using (neo)vim, though. I wonder if it behaves differently from VS Code with regard to blocking on the language server.

@EdmundsEcho
Author

I agree the memory seems OK; it's certainly not what prior issues seemed to report. Yes, I'm using neovim. It does not block with other language servers, and non-blocking behavior is a much-touted feature of nvim. The folks at coc-rust-analyzer were my first stop; the "halting" (if you will) was something they had no control over.

@GopherJ

GopherJ commented Dec 16, 2020

@EdmundsEcho I think you can try adding more swap; that's how I make rust-analyzer work with 8 GB of memory...

but in general I think 16 GB of memory is a good fit for rust-analyzer if the project is large

Are you sure you didn't activate any debug configs?

@EdmundsEcho
Author

@GopherJ Thank you. If it's not too much to ask, could you point me to how to increase the swap?

Re: debug settings, I only have the "checkOnSave" setting set to true.

@EdmundsEcho
Author

As best I can tell, it clearly hangs while computing the types. In neovim, they are displayed using "virtual text".

The types are cleared from the display in insert mode. Once back in normal mode, the types get recomputed, and the display hangs during this computation.
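
If the inlay-hint ("virtual text") recomputation is what stalls the redraw, one quick test is to turn the hints off and see whether the hang disappears. A minimal coc-settings.json sketch, assuming coc-rust-analyzer exposes this client-side toggle (the exact key has changed across versions, so treat the name as an assumption and check the extension's README):

    {
        // assumed coc-rust-analyzer option: hide rust-analyzer inlay hints
        // (the type/chaining hints rendered as virtual text)
        "rust-analyzer.inlayHints.enable": false
    }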

- E

@GopherJ

GopherJ commented Dec 16, 2020

@EdmundsEcho could you try whether this config works for you? https://github.com/GopherJ/cfg/blob/master/coc/coc-settings.json

I experienced lag on my 8 GB laptop, but it's not as serious as yours ;)

@EdmundsEcho
Author

@GopherJ Thank you for the config. The speed has improved. One setting I suspect might have helped was the 200 ms throttle: "diagnostic.messageDelay": 200.
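
For reference, the relevant fragment of that file looks roughly like this (a coc-settings.json sketch; as I understand it, diagnostic.messageDelay is a coc.nvim client option, not a rust-analyzer setting, so it only applies to coc users):

    {
        // coc.nvim option: wait 200 ms before showing the diagnostic
        // message for the current cursor position
        "diagnostic.messageDelay": 200
    }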

A broader FYI: nightly fires up two instances of rust-analyzer. The config you provided specified "nightly"; I was using "stable", which only had one instance.

Is that by design?

- E

@lnicola
Member

lnicola commented Dec 19, 2020

Is that by design?

Yes. The second instance is used only to load and run proc macros. It shouldn't use any resources outside of that -- on my Linux system with an empty project it's using less than 3 MB RAM. You can disable it with rust-analyzer.procMacro.enable.
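
For coc-rust-analyzer users, a minimal coc-settings.json sketch of that switch (the same rust-analyzer.procMacro.enable key is used by other clients such as VS Code; note that disabling it means proc-macro-generated code is no longer expanded):

    {
        // disable the separate proc-macro server process
        "rust-analyzer.procMacro.enable": false
    }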

@lnicola lnicola added the S-unactionable Issue requires feedback, design decisions or is blocked on other work label Dec 19, 2020
@EdmundsEcho
Author

EdmundsEcho commented Dec 19, 2020

That is consistent with my experience on macOS as well. I only mentioned it because it was a contrast between releases, so I wanted to confirm. No issues, and I assume it could be a way to improve performance.

Regarding the original thread, the conclusion of "unactionable" may be exactly right, provided that the halting experienced in the nvim context is completely out of the analyzer's "hands".

I don't know how well the team at coc-rust-analyzer understands the "not supposed to interrupt screen rendering/typing input" expectation. If we aren't having that experience, something is not optimal, perhaps even wrong. Finally, evidence that the combined performance is awry is the fact that the improved experience correlated with using the throttling setting.

What is the best way to coordinate?

@lnicola
Member

lnicola commented Dec 19, 2020

It's not a difference between releases; it's a difference introduced by @GopherJ's configuration. Disabling proc macros can speed up the analysis, at the cost of worse accuracy -- not that it matters, since the server tends to crash easily these days.

@EdmundsEcho
Author

Agreed, it was the @GopherJ configuration that had the impact... nothing to do with the code base. However, why would this setting impact the rendering at all? Do you know what I mean?

...at the cost of worse accuracy -- not that it matters, since the server tends to crash easily these days

That must be frustrating for someone working on the code base. What you all have accomplished up to now is great stuff. Bugs and all, I'll take it. Not only has my productivity increased, but my enjoyment and understanding have too. That's "priceless"... again, bugs and all :)

- E

@lnicola
Member

lnicola commented Dec 20, 2020

Unfortunately, I don't know much about coc.nvim, but LSP clients should not really block waiting for the server, as @bjorn3 said above.

Does it happen in every project, or can you share the code? I don't use coc.nvim, but I can test in Code and nvim/LanguageClient-neovim.

@GopherJ

GopherJ commented Dec 23, 2020

I think it only happens on laptops; I didn't experience this on my machine with 16 GB of RAM (specs screenshot omitted).

However, I experienced similar lag on a business laptop like the Huawei MateBook 2019 (8 GB RAM + 8th-gen i7).

Also, I think it's important to test with a minimal vimrc + the latest neovim.

@flodiebold flodiebold added the C-support Category: support questions label Mar 31, 2022
@coding610

coding610 commented Jun 16, 2023

I am seeing very similar issues to @EdmundsEcho's, but using nvim-lsp instead of coc (I'm on neovim). I'm on a Mac with these specs:
CPU: Intel i5-7267U (4) @ 3.10 GHz, GPU: Intel Iris Plus Graphics 650, RAM: 16 GB.

I've tried to translate the coc settings to nvim-lsp settings, but I have seen no success so far. Any help with these nvim-lsp settings for rust_analyzer? How can I change the "diagnostic.messageDelay" setting that @EdmundsEcho suggested?

Thanks

EDIT:
This is my current (very minimal) rust_analyzer config (which I copied from some blog, I think):

lspconfig['rust_analyzer'].setup({
    on_attach=on_attach,
    settings = {
        ["rust-analyzer"] = {
            imports = {
                granularity = {
                    group = "module",
                },
                prefix = "self",
            },
            cargo = {
                buildScripts = {
                    enable = true,
                },
            },
            procMacro = {
                enable = true,
            },
        }
    }
})

My neovim setup originates from NvChad, but I have configured it to my liking and added LSP support and other things.
Here is the repo:
https://github.com/coding610/nvimchad-dotfiles/tree/master

@lnicola
Member

lnicola commented Jun 16, 2023

diagnostic.messageDelay sounds like a coc-specific setting; it won't apply to other clients. Also, your build script and proc macro settings are the defaults now, so you don't need to set them.

For better perf, I like:

    "rust-analyzer.diagnostics.experimental.enable": true,
    "rust-analyzer.cachePriming.enable": false,
    "rust-analyzer.checkOnSave": false

And you might also be interested in https://github.com/pr2502/ra-multiplex.

@coding610

coding610 commented Jun 16, 2023

@lnicola how would I write the settings that you thought would increase performance in my config? I tried to do something like this:

lspconfig['rust_analyzer'].setup({
    on_attach=on_attach,
    settings = {
        ["rust-analyzer"] = {
            imports = {
                granularity = {
                    group = "module",
                },
                prefix = "self",
            },
            cargo = {
                buildScripts = {
                    enable = true,
                },
            },
            procMacro = {
                enable = true,
            },
            diagnostics {
                experimental {
                    enable = true
                }
            },
            ...
        },
    }
})

But it gave an error saying "undefined global diagnostics", "undefined global experimental"...

@lnicola
Member

lnicola commented Jun 16, 2023

If you look at the lines above, it should probably be diagnostics = { experimental = { enable = true } }. You're missing the = signs.

@coding610

coding610 commented Jun 16, 2023

Thank you,
With these settings I am seeing much less lag.

EDIT:
Settings:

lspconfig['rust_analyzer'].setup({
    on_attach=on_attach,
    settings = {
        ["rust-analyzer"] = {
            imports = {
                granularity = {
                    group = "module",
                },
                prefix = "self",
            },
            cargo = {
                buildScripts = {
                    enable = true,
                },
            },
            procMacro = {
                enable = true,
            },
            diagnostics = {
                experimental = {
                    enable = true
                }
            }
        },
    }
})
