Pylance runs out of memory while scanning files in workspace #4121
How many files are open in VS Code? Or do you have imports that pull in hundreds of files transitively?
Also, have you set

I think the 2GB heap limit comes from the instance of Node that VS Code launches to run its language servers. I don't think Pylance can override this; we should confirm my understanding. Even if we cannot increase the 2GB limit, Pylance still shouldn't be running out of heap space. As @heejaechang mentioned above, we have code in place to monitor memory usage and discard in-memory caches when we reach a high-water mark. It sounds like there may be some problem with that logic.
So, 4GB (64-bit VS Code) is the maximum, decided at compile time for Electron. The only way to get around it seems to be allowing users to provide their own Node.js, which doesn't have pointer compression enabled at build time, and running our language server on the user-supplied Node instead of VS Code's.
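To illustrate the difference (this is just a way to inspect a Node build, not a Pylance feature): a standalone node binary reports its V8 heap ceiling directly, and unlike Electron it is typically compiled without pointer compression, so `--max-old-space-size` can push past 4GB.

```shell
# Print the V8 heap size limit of whatever `node` is on PATH.
# (Guarded in case node is not installed.)
if command -v node >/dev/null 2>&1; then
  node -e 'const { heap_size_limit } = require("v8").getHeapStatistics();
           console.log((heap_size_limit / 2 ** 30).toFixed(2) + " GB")'
else
  echo "node not installed"
fi
```

Running the same one-liner with `NODE_OPTIONS="--max-old-space-size=8192"` set should report roughly 8GB on a stock Node build.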
I tested it (https://github.com/microsoft/pyrx/pull/3310) and it works as expected.
Let's figure out why our in-memory cache management is not working before we consider exposing an option like this.
Answering questions above:
Thanks for the additional information. Is most of your code base untyped? In particular, are return type annotations missing for most of the functions and methods in your code base? If so, a recent change I made in pyright could significantly reduce the amount of analysis performed in this situation. This change will be in this week's prerelease version of Pylance. Please give it a try and let us know whether it eliminates the heap issue you're seeing. Pyright is able to log additional heap usage information that would be useful in helping to track down the problem you're seeing. Please create a
Why yes! I'll try out the prerelease version.
Weirdly, I've already done that and I don't see the heap stats in the output 🤔 -- do I need to install the Pyright extension for this to work?
Okay, I logged in this morning and... it didn't crash. I checked It turns out that

Still probably good to diagnose why the crash is happening, though. @erictraut, I tried with the prerelease version after putting

Let me know about the
No, pylance is built on top of pyright.
This week's prerelease version of pylance hasn't been released yet. It should be released within the next 24 hours if everything goes as planned.
It's available now.
It probably works for the remote server since it uses real Node rather than the Electron that VS Code is based on. Electron is compiled with pointer compression on, so 4GB is a hard limit you can't cross at runtime. With VS Code (Electron), --max-old-space-size will work up to 4GB.
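A minimal sketch of that workaround for anyone following along (the flag and variable are standard Node/V8; the 4GB ceiling still applies under Electron-based VS Code, as noted above):

```shell
# Ask V8 for a ~4GB old-space heap. Child processes spawned from this
# shell (including VS Code and its language servers) inherit NODE_OPTIONS.
export NODE_OPTIONS="--max-old-space-size=4096"
echo "NODE_OPTIONS=$NODE_OPTIONS"

# Then launch VS Code from this same shell so the setting is inherited:
#   code .
```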
That's probably because our heap management code didn't kick in for your case. In your case, I believe it's not that you are running a solution-wide feature such as

Rather, I think the type evaluator is resolving alias (imported) symbols or getting symbols from other modules your file references, and that causes parsing/binding to happen for files your file depends on transitively (at least, that's what your log shows, I believe). In that case, it is not easy for us to dump caches (the type cache, binding info, and parse trees), since we are in the middle of type evaluation.

That being said, I think the change @erictraut mentioned above should help, since it reduces the number of files we analyze while type-evaluating (which in turn helps with what we show in completion/hover/signature help, etc., even if you don't consume type info directly). But if that doesn't work, using Node rather than Electron would be the only option, and in your case it sounds like the remote server already uses Node, so you should be good.
I've been having the same issue for quite a while. Here is the log I noticed after Pylance sent a notification that the server shut down:
Funny thing to notice is that it says

Answering questions above:

What happens:
@heejaechang, do we know whether it's the foreground or the background (indexing) process that's running out of memory? I can't tell from this log trace. I suspect it's the foreground, but it would be good to confirm.
@dynalz, if you think

@erictraut, I am not sure whether we can distinguish that, since it only outputs the process id, not the thread id. If the user enables
I'm suffering from the same issue... in particular, Pylance crashes analysing the

VS Code Pylance version 2023.4.10 (pyright d7616109)

Pylance trace log
@panilo, can you create a new issue? Also, we just released a new update; please try the latest prerelease, 2023.4.21.
The log shows OOM during type evaluation, so our current heap threshold won't work for this case. But pyright's recent change to type evaluation might mitigate the issue.
Same issue for me on a Windows 1x Pro VM; letting VS Code run ends with the VM going OOM. I've had the issue for more than 6 months (and on every recent patch).

This process uses 2.1GB of RAM (until my VM is OOMed).

So the problem persists on the 2023.4.41 release.
Same thing here with
Disabling

I'm not sure if there is something I can check so I can provide more info -- please let me know.
I have the same problem with aws_cdk.
@ben-elsen, the problem you're seeing with aws_cdk is likely this one, which should be addressed in last week's prerelease version of pylance. |
@erictraut, the description of the problem is exactly the same, but I tested the prerelease version of Pylance and the problem remains.
Is there a way to provide a setting to increase memory? I'm not sure whether this workspace is unusual or whether that's what is causing the issue, but the ability to allocate more resources would be nice either way.
I can go OOM with 14 files and 4 folders, for example -- maybe we're on the wrong track.
Same issue for AWS CDK. I'm running VS Code + WSL2 Ubuntu-20.04. The project is fairly small, ~30 files (max ~500 lines per file). I tried providing a larger

size. My WSL has 9GB of memory allocated, and I can see the Pylance server consuming 5.5GB.

Pylance pre-release version
TL;DR: downgrade Pylance to Though, I do see the
This is pretty frustrating, as it's really beneficial to have Pylance when working with CDK, and it worked without errors in previous versions. Hope this gets fixed soon.
Is there any way I can provide debug info to help solve the issue? I can safely say I only have this issue in this workspace; other workspaces work fine.
This issue has been closed automatically because it needs more information and has not had recent activity. If the issue still persists, please reopen with the information requested. Thanks.
Why was this closed? |
It was marked as requiring more information, and the bot auto-closes things if nobody responds. I'm not sure what information we were waiting for, so maybe it was marked that way by mistake. If you're having an out-of-memory crash with Pylance, it's better to open a new issue instead of adding to this one, though. Most memory issues require a specific repro, so it's likely these are all different.
Came here via a Google search after hitting a similar problem: Pylance takes a huge amount of memory, leading to OOM. I'd suggest that language servers (not only Pylance) expose a command in the command palette for memory profiling.
@crackevil, if you're having an OOM crash, could you open a new issue? We'd need to reproduce it in-house in order to debug it.
I've followed the pylance-is-crashing troubleshooting markdown and set the

But I still see the

command line, and Pylance is still crashing when the memory is used up.

Error Log

<--- Last few GCs --->

[3896855:0x7efdf0000ff0] 96711 ms: Scavenge 6349.3 (6503.1) -> 6335.2 (6503.8) MB, 10.06 / 0.00 ms (average mu = 0.999, current mu = 0.999) allocation failure;

<--- JS stacktrace --->

FATAL ERROR: MarkCompactCollector: young object promotion failed Allocation failed - JavaScript heap out of memory

2024-04-09 13:02:40.871 [info] 1: 0xcd8bd6 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [/usr/bin/node]
2024-04-09 13:02:40.872 [info] 2: 0x10aed20 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/usr/bin/node]
2024-04-09 13:02:40.872 [info] 3: 0x10af007 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/usr/bin/node]
2024-04-09 13:02:40.873 [info] 4: 0x12cdfe5 [/usr/bin/node]
2024-04-09 13:02:40.873 [info] 5: 0x1301eae void v8::internal::LiveObjectVisitor::VisitMarkedObjectsNoFail<v8::internal::EvacuateNewSpaceVisitor>(v8::internal::Page*, v8::internal::EvacuateNewSpaceVisitor*) [/usr/bin/node]
2024-04-09 13:02:40.874 [info] 6: 0x1310faa v8::internal::Evacuator::RawEvacuatePage(v8::internal::MemoryChunk*) [/usr/bin/node]
2024-04-09 13:02:40.874 [info] 7: 0x1311462 v8::internal::Evacuator::EvacuatePage(v8::internal::MemoryChunk*) [/usr/bin/node]
2024-04-09 13:02:40.875 [info] 8: 0x131177f v8::internal::PageEvacuationJob::Run(v8::JobDelegate*) [/usr/bin/node]
2024-04-09 13:02:40.875 [info] 9: 0x1f91cdd v8::platform::DefaultJobState::Join() [/usr/bin/node]
2024-04-09 13:02:40.876 [info] 10: 0x1f922b3 v8::platform::DefaultJobHandle::Join() [/usr/bin/node]
2024-04-09 13:02:40.876 [info] 11: 0x130e815 v8::internal::MarkCompactCollector::EvacuatePagesInParallel() [/usr/bin/node]
2024-04-09 13:02:40.877 [info] 12: 0x131d5a0 v8::internal::MarkCompactCollector::Evacuate() [/usr/bin/node]
2024-04-09 13:02:40.877 [info] 13: 0x131defd v8::internal::MarkCompactCollector::CollectGarbage() [/usr/bin/node]
2024-04-09 13:02:40.878 [info] 14: 0x12e2c3e v8::internal::Heap::MarkCompact() [/usr/bin/node]
2024-04-09 13:02:40.878 [info] 15: 0x12e399d v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::internal::GarbageCollectionReason, char const*) [/usr/bin/node]
2024-04-09 13:02:40.879 [info] 16: 0x12e4209 [/usr/bin/node]
2024-04-09 13:02:40.879 [info] 17: 0x12e4818 [/usr/bin/node]
2024-04-09 13:02:40.880 [info] 18: 0x1a34081 [/usr/bin/node]
Here is how I mitigated memory usage -- VS Code by default scans too many files (if you're not careful).

BTW, if you still have a

Other than that: use something like the settings below (aimed at Python, but I presume you can figure out which folders to add for your project).

"files.exclude": {
"**/*-report.*/**": true,
"**/*.egg-info/**": true,
"**/.coverage/**": true,
"**/.git/**": true,
"**/.mypy_cache/**": true,
"**/.pytest_cache/**": true,
"**/.tox/**": true,
"**/__pycache__/**": true,
"**/htmlcov/**": true
},
"files.watcherExclude": {
"**/*.egg-info/**": true,
"**/.egg-info/**": true,
"**/.git/**": true,
"**/.mypy_cache/**": true,
"**/.pytest_cache/**": true,
"**/.tox/**": true,
"**/.venv/**": true,
"**/__pycache__/**": true,
"**/htmlcov/**": true
},
"python.analysis.exclude": [
"**/__pycache__",
"**/.git",
"**/.mypy_cache",
"**/.pytest_cache",
"**/.tox",
"**/htmlcov",
"**/*.egg-info"
],
"python.analysis.ignore": [
"**/.vscode/**",
"**/__pycache__/**",
"**/.egg-info/**",
"**/.git/**",
"**/.mypy_cache/**",
"**/.pytest_cache/**",
"**/.tox/**",
"**/.venv/**",
"**/*.egg-info/**",
"**/htmlcov/**",
"**/site-packages/**/*.py"
],
"search.exclude": {
"**/*.egg-info/": true,
"**/*.html": true,
"**/.git": true,
"**/.mypy": true,
"**/.tox": true,
"**/htmlcov/": true,
"**/repos/**": true,
"**/site-packages/**": true,
"**/test-report.xml": true
    }

Bonus if you use the

"coverage-gutters.ignoredPathGlobs": "**/{node_modules,venv,.venv,vendor,.git,.tox,.*_cache,__pycache__}/**",

If you don't set that, performance of
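As a sanity check that prune-style excludes like those above really shrink the set of files a scanner has to visit, here is a small self-contained demo (the scratch directory and file names are made up for illustration):

```shell
# Build a throwaway tree with one real source file and two cache dirs,
# then compare the file count with and without pruning the caches.
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/__pycache__" "$tmp/.mypy_cache"
touch "$tmp/src/app.py" \
      "$tmp/__pycache__/app.cpython-311.pyc" \
      "$tmp/.mypy_cache/cache.json"

total=$(find "$tmp" -type f | wc -l | tr -d ' ')
kept=$(find "$tmp" \( -name __pycache__ -o -name .mypy_cache \) -prune \
        -o -type f -print | wc -l | tr -d ' ')
echo "all files: $total, after pruning caches: $kept"
# → all files: 3, after pruning caches: 1
rm -rf "$tmp"
```

The same idea scales up: every cache directory matched by an exclude glob is a whole subtree the language server never has to enumerate.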
Environment data
Repro Steps
Sadly I don't have repro steps 😥 -- it's quite a large private repo.
Expected behavior
Pylance hits a 2GB heap limit after loading only about 700 files from a private repo, which I'd estimate is under a million lines of code. I would expect Pylance to let you increase the memory limit to, say, 8GB or 16GB. I'm not 100% sure this would fix the problem, but it makes sense that users might want to hold more than 2GB of type information in a single workspace.
FWIW, it is possible to load this repo into PyCharm, although you have to raise the memory limit to 8GB from PyCharm's default of 4GB.
If that doesn't help, I'd also expect to be able to fully ignore whole packages outside of my particular area. Even though I've set
python.analysis.include
to my own team's directory, Pylance still crawls the entire dependency tree of all of my files.

Actual behavior
Pylance provides no way to increase the limit above 2GB (at least not that I can find). I have tried adding
export NODE_OPTIONS="--max-old-space-size=8192"
to my.bashrc
but that didn't change anything.

Logs
I've redacted the file names from the repo in this snippet, but it should give an idea. The main thing to notice is that there are no "whale" files blowing it up; it's just hundreds of reasonably sized files that all need to be loaded.
pylance_out_redacted.txt