indexing runs out of memory for large projects #1219
Inviting @licam to this issue in order to provide additional feedback and test early builds, once available.
…o smaller chunks to reduce overall memory needs
… after bulk parsing to close zip files and free up memory
…s and arrays all the time + reusing common sets instead of creating new set objects all the time
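The last commit above hints at interning: sharing one canonical instance of each distinct set instead of allocating a fresh set object per symbol. A minimal sketch of that idea, with hypothetical names (`SetInterner`, `intern` are illustrative, not the project's actual API):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical interner: equal sets are canonicalized to one shared,
// immutable instance, so the duplicates become garbage-collectable and
// only one copy of each distinct set stays on the heap.
public class SetInterner {
    private static final Map<Set<String>, Set<String>> CACHE = new ConcurrentHashMap<>();

    public static Set<String> intern(Set<String> s) {
        Set<String> immutable = Set.copyOf(s); // defensive immutable copy
        // returns the previously cached equal set if one exists
        return CACHE.computeIfAbsent(immutable, k -> k);
    }
}
```

Because `computeIfAbsent` keys on `equals`/`hashCode`, two symbols carrying equal sets end up pointing at the same object.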
@licam The latest pre-release builds for VSCode should already contain a few early optimizations. Would be interesting to hear whether that runs any better in your environment and with your large projects. You can switch to the pre-release in VSCode directly when you click on the
Here are some early rough results, measuring the progress here (using my sample project): [version-by-version measurement table not recoverable]
Both measurements used the default max heap setting of 512m for the language server process. The exact numbers will vary quite a bit, depending on the size of the individual source code files and the number of symbols generated for the concrete project, of course. If you have larger projects than this, you have to increase the heap space for the language server.
@martinlippert Sounds promising. We will test and adopt the new version once it is released. Thank you!
The indexing infrastructure is running out of memory when indexing projects with a large number of source files (as reported in #1212).
We need to improve the implementation to reduce the overall memory consumption, especially to decouple the memory consumption from the size of the project or the number of projects being parsed.
Step 1: we need to chunk the set of source files into well-defined smaller chunks in order to allow the garbage collector to free up memory while indexing.
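Step 1 can be sketched as follows. The names (`ChunkedIndexer`, the chunk size of 1000) are assumptions for illustration, not the project's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split the full file list into fixed-size chunks and index chunk
// by chunk, so intermediate parse results become unreachable (and hence
// collectable) between chunks instead of accumulating for the whole project.
public class ChunkedIndexer {

    static final int CHUNK_SIZE = 1000; // assumed batch size

    public static <T> List<List<T>> chunk(List<T> items, int size) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            chunks.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return chunks;
    }
}
```

Indexing then iterates over `chunk(allFiles, CHUNK_SIZE)`, parsing and storing symbols one chunk at a time, which decouples peak memory from total project size.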
Step 2: we need to clean up the lookup environment of the parser after each parsing attempt in order to avoid leaking memory or keeping zip files open.
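The shape of step 2 can be illustrated with a hypothetical `LookupEnvironment` type (the names and fields here are illustrative, not the parser's real API): the point is that cleanup runs unconditionally after a bulk parse, so zip file handles are closed even if parsing throws.

```java
import java.util.List;

// Hypothetical stand-in for the parser's lookup environment, which caches
// open zip/jar files while resolving types.
class LookupEnvironment implements AutoCloseable {
    boolean open = true;

    void parse(String file) { /* resolve types against cached jars */ }

    @Override
    public void close() {
        open = false; // close zip files, drop cached state
    }
}

class BulkParser {
    // try-with-resources guarantees close() runs after every bulk parse,
    // whether it completes normally or fails partway through.
    static LookupEnvironment parseAll(Iterable<String> files) {
        LookupEnvironment env = new LookupEnvironment();
        try (env) {
            for (String f : files) {
                env.parse(f);
            }
        }
        return env; // returned only so the cleanup can be observed
    }
}
```

Tying the cleanup to a `try`-with-resources block (rather than an explicit call at the end) is what prevents a failed parse from leaking the environment.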