reduce compile memory usage peaks #1267
xml_tree.cpp is actually a very simple file. Splitting it wouldn't improve anything. The problem is that it depends on nearly all grammars, which are very expensive to compile.
Yep, I agree with @herm on the solution for xml_tree.cpp.
I just had a look at the json file: it also includes two grammars, so the solution would be the same.
Are there any ideas on how to reduce the size of these grammars in general? Currently the linker takes several minutes and uses about 1.5 GB of RAM because the input files are that large. Nine of the ten largest .os files contain grammars; together they take up more than 600 MB. Somehow that feels wrong, considering that the grammars are not very complicated.
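For what it's worth, the standard C++ tool for trimming per-object duplication of the same grammar is an explicit instantiation declaration (`extern template`): every including translation unit skips instantiating the template, and one dedicated .cpp pays the cost once, so only one .os file carries the grammar code. A minimal sketch with hypothetical names (`path_expression_grammar` here is illustrative, not necessarily the real Mapnik type):

```cpp
// path_expression_grammar.hpp
#include <string>
#include <boost/spirit/include/qi.hpp>

template <typename Iterator>
struct path_expression_grammar
    : boost::spirit::qi::grammar<Iterator, std::string()>
{
    path_expression_grammar();  // defined out of line, in the .cpp below
    boost::spirit::qi::rule<Iterator, std::string()> expr;
};

// Promise consumers that this instantiation lives in exactly one object file.
extern template struct path_expression_grammar<std::string::const_iterator>;
```

```cpp
// path_expression_grammar.cpp -- the one TU that instantiates the grammar.
#include "path_expression_grammar.hpp"

template <typename Iterator>
path_expression_grammar<Iterator>::path_expression_grammar()
    : path_expression_grammar::base_type(expr)
{
    expr = +boost::spirit::qi::char_;  // trivial stand-in for the real rules
}

template struct path_expression_grammar<std::string::const_iterator>;
```

Note the header still drags in the Spirit headers, so parse time in consumers doesn't fully go away; the win is in instantiation time, memory, and object size.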
I can now compile xml_tree.cpp, where before it consistently segfaulted cc1plus (gcc 4.5.3).
…feature_collection_parser.cpp - refs #1267
…ng transform grammar to cpp - refs #1267
Compile time of xml_node.cpp is now ~8 seconds for me after finishing moving the color and path_expression grammars to cpp files. Closing. Will re-open if I see any future failures on launchpad or from user reports.
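For anyone landing here later, the pattern behind "moving the grammars to cpp files" is, roughly, hiding the Spirit machinery behind a plain function so that consumers such as xml_node.cpp never see the grammar headers at all. A minimal sketch under assumed names (`parse_color` and the toy hex grammar are illustrative, not Mapnik's actual API):

```cpp
// color_parser.hpp -- all a consumer needs to include; no Spirit in sight.
#include <string>
bool parse_color(std::string const& css, unsigned& rgba);
```

```cpp
// color_parser.cpp -- the only TU that pays the Spirit compile cost.
#include "color_parser.hpp"
#include <boost/spirit/include/qi.hpp>

bool parse_color(std::string const& css, unsigned& rgba)
{
    namespace qi = boost::spirit::qi;
    std::string::const_iterator first = css.begin(), last = css.end();
    // toy grammar: "#rrggbbaa" parsed as a single 32-bit hex value
    bool ok = qi::parse(first, last,
                        '#' >> qi::uint_parser<unsigned, 16, 8, 8>(),
                        rgba);
    return ok && first == last;
}
```

With this split, a file that previously included the grammar compiles quickly, and the peak memory is confined to color_parser.cpp, which can be built on its own.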
Reading this, it seems there is still a little room for improving memory usage during compilation.
xml_tree.cpp and json/feature_collection_parser.cpp have consistently been causing build failures on launchpad. This is new: we've been doing nightly builds on launchpad for over a year and have only recently seen out-of-memory problems due to these two files. It happens in 1 of every 10-15 builds, likely when the build is dispatched to a machine with less memory. My hunch is that these files are peaking well over 2 GB of memory. Ideally we could try to split up the translation units somehow.
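One concrete reading of "split up the translation units", sketched under assumed file and type names: if feature_collection_parser.cpp instantiates two heavy grammars, giving each grammar its own .cpp means the compiler only ever holds one grammar's instantiation state in memory at a time, roughly halving the peak.

```cpp
// feature_grammar_instance.cpp -- instantiates only grammar #1
#include "feature_grammar_def.hpp"  // hypothetical header with the full definition
template struct feature_grammar<std::string::const_iterator>;
```

```cpp
// geometry_grammar_instance.cpp -- instantiates only grammar #2
#include "geometry_grammar_def.hpp"
template struct geometry_grammar<std::string::const_iterator>;
```

The total work is the same, but a memory-constrained builder sees two smaller peaks instead of one large one.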