We're generating way too many object files (one per class). This shows up in how long compilation and linking take through LLVM, but even without LLVM the system linker will take a very long time to link this many object files.
This especially becomes a problem as we produce separate object files for method impls, non-heap data, class objects, and heap data. The number of object files for even a simple project could number in the thousands, with many of the object files being very small.
I propose that we move to a coarser mapping between types and object files. I think that packages might be a good granularity for this, to start with. In this way, the sources for a given class can still be found fairly easily (especially if we keep them sorted), and we will have far fewer very small object files. I would expect this level of granularity to improve compilation performance by having fewer compilation tasks (but still more than enough to keep all compile threads busy). It should also improve memory usage by sharing cached objects (such as declarations) between classes, and improve linker performance by reducing the amount of "busy work" being done by the linker.
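A rough sketch of the proposed class-to-object-file mapping, as I picture it. The names here (`ObjectFileMapper`, `methodsObjectFor`) are illustrative only, not an actual API:

```java
// Illustrative sketch: map a class's binary name to a per-package object file,
// so every class in a package shares one "methods.o" rather than each class
// getting its own object file. Names are hypothetical.
public class ObjectFileMapper {
    /** Map a class's binary name (e.g. "java.lang.String") to its package's methods object file. */
    static String methodsObjectFor(String binaryClassName) {
        int lastDot = binaryClassName.lastIndexOf('.');
        // Classes in the default package land in the output root.
        String pkgPath = lastDot < 0 ? "" : binaryClassName.substring(0, lastDot).replace('.', '/') + "/";
        return pkgPath + "methods.o";
    }

    public static void main(String[] args) {
        // Both classes in java.lang share one object file under this scheme.
        System.out.println(methodsObjectFor("java.lang.String"));
        System.out.println(methodsObjectFor("java.lang.Object"));
    }
}
```

The point is that the object-file count scales with the number of packages rather than the number of classes.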
For each package, a directory whose path includes the class loader, e.g. `boot/java/lang/`.

Each package has multiple object files:

- `methods.o` contains the method code and non-object variables
- `oop_fields.o` contains all object literal and static field references
- `objects.o` contains all regular heap objects (not including root class objects)
One complication is that we must continue to store the root classes array in topologically sorted order (i.e. ordered by type ID), so these should remain in a single dedicated root object file (e.g. `root_classes.o` in the root output directory) to ensure proper ordering.
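Putting the pieces above together, the output layout for a small build might look something like this (paths are illustrative; `boot` stands for the boot class loader):

```
out/
├── root_classes.o          # root classes array, type-ID order preserved
├── boot/java/lang/
│   ├── methods.o           # method code and non-object variables
│   ├── oop_fields.o        # object literal and static field references
│   └── objects.o           # regular heap objects
└── boot/java/io/
    ├── methods.o
    ├── oop_fields.o
    └── objects.o
```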
When linking, we'd link in order: all `methods.o` files, followed by all `oop_fields.o` files, followed by the single `root_classes.o` file, and lastly all `objects.o` files.
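The link ordering above could be sketched like this; `collectLinkOrder` and the package-directory list are hypothetical, not an existing build API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the proposed link order: grouped by object-file kind
// across all packages, with the single root_classes.o pinned between
// oop_fields.o and objects.o so the root classes array stays contiguous
// and type-ID-ordered.
public class LinkOrder {
    static List<String> collectLinkOrder(List<String> packageDirs) {
        List<String> order = new ArrayList<>();
        for (String pkg : packageDirs) order.add(pkg + "/methods.o");
        for (String pkg : packageDirs) order.add(pkg + "/oop_fields.o");
        // Single dedicated root object file in the output root.
        order.add("root_classes.o");
        for (String pkg : packageDirs) order.add(pkg + "/objects.o");
        return order;
    }

    public static void main(String[] args) {
        System.out.println(collectLinkOrder(List.of("boot/java/lang", "boot/java/io")));
    }
}
```

Grouping by kind rather than by package is what keeps same-kind sections adjacent in the final image, which is the property the ordering constraint needs.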