Fix deep stack sizes when serializing some schemas #331
The "parents" val in a DPathCompileInfo is a backpointer to all
DPathCompileInfos that reference it. The problem with this is that when
elements are shared, these backpointers create a highly connected graph
that requires a very large stack to serialize with default Java
serialization, since it jumps back and forth between parents and
children. To avoid this large stack requirement, we make the parents
backpointer transient. This prevents jumping back up to parents during
serialization, so the required stack depth is proportional only to the
schema depth. Once that serialization completes and all the
DPathCompileInfos are serialized, we manually traverse all the
DPathCompileInfos again and serialize the parent sequences (via the
serializeParents method). Because all the DPathCompileInfos are already
serialized, this second pass only serializes the Sequence objects, and
the stack depth is again proportional to the schema depth.
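The two-pass approach can be sketched in plain Java. This is a minimal
illustration, not the actual Daffodil Scala code: the Node class, the
traversal helpers, and the demo graph are hypothetical stand-ins for
DPathCompileInfo and serializeParents. Pass 1 writes the object graph
with the parents field transient (so only child links are followed);
pass 2 re-walks the graph and writes each node's parents, which at that
point serialize as cheap back-references into the stream's handle table.

```java
import java.io.*;
import java.util.*;

// Hypothetical stand-in for DPathCompileInfo. Children serialize
// normally; the parents backpointer is transient, so default
// serialization never recurses back up the graph.
class Node implements Serializable {
    final String name;
    final List<Node> children = new ArrayList<>();
    transient List<Node> parents = new ArrayList<>();

    Node(String name) { this.name = name; }

    void addChild(Node c) { children.add(c); c.parents.add(this); }

    // Pass 2 (write side): every node is already in the stream's handle
    // table from pass 1, so writing parents here emits only
    // back-references, keeping stack depth proportional to tree depth.
    void writeParents(ObjectOutputStream out, Set<Node> seen) throws IOException {
        if (!seen.add(this)) return;          // identity-based visited check
        out.writeObject(parents.toArray());
        for (Node c : children) c.writeParents(out, seen);
    }

    // Pass 2 (read side): transient fields come back null, so rebuild
    // the parents list in the same traversal order as the write side.
    void readParents(ObjectInputStream in, Set<Node> seen)
            throws IOException, ClassNotFoundException {
        if (!seen.add(this)) return;
        Object[] ps = (Object[]) in.readObject();
        parents = new ArrayList<>();
        for (Object p : ps) parents.add((Node) p);
        for (Node c : children) c.readParents(in, seen);
    }
}

public class ParentSerializationDemo {
    public static void main(String[] args) throws Exception {
        // A diamond: "shared" has two parents, like a shared element.
        Node root = new Node("root");
        Node a = new Node("a"), b = new Node("b");
        Node shared = new Node("shared");
        root.addChild(a); root.addChild(b);
        a.addChild(shared); b.addChild(shared);

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(root);                      // pass 1: no parent links
            root.writeParents(out, new HashSet<>());    // pass 2: parent links only
        }

        Node copy;
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            copy = (Node) in.readObject();
            copy.readParents(in, new HashSet<>());
        }

        Node sharedCopy = copy.children.get(0).children.get(0);
        System.out.println(sharedCopy.parents.size()); // prints 2
    }
}
```

Without the transient marker, writing "shared" would recurse into its
parents, which recurse into their children, and so on; on a large,
highly shared schema that ping-ponging is what blows up the stack.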
On complex schemas, this reduced the stack size needed during
serialization by an order of magnitude.
DAFFODIL-2283