fix: skip __origin__ entries during alias expansion to prevent excessive-aliasing #2
Merged
reuvenharrison merged 2 commits into v3 on Mar 30, 2026
Conversation
…ive-aliasing

The previous alias fix (the `aliasDepth == 0` guard on `addOriginInMap`/`addOriginInSeq`) stopped injecting duplicate `__origin__` keys during alias expansion, but still allowed the decoder to re-walk existing `__origin__` entries that were injected when the anchor was first processed. Each such re-walk increments `aliasCount` (because `aliasDepth > 0`), and for large specs with many anchors (e.g. cardano-wallet's 1909 aliases) the `aliasCount`/`decodeCount` ratio exceeds the threshold, triggering "document contains excessive aliasing".

Fix: in `d.mapping()`, skip entries whose key has `Line == 0` (i.e. synthesised `__origin__` nodes) when inside an alias expansion. This keeps `aliasCount` proportional to the actual YAML data, not the injected metadata.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ring alias expansion

Re-enable the `aliasDepth` guard in `mapping()` so `__origin__` metadata nodes injected during anchor processing are skipped on each alias expansion. Without this, every expansion re-decoded those nodes, inflating `aliasCount` and spuriously triggering the "excessive aliasing" check for large specs (5000+ aliases).

Add `TestOrigin_ManyAliasesNoExcessiveAliasing` with 5000 aliases of a 20-property anchor, a load that reliably fails without the guard and passes with it.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
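For reference, the "excessive aliasing" check both commit messages refer to can be sketched as follows. The shape is modelled on upstream go-yaml v3's decoder counters (`aliasCount` incremented only when `aliasDepth > 0`, compared against total `decodeCount`); the constants and exact condition here are assumptions about that upstream heuristic, not this fork's verbatim code.

```go
package main

import "fmt"

// allowedAliasRatio returns the fraction of decode operations permitted
// inside alias expansions before decoding is aborted. Small documents
// tolerate a high ratio; very large ones almost none.
// (Constants are an assumption modelled on upstream go-yaml v3.)
func allowedAliasRatio(decodeCount int) float64 {
	switch {
	case decodeCount <= 400_000:
		return 0.99
	case decodeCount >= 4_000_000:
		return 0.10
	default:
		return 0.99 - 0.89*float64(decodeCount-400_000)/float64(4_000_000-400_000)
	}
}

// excessive reports whether a decoder using this heuristic would fail
// with "document contains excessive aliasing" for the given counters.
func excessive(aliasCount, decodeCount int) bool {
	return aliasCount > 100 && decodeCount > 1000 &&
		float64(aliasCount)/float64(decodeCount) > allowedAliasRatio(decodeCount)
}

func main() {
	// Hypothetical counters: when __origin__ entries are re-decoded on
	// every expansion, almost every decode happens at aliasDepth > 0,
	// so the ratio crosses the small-document limit of 0.99.
	fmt.Println(excessive(77_900, 78_500)) // origins re-walked: true
	fmt.Println(excessive(38_000, 48_000)) // origins skipped: false
}
```

The key property the fix relies on: skipping the synthesised entries lowers `aliasCount` without materially changing the document's real decode work, pulling the ratio back under the limit.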
Follow-up to #1
The previous alias fix (the `aliasDepth == 0` guard on `addOriginInMap`/`addOriginInSeq`) stopped injecting duplicate `__origin__` keys during alias expansion. However, it introduced a secondary issue.

Problem
When an anchor is first processed, `__origin__` metadata entries are appended to each nested mapping node's `Content`. When an alias expands that anchor, the decoder re-walks the subtree, now correctly skipping new origin injection but still calling `d.unmarshal()` on the existing `__origin__` entries already present in `Content`. Each such call happens at `aliasDepth > 0`, incrementing `aliasCount`.

For large specs with many aliases (e.g. cardano-wallet's ~1900 aliases) the `aliasCount`/`decodeCount` ratio exceeds the threshold and triggers "document contains excessive aliasing".

Fix

In `d.mapping()`, skip entries whose key has `Line == 0` (i.e. synthesised `__origin__` nodes) when inside an alias expansion. This keeps `aliasCount` proportional to actual YAML data, not injected metadata.

Test
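A minimal, self-contained model of that skip (the `node` struct stands in for `yaml.Node`, and `mappingKeys` for the key walk inside `d.mapping()`; names and fields here are illustrative, not the fork's actual decoder):

```go
package main

import "fmt"

// node models the two yaml.Node fields the guard inspects (illustrative
// stand-in; the real decoder walks *yaml.Node values).
type node struct {
	value string
	line  int // 0 for synthesised __origin__ nodes, >0 for parsed YAML
}

// mappingKeys walks a mapping's flattened key/value Content slice and
// returns the keys that would be decoded. Inside an alias expansion
// (aliasDepth > 0), keys without a source position are skipped so they
// do not inflate aliasCount on every expansion.
func mappingKeys(content []node, aliasDepth int) []string {
	var keys []string
	for i := 0; i+1 < len(content); i += 2 {
		key := content[i]
		if aliasDepth > 0 && key.line == 0 {
			continue // synthesised __origin__ entry: already handled at the anchor
		}
		keys = append(keys, key.value)
	}
	return keys
}

func main() {
	content := []node{
		{"name", 3}, {"widget", 3}, // real YAML entry, has a source line
		{"__origin__", 0}, {"meta", 0}, // injected metadata, Line == 0
	}
	fmt.Println(mappingKeys(content, 0)) // anchor walk keeps __origin__: [name __origin__]
	fmt.Println(mappingKeys(content, 1)) // alias expansion skips it:     [name]
}
```

Keying the skip on `Line == 0` works because the parser assigns every real node a positive line number, while synthesised nodes are constructed without one.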
Verified with the cardano-wallet spec (`alias.yml`, 258 KB, ~1900 anchors/aliases) via `TestAliasIssue` in `oasdiff/yaml`.

🤖 Generated with Claude Code