Conversation
Add centralized legacy metadata rewrite safeguards and restore PacketEvents client-version normalization for legacy sessions to improve movement compatibility diagnostics. Made-with: Cursor
Use 26.1.2 mapping artifacts as the authoritative source for blockstate/item remapping, split stack-vs-registry item rewrites, consolidate entity metadata handling, and add recipe packet safety guards to prevent legacy decode crashes while preserving valid block/item identities like leaf_litter. Made-with: Cursor
Vendor the 26.1.2 and 26.2-snapshot-3 mapping artifacts into project resources and load them from the jar by default, while still allowing explicit override paths for debugging. Made-with: Cursor
- Advancement, block state, registry, and packet pipeline fixes for legacy clients
- Connection/encoder/handshake mixins and PacketEvents bridge adjustments
- Document changes in README; track prebuilt/legacylink-0.1.1.jar for deploys
📝 Walkthrough

This pull request switches remapping from threshold-based heuristics to dump-table driven mappings, pins protocol constants to a specific bridge pair (26.1.2 ↔ 26.2-snapshot-3), integrates PacketEvents version normalization, consolidates entity metadata/recipe/item rewriting, and adds enhanced validation and tracing for block-state and recipe remapping.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 6
🧹 Nitpick comments (6)
src/main/java/dev/ohno/legacylink/handler/rewrite/VillagerEntityData2661.java (1)
31-41: Consider centralizing entity-type ID matching helper.

This helper duplicates the same registry-key check pattern already present in `EntityMetadataRewriter2661` (lines 162-174 there). Extracting one shared utility would reduce drift risk across rewriters.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/dev/ohno/legacylink/handler/rewrite/VillagerEntityData2661.java` around lines 31-41, duplicate registry-key check logic exists in VillagerEntityData2661.isEntityType and EntityMetadataRewriter2661 (the same BuiltInRegistries.getKey + string compare pattern); extract this into a single shared utility (e.g., a static method like EntityTypeUtils.isEntityTypeId or EntityIdMatcher.hasId) in a common package, change VillagerEntityData2661.isEntityType and the corresponding method in EntityMetadataRewriter2661 to delegate to that new utility, and update calls such as isVillagerLike to use the common helper to avoid drift.

src/main/java/dev/ohno/legacylink/handler/rewrite/Vanilla261EntityMetadataTailTrim2661.java (1)
77-83: Minor: `contentEquals` vs `equals` for String comparison.

Line 82 uses `id.contentEquals(key.toString())`. Since `id` is a `String` and `key.toString()` also returns a `String`, using `equals` would be more idiomatic. `contentEquals` is typically used when comparing a `String` to a non-String `CharSequence`.

This is functionally correct but slightly unconventional.
💡 Suggested simplification
```diff
 private static boolean isEntityType(@Nullable EntityType<?> type, String id) {
     if (type == null) {
         return false;
     }
     Identifier key = BuiltInRegistries.ENTITY_TYPE.getKey(type);
-    return key != null && id.contentEquals(key.toString());
+    return key != null && id.equals(key.toString());
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/dev/ohno/legacylink/handler/rewrite/Vanilla261EntityMetadataTailTrim2661.java` around lines 77-83, in isEntityType(@Nullable EntityType<?> type, String id) replace the unidiomatic String comparison id.contentEquals(key.toString()) with id.equals(key.toString()) to use the standard String equals method; locate the check in the method (uses BuiltInRegistries.ENTITY_TYPE.getKey(type) and Identifier key) and perform the substitution, ensuring the null guard for key remains (return key != null && id.equals(key.toString())).

src/main/java/dev/ohno/legacylink/handler/rewrite/AdvancementRewriter.java (1)
95-118: Consider simplifying the stone fallback lookup.

Line 113 performs a double lookup: `BuiltInRegistries.ITEM.byId(ItemRewriter.remapItemIdStrict(Item.getId(Items.STONE)))`. Since `Items.STONE` is a well-known vanilla item, `remapItemIdStrict` should always succeed and return a valid ID, making `byId` return `Items.STONE` itself.

The null check at lines 114-116 is defensive but the complexity may be unnecessary.
💡 Simplified fallback
```diff
 if (RegistryRemapper.isLegacyItemWireId(legacyWireId)) {
     return remapped;
 }
-Item fallback = BuiltInRegistries.ITEM.byId(ItemRewriter.remapItemIdStrict(Item.getId(Items.STONE)));
-if (fallback == null) {
-    fallback = Items.STONE;
-}
-return ItemStackTemplate.fromNonEmptyStack(new ItemStack(fallback));
+return ItemStackTemplate.fromNonEmptyStack(new ItemStack(Items.STONE));
 }
```

Since `Items.STONE` is a vanilla item guaranteed to exist in all versions, the remapping indirection is unnecessary for the fallback case.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/dev/ohno/legacylink/handler/rewrite/AdvancementRewriter.java` around lines 95-118, in ensureLegacyWireSafeTemplate, simplify the stone fallback by skipping the double remap/byId lookup — instead set the fallback directly to Items.STONE (type Item) and remove the null check and defensive branch; you can drop the call to BuiltInRegistries.ITEM.byId(ItemRewriter.remapItemIdStrict(Item.getId(Items.STONE))) and just use Items.STONE as the fallback.

src/main/java/dev/ohno/legacylink/mapping/RegistryRemapper.java (1)
283-296: Consider logging skipped malformed CSV lines.

Lines that don't match the expected format (e.g., non-digit ID) are silently skipped. This could hide data corruption in the mapping files. A debug/trace log would aid troubleshooting.
🔍 Optional: Add debug logging for skipped lines
```diff
 if (!idRaw.chars().allMatch(Character::isDigit)) {
+    LegacyLinkMod.LOGGER.trace("[LegacyLink] Skipping malformed CSV line: {}", line);
     continue;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/dev/ohno/legacylink/mapping/RegistryRemapper.java` around lines 283-296, in RegistryRemapper, when iterating the lines array (the loop using variable line) add debug/trace logging where malformed CSV lines are currently skipped (conditions: null/isBlank/item, invalid comma position, non-digit idRaw) — include the offending line content and a short reason (e.g., "empty/header", "missing comma", "invalid id '...'") before each continue so you can trace data issues; keep logging at debug/trace level and retain existing behavior of not throwing for malformed entries while still calling out key, idRaw and the specific failure reason.

src/main/java/dev/ohno/legacylink/handler/LegacyPacketHandler.java (1)
818-852: Consider extracting shared SlotDisplay traversal logic.

The recursive structure of `containsUnsafeSlotDisplay` closely mirrors `RecipeBookAddRewriter.remapSlotDisplay`. Both traverse the same `SlotDisplay` type hierarchy. A visitor or functional-style traversal could reduce duplication, though the current approach is clear and the duplication is contained.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/dev/ohno/legacylink/handler/LegacyPacketHandler.java` around lines 818 - 852, The recursive traversal logic for SlotDisplay is duplicated between containsUnsafeSlotDisplay in LegacyPacketHandler and RecipeBookAddRewriter.remapSlotDisplay; extract a reusable traversal/visitor utility (e.g., a SlotDisplayVisitor or a static helper like SlotDisplayUtils.traverse/displayWithPredicate) that walks the SlotDisplay variants and accepts a Predicate<SlotDisplay> or Consumer<SlotDisplay> to apply logic, then refactor containsUnsafeSlotDisplay to call that utility with a predicate that checks legacy item IDs and refactor RecipeBookAddRewriter.remapSlotDisplay to call the same traversal with its remapping action; ensure the helper handles all variant cases present (ItemSlotDisplay, ItemStackSlotDisplay, Composite, WithRemainder, SmithingTrimDemoSlotDisplay, DyedSlotDemo, OnlyWithComponent, WithAnyPotion) so both callers can delegate traversal and only supply variant-specific behavior.src/main/java/dev/ohno/legacylink/handler/rewrite/EntityMetadataRewriter2661.java (1)
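As a language-agnostic illustration of the shared-traversal idea in this nitpick, here is a minimal Python sketch; `Leaf`, `Composite`, and `any_match` are hypothetical stand-ins for the `SlotDisplay` variant hierarchy and the proposed utility, not real project classes:

```python
# Sketch of a shared tree traversal that both a "contains unsafe?" check and a
# remapping pass could reuse. Leaf/Composite are simplified stand-ins for the
# SlotDisplay variant hierarchy discussed above.
from dataclasses import dataclass
from typing import Callable, List, Union


@dataclass
class Leaf:            # stands in for ItemSlotDisplay / ItemStackSlotDisplay
    item_id: int


@dataclass
class Composite:       # stands in for Composite / WithRemainder-style wrappers
    children: List["SlotDisplay"]


SlotDisplay = Union[Leaf, Composite]


def any_match(display: SlotDisplay, predicate: Callable[[Leaf], bool]) -> bool:
    """Single traversal utility: callers supply only the per-leaf behavior."""
    if isinstance(display, Leaf):
        return predicate(display)
    return any(any_match(child, predicate) for child in display.children)


LEGACY_MAX_ITEM_ID = 1000  # hypothetical cap for the example

tree = Composite([Leaf(42), Composite([Leaf(1500)])])
print(any_match(tree, lambda leaf: leaf.item_id > LEGACY_MAX_ITEM_ID))  # True
```

Both the `containsUnsafeSlotDisplay`-style check and a remapping pass could then share one traversal and differ only in the callback they pass in.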
src/main/java/dev/ohno/legacylink/handler/rewrite/EntityMetadataRewriter2661.java (1)

289-305: Consider single-pass overflow detection and filtering.

The current implementation iterates twice: once to detect overflow, then again to filter. For most packets, this is fine, but a single-pass approach would be marginally more efficient.
♻️ Optional: Single-pass implementation
```diff
-boolean overflow = false;
-for (SynchedEntityData.DataValue<?> v : packedItems) {
-    if (v.id() > cap) {
-        overflow = true;
-        break;
-    }
-}
-if (!overflow) {
-    return null;
-}
-List<SynchedEntityData.DataValue<?>> out = new ArrayList<>(packedItems.size());
-for (SynchedEntityData.DataValue<?> v : packedItems) {
-    if (v.id() <= cap) {
-        out.add(v);
-    }
-}
-return out;
+List<SynchedEntityData.DataValue<?>> out = null;
+for (int i = 0; i < packedItems.size(); i++) {
+    SynchedEntityData.DataValue<?> v = packedItems.get(i);
+    if (v.id() > cap) {
+        if (out == null) {
+            out = new ArrayList<>(packedItems.size());
+            for (int j = 0; j < i; j++) {
+                out.add(packedItems.get(j));
+            }
+        }
+    } else if (out != null) {
+        out.add(v);
+    }
+}
+return out;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/dev/ohno/legacylink/handler/rewrite/EntityMetadataRewriter2661.java` around lines 289 - 305, Replace the two-pass detection-and-filter in EntityMetadataRewriter2661 with a single-pass loop: iterate once over packedItems, maintain an overflow boolean and accumulate values with v.id() <= cap into a local List<SynchedEntityData.DataValue<?>> (create it up front or lazily), and after the loop return null if overflow is false or the accumulated list if overflow is true; update the code in the method that currently performs the two separate for-loops to this single-pass approach (use the existing symbols packedItems, cap, overflow and the DataValue<?> type).
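The single-pass detect-and-filter pattern this prompt describes is easy to sketch outside Java; the `DataValue` record below is a hypothetical stand-in for `SynchedEntityData.DataValue`, with `cap` playing the same role as in the review:

```python
# Minimal sketch of the single-pass "detect overflow and filter" pattern
# described above: return None when nothing exceeds the cap (caller keeps the
# packet as-is), otherwise a filtered copy built in the same pass.
from dataclasses import dataclass


@dataclass(frozen=True)
class DataValue:
    id: int
    payload: str


def trim_overflowing(packed_items, cap):
    out = None
    for i, v in enumerate(packed_items):
        if v.id > cap:
            if out is None:
                # First overflow seen: copy the already-accepted prefix once.
                out = list(packed_items[:i])
        elif out is not None:
            out.append(v)
    return out


items = [DataValue(0, "a"), DataValue(7, "b"), DataValue(99, "c"), DataValue(3, "d")]
print(trim_overflowing(items, cap=10))   # drops the id-99 entry
print(trim_overflowing(items, cap=200))  # None: no overflow, packet unchanged
```

The allocation only happens when an out-of-range ID is actually present, which is the same trade-off the suggested Java diff makes.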
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Line 78: The README has inconsistent snapshot version mentions: update all
occurrences so they match (either change "26.2-snapshot-2" to "26.2-snapshot-3"
or change "26.1.2/26.2-snapshot-3" to use "26.2-snapshot-2"); search for the
exact strings "26.2-snapshot-2" and "26.2-snapshot-3" (and
"26.1.2/26.2-snapshot-3") and make them uniform across the file (for example
replace "26.2-snapshot-2" with "26.2-snapshot-3" if the repo is actually built
against snapshot-3), ensuring the Mapping source of truth line and the repo
build line contain the same snapshot identifier.
In `@src/main/java/dev/ohno/legacylink/debug/LegacyOutboundPacketCapture.java`:
- Around line 170-178: The disconnect-reason sanitizer in
LegacyOutboundPacketCapture (inside the ClientboundDisconnectPacket handling)
only replaces '\n' which allows '\r' to still break or overwrite log lines;
update the code that builds the text return value to normalize both carriage
returns and newlines (e.g. replace '\r' and '\n' with a space, or use a regex
that replaces [\r\n]+ with a single space) so the returned reason string never
contains CR or LF characters before being logged.
In `@src/main/java/dev/ohno/legacylink/LegacyLinkMod.java`:
- Around line 48-49: The readiness LOGGER.info that prints
LegacyLinkConstants.PROTOCOL_26_1_2 and
LegacyLinkConstants.PROTOCOL_26_2_SNAPSHOT_3 can run even when bridge
initialization is skipped; move the LOGGER.info call into the successful init
path guarded by the SERVER_STARTED protocol check so it only logs after the
bridge is actually initialized (place it immediately after the successful init
block around the code currently at Lines 37-39). Ensure you keep the same log
format and references to LOGGER, LegacyLinkConstants.PROTOCOL_26_1_2, and
LegacyLinkConstants.PROTOCOL_26_2_SNAPSHOT_3 so the message is emitted only upon
successful bridge initialization.
In `@src/main/java/dev/ohno/legacylink/mapping/RegistryRemapper.java`:
- Around line 63-64: In RegistryRemapper, remove the redundant call to
BLOCK_STATE_REMAP.clear() that duplicates an earlier clear; keep a single clear
invocation for BLOCK_STATE_REMAP (and ensure ITEM_REMAP.clear() remains where
intended), so delete the extra BLOCK_STATE_REMAP.clear() occurrence and run
tests to confirm behavior in the class methods that rely on these maps.
- Around line 167-183: The redundant call to BLOCK_STATE_REMAP.clear() should be
removed: in the buildMappings() routine remove the second clear invocation so
the map is only cleared once (keep the initial BLOCK_STATE_REMAP.clear() at the
start of the mapping population and delete the later duplicate clear), ensuring
BLOCK_STATE_REMAP is only reset once before population and references to
BLOCK_STATE_REMAP and buildMappings() remain unchanged.
In `@src/main/java/dev/ohno/legacylink/mixin/PacketEncoderMixin.java`:
- Around line 43-57: The static initializer in PacketEncoderMixin assigns
statesField and sectionPosField via
ClientboundSectionBlocksUpdatePacket.getDeclaredField but only catches
NoSuchFieldException; wrap the setAccessible(true) call for each field in its
own try/catch that catches RuntimeException and SecurityException (or a broad
RuntimeException if you prefer) and on failure set the corresponding field
variable to null so reflective access failures during setAccessible won't abort
class initialization; update the logic around statesField and sectionPosField in
the static block to ensure any setAccessible failure results in the field being
nulled for the existing null-checked usage sites.
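Of the inline fixes above, the disconnect-reason sanitization is the most self-contained; here is a minimal sketch of the suggested `[\r\n]+` normalization (the function name is illustrative, not from the codebase):

```python
import re


def sanitize_log_text(reason: str) -> str:
    """Collapse any CR/LF runs to a single space so one logical log line
    cannot be split or overwritten by attacker-controlled text."""
    return re.sub(r"[\r\n]+", " ", reason)


print(sanitize_log_text("kicked\r\nfor\nreasons"))  # kicked for reasons
```

Replacing only `'\n'` would leave a bare `'\r'` able to rewind the cursor on some terminals, which is exactly the gap the review calls out.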
---
Nitpick comments:
In `@src/main/java/dev/ohno/legacylink/handler/LegacyPacketHandler.java`:
- Around line 818-852: The recursive traversal logic for SlotDisplay is
duplicated between containsUnsafeSlotDisplay in LegacyPacketHandler and
RecipeBookAddRewriter.remapSlotDisplay; extract a reusable traversal/visitor
utility (e.g., a SlotDisplayVisitor or a static helper like
SlotDisplayUtils.traverse/displayWithPredicate) that walks the SlotDisplay
variants and accepts a Predicate<SlotDisplay> or Consumer<SlotDisplay> to apply
logic, then refactor containsUnsafeSlotDisplay to call that utility with a
predicate that checks legacy item IDs and refactor
RecipeBookAddRewriter.remapSlotDisplay to call the same traversal with its
remapping action; ensure the helper handles all variant cases present
(ItemSlotDisplay, ItemStackSlotDisplay, Composite, WithRemainder,
SmithingTrimDemoSlotDisplay, DyedSlotDemo, OnlyWithComponent, WithAnyPotion) so
both callers can delegate traversal and only supply variant-specific behavior.
In `@src/main/java/dev/ohno/legacylink/handler/rewrite/AdvancementRewriter.java`:
- Around line 95-118: In ensureLegacyWireSafeTemplate, simplify the stone
fallback by skipping the double remap/byId lookup — instead set the fallback
directly to Items.STONE (type Item) and remove the null check and defensive
branch; you can drop the call to
BuiltInRegistries.ITEM.byId(ItemRewriter.remapItemIdStrict(Item.getId(Items.STONE)))
and just use Items.STONE as the fallback.
In
`@src/main/java/dev/ohno/legacylink/handler/rewrite/EntityMetadataRewriter2661.java`:
- Around line 289-305: Replace the two-pass detection-and-filter in
EntityMetadataRewriter2661 with a single-pass loop: iterate once over
packedItems, maintain an overflow boolean and accumulate values with v.id() <=
cap into a local List<SynchedEntityData.DataValue<?>> (create it up front or
lazily), and after the loop return null if overflow is false or the accumulated
list if overflow is true; update the code in the method that currently performs
the two separate for-loops to this single-pass approach (use the existing
symbols packedItems, cap, overflow and the DataValue<?> type).
In
`@src/main/java/dev/ohno/legacylink/handler/rewrite/Vanilla261EntityMetadataTailTrim2661.java`:
- Around line 77-83: In isEntityType(`@Nullable` EntityType<?> type, String id)
replace the idiomatic String comparison id.contentEquals(key.toString()) with
id.equals(key.toString()) to use the standard String equals method; locate the
check in the method (uses BuiltInRegistries.ENTITY_TYPE.getKey(type) and
Identifier key) and perform the substitution, ensuring the null guard for key
remains (return key != null && id.equals(key.toString())).
In
`@src/main/java/dev/ohno/legacylink/handler/rewrite/VillagerEntityData2661.java`:
- Around line 31-41: Duplicate registry-key check logic exists in
VillagerEntityData2661.isEntityType and EntityMetadataRewriter2661 (the same
BuiltInRegistries.getKey + string compare pattern); extract this into a single
shared utility (e.g., a static method like EntityTypeUtils.isEntityTypeId or
EntityIdMatcher.hasId) in a common package, change
VillagerEntityData2661.isEntityType and the corresponding method in
EntityMetadataRewriter2661 to delegate to that new utility, and update calls
such as isVillagerLike to use the common helper to avoid drift.
In `@src/main/java/dev/ohno/legacylink/mapping/RegistryRemapper.java`:
- Around line 283-296: In RegistryRemapper, when iterating the lines array (the
loop using variable line) add debug/trace logging where malformed CSV lines are
currently skipped (conditions: null/isBlank/item, invalid comma position,
non-digit idRaw) — include the offending line content and a short reason (e.g.,
"empty/header", "missing comma", "invalid id '...'") before each continue so you
can trace data issues; keep logging at debug/trace level and retain existing
behavior of not throwing for malformed entries while still calling out key,
idRaw and the specific failure reason.
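The lenient CSV handling described in the last prompt can be sketched as follows; the `key,id` row layout is assumed from the review context, and the logger name and comment-line rule are illustrative:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("LegacyLink")  # illustrative logger name


def parse_item_csv(lines):
    """Parse `key,id` rows, skipping (and tracing) malformed entries instead
    of throwing, mirroring the lenient behavior described above."""
    mapping = {}
    for line in lines:
        if line is None or not line.strip() or line.startswith("#"):
            log.debug("Skipping empty/comment CSV line: %r", line)
            continue
        comma = line.find(",")
        if comma <= 0:
            log.debug("Skipping CSV line without key,id pair: %r", line)
            continue
        key, id_raw = line[:comma].strip(), line[comma + 1:].strip()
        if not id_raw.isdigit():
            log.debug("Skipping CSV line with non-numeric id %r: %r", id_raw, line)
            continue
        mapping[key] = int(id_raw)
    return mapping


rows = ["minecraft:stone,1", "", "broken-line", "minecraft:dirt,abc", "minecraft:leaf_litter,42"]
print(parse_item_csv(rows))  # {'minecraft:stone': 1, 'minecraft:leaf_litter': 42}
```

Each skip path names the offending line and a short reason, so corrupt mapping files surface in trace logs without changing the no-throw contract.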
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c0aa5a2a-caaf-4fa8-baec-b8be8ff3ab14
⛔ Files ignored due to path filters (2)
- `prebuilt/legacylink-0.1.1.jar` is excluded by `!**/*.jar`
- `src/main/resources/legacylink/mappings/legacy-item-protocol-map-26.1.2.csv` is excluded by `!**/*.csv`
📒 Files selected for processing (25)
- .gitignore
- README.md
- prebuilt/.gitkeep
- src/main/java/dev/ohno/legacylink/LegacyLinkConstants.java
- src/main/java/dev/ohno/legacylink/LegacyLinkMod.java
- src/main/java/dev/ohno/legacylink/debug/LegacyOutboundPacketCapture.java
- src/main/java/dev/ohno/legacylink/handler/LegacyPacketHandler.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/AdvancementRewriter.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/BlockStatePacketRewriter.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/CubeMobEntityData2661.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/EntityMetadataRewriter2661.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/ItemRewriter.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/RecipeBookAddRewriter.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/Vanilla261EntityMetadataTailTrim2661.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/VillagerEntityData2661.java
- src/main/java/dev/ohno/legacylink/integration/PacketEventsVersionBridge.java
- src/main/java/dev/ohno/legacylink/mapping/LegacyItemIdTable.java
- src/main/java/dev/ohno/legacylink/mapping/RegistryRemapper.java
- src/main/java/dev/ohno/legacylink/mixin/ConfigurationFinishMixin.java
- src/main/java/dev/ohno/legacylink/mixin/ConnectionSendMixin.java
- src/main/java/dev/ohno/legacylink/mixin/HandshakeMixin.java
- src/main/java/dev/ohno/legacylink/mixin/PacketEncoderMixin.java
- src/main/java/dev/ohno/legacylink/mixin/StatusPacketMixin.java
- src/main/resources/legacylink/mappings/legacy-state-to-id-26.1.2.json
- src/main/resources/legacylink/mappings/snapshot-id-to-state-26.2-snapshot-3.json
### Known limitations (v0.1)

- **Block state ID ceiling** (`MAX_26_1_BLOCKSTATE_ID` in `LegacyLinkConstants`) is the **inclusive** last index the legacy client accepts (`0..MAX`). Pinned to **26.1.1** (`30207`) so 26.2 palette IDs remap safely; setting it too high can crash legacy clients in `LinearPalette.read`.
- **Mapping source of truth:** block states and items are translated from bundled 26.1.2/26.2-snapshot-3 dump tables by exact ID membership, not by "max legacy ID" thresholds.
Align snapshot version wording across README sections.
Line 78 says mappings are based on 26.2-snapshot-3, but Line 38 still says the repo is built against 26.2-snapshot-2. Please make these consistent to avoid operator confusion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 78, The README has inconsistent snapshot version mentions:
update all occurrences so they match (either change "26.2-snapshot-2" to
"26.2-snapshot-3" or change "26.1.2/26.2-snapshot-3" to use "26.2-snapshot-2");
search for the exact strings "26.2-snapshot-2" and "26.2-snapshot-3" (and
"26.1.2/26.2-snapshot-3") and make them uniform across the file (for example
replace "26.2-snapshot-2" with "26.2-snapshot-3" if the repo is actually built
against snapshot-3), ensuring the Mapping source of truth line and the repo
build line contain the same snapshot identifier.
Align snapshot references, tighten packet/metadata handling edge cases from review feedback, and add automated path-leak scanning in CI and local pre-commit to prevent sensitive local filesystem exposure. Made-with: Cursor
…s source of truth)
Actionable comments posted: 4
🧹 Nitpick comments (3)
src/main/java/dev/ohno/legacylink/handler/rewrite/SlotDisplayUtils.java (1)
99-110: Avoid eager allocation in the unchanged path.
`rewriteNested` allocates an `ArrayList` even when every element is unchanged. You can lazily allocate on first detected change to reduce GC pressure in common no-op remap flows.

♻️ Proposed refactor
```diff
 public static List<SlotDisplay> rewriteNested(List<SlotDisplay> displays, UnaryOperator<SlotDisplay> transformer) {
-    List<SlotDisplay> remapped = new ArrayList<>(displays.size());
-    boolean changed = false;
-    for (SlotDisplay nested : displays) {
-        SlotDisplay mapped = rewrite(nested, transformer);
-        if (mapped != nested) {
-            changed = true;
-        }
-        remapped.add(mapped);
-    }
-    return changed ? remapped : displays;
+    List<SlotDisplay> remapped = null;
+    for (int i = 0; i < displays.size(); i++) {
+        SlotDisplay nested = displays.get(i);
+        SlotDisplay mapped = rewrite(nested, transformer);
+
+        if (remapped != null) {
+            remapped.add(mapped);
+            continue;
+        }
+
+        if (mapped != nested) {
+            remapped = new ArrayList<>(displays.size());
+            remapped.addAll(displays.subList(0, i));
+            remapped.add(mapped);
+        }
+    }
+    return remapped != null ? remapped : displays;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/dev/ohno/legacylink/handler/rewrite/SlotDisplayUtils.java` around lines 99-110, rewriteNested eagerly allocates remapped even when no element changes; change it to allocate remapped lazily: iterate over displays calling rewrite(nested, transformer), and only when you detect the first changed element (mapped != nested) create remapped, copy all prior original elements (displays[0..i-1]) into remapped, then add the current mapped and continue adding subsequent mapped results to remapped; return displays if no change was ever detected. Use the existing method names rewrite, rewriteNested, and variables displays/transformer to locate the code.

.github/workflows/privacy-scan.yml (2)
1-8: Set explicit minimal workflow permissions.

Consider adding `permissions: contents: read` to enforce least privilege for the default token.

Suggested fix

```diff
 name: Privacy Scan

 on:
   pull_request:
   push:
     branches:
       - main
+
+permissions:
+  contents: read
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/privacy-scan.yml around lines 1 - 8, Add an explicit minimal permissions block to the workflow by setting the permissions key with contents: read so the default GITHUB_TOKEN is granted only read access to repository contents; update the top-level of the workflow YAML (near the existing name/on keys) to include permissions: and the contents: read entry to enforce least privilege.
16-17: Remove unused `ripgrep` installation step.

At lines 16-17, `ripgrep` is installed but not used by `scripts/check-sensitive-paths.sh`; this adds avoidable CI time and failure surface.

Suggested fix

```diff
-      - name: Install ripgrep
-        run: sudo apt-get update && sudo apt-get install -y ripgrep
-
       - name: Scan for sensitive usernames and local paths
         run: bash scripts/check-sensitive-paths.sh
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/privacy-scan.yml around lines 16 - 17, Remove the unnecessary "Install ripgrep" CI step: delete the workflow step whose name is "Install ripgrep" that runs "sudo apt-get update && sudo apt-get install -y ripgrep" because scripts/check-sensitive-paths.sh does not use ripgrep; ensure the rest of the job remains intact and that no other steps reference ripgrep (search for "ripgrep" or the step name to confirm).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/check-sensitive-paths.sh`:
- Around line 35-37: The loop that prints offending hits currently emits the
full line (for rel, line_no, line in hits) via print(f"{rel}:{line_no}:{line}"),
which can leak secrets; change the output to avoid revealing the full 'line'
value — e.g., print only the file and line number (using rel and line_no) and
either a fixed marker like "<REDACTED_LINE>" or a safely truncated/hashed
preview of 'line' instead; update the print call in
scripts/check-sensitive-paths.sh (the loop over hits) accordingly so logs never
contain full offending-line contents.
- Line 15: The Windows path part of the regex in the pattern = re.compile(...)
line only matches doubled backslashes and thus misses typical Windows paths with
single backslashes; update that Windows branch of the regex so it matches single
backslashes (and optionally one-or-more backslashes) between path segments (for
example by using backslash-escaped sequences like \\ to represent a single path
separator in the regex or a quantifier to allow one-or-more backslashes), then
recompile pattern in scripts/check-sensitive-paths.sh so pathlib.rglob() paths
like C:\Users\... are detected.
In `@src/main/java/dev/ohno/legacylink/handler/LegacyPacketHandler.java`:
- Around line 173-178: The INFO log in LegacyPacketHandler using
LegacyLinkMod.LOGGER with HandlerNames.PACKET_HANDLER and
connection.getRemoteAddress() leaks the same client PII across phases; change it
to not emit raw addresses by either lowering the log level to DEBUG/TRACE or
replacing connection.getRemoteAddress() with a redacted/hashed representation
(e.g., compute a stable hash or short hex of the address via a helper like
anonymizeAddress(remoteAddr)) and log that value instead while keeping the same
message format and references to phase and HandlerNames.PACKET_HANDLER.
In `@src/main/java/dev/ohno/legacylink/mapping/RegistryRemapper.java`:
- Around line 325-335: The loadTextBlob method currently throws
IllegalStateException containing the raw override/path which can leak local
paths; modify the two exception messages in loadTextBlob (the "Mapping override
path does not exist" and "Failed reading override ... from") to avoid echoing
the absolute path or override variable—use a redacted placeholder or only the
file name (e.g., Path#getFileName()) and keep the actual exception as the cause
for the IOException case so debugging is preserved without exposing full paths;
update both throw sites in loadTextBlob accordingly.
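The PII-redaction suggestion for `LegacyPacketHandler` above could use a stable short hash; `anonymize_address` below is a hypothetical sketch of such a helper, not code from the PR:

```python
import hashlib


def anonymize_address(remote_addr: str) -> str:
    """Stable short token for a client address: the same client correlates
    across log lines, but the raw IP/port never reaches the log."""
    digest = hashlib.sha256(remote_addr.encode("utf-8")).hexdigest()
    return f"client-{digest[:8]}"


a = anonymize_address("198.51.100.7:51234")
b = anonymize_address("198.51.100.7:51234")
c = anonymize_address("203.0.113.9:40000")
print(a, a == b, a == c)  # stable for one client, distinct across clients
```

A keyed hash (HMAC with a per-run secret) would additionally prevent offline dictionary reversal of small address spaces, at the cost of cross-restart correlation.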
---
Nitpick comments:
In @.github/workflows/privacy-scan.yml:
- Around line 1-8: Add an explicit minimal permissions block to the workflow by
setting the permissions key with contents: read so the default GITHUB_TOKEN is
granted only read access to repository contents; update the top-level of the
workflow YAML (near the existing name/on keys) to include permissions: and the
contents: read entry to enforce least privilege.
- Around line 16-17: Remove the unnecessary "Install ripgrep" CI step: delete
the workflow step whose name is "Install ripgrep" that runs "sudo apt-get update
&& sudo apt-get install -y ripgrep" because scripts/check-sensitive-paths.sh
does not use ripgrep; ensure the rest of the job remains intact and that no
other steps reference ripgrep (search for "ripgrep" or the step name to
confirm).
In `@src/main/java/dev/ohno/legacylink/handler/rewrite/SlotDisplayUtils.java`:
- Around line 99-110: rewriteNested eagerly allocates remapped even when no
element changes; change it to allocate remapped lazily: iterate over displays
calling rewrite(nested, transformer), and only when you detect the first changed
element (mapped != nested) create remapped, copy all prior original elements
(displays[0..i-1]) into remapped, then add the current mapped and continue
adding subsequent mapped results to remapped; return displays if no change was
ever detected. Use the existing method names rewrite, rewriteNested, and
variables displays/transformer to locate the code.
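The lazy-allocation refactor suggested for `rewriteNested` above is an instance of copy-on-write list rewriting; a minimal Python sketch of the same pattern:

```python
# Copy-on-write list rewrite: allocate nothing when the transformer is a
# no-op, mirroring the rewriteNested refactor suggested above.
def rewrite_nested(displays, transform):
    remapped = None
    for i, nested in enumerate(displays):
        mapped = transform(nested)
        if remapped is not None:
            remapped.append(mapped)
        elif mapped is not nested:
            remapped = displays[:i]  # copy the unchanged prefix once
            remapped.append(mapped)
    return remapped if remapped is not None else displays


xs = ["a", "b", "c"]
same = rewrite_nested(xs, lambda s: s)  # identity transform: no allocation
print(same is xs)                       # True
print(rewrite_nested(xs, str.upper))    # ['A', 'B', 'C']
```

The identity check (`is`, matching Java's `!=` reference comparison) is what lets the common no-change path return the original list untouched.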
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 80e0f7f8-b26a-4157-b713-4bece5183bfa
📒 Files selected for processing (15)
- .github/workflows/privacy-scan.yml
- README.md
- scripts/check-sensitive-paths.sh
- src/main/java/dev/ohno/legacylink/LegacyLinkMod.java
- src/main/java/dev/ohno/legacylink/debug/LegacyOutboundPacketCapture.java
- src/main/java/dev/ohno/legacylink/handler/LegacyPacketHandler.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/AdvancementRewriter.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/EntityMetadataRewriter2661.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/EntityTypeIdMatcher.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/RecipeBookAddRewriter.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/SlotDisplayUtils.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/Vanilla261EntityMetadataTailTrim2661.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/VillagerEntityData2661.java
- src/main/java/dev/ohno/legacylink/mapping/RegistryRemapper.java
- src/main/java/dev/ohno/legacylink/mixin/PacketEncoderMixin.java
✅ Files skipped from review due to trivial changes (3)
- README.md
- src/main/java/dev/ohno/legacylink/handler/rewrite/EntityTypeIdMatcher.java
- src/main/java/dev/ohno/legacylink/mixin/PacketEncoderMixin.java
🚧 Files skipped from review as they are similar to previous changes (5)
- src/main/java/dev/ohno/legacylink/handler/rewrite/VillagerEntityData2661.java
- src/main/java/dev/ohno/legacylink/debug/LegacyOutboundPacketCapture.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/RecipeBookAddRewriter.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/Vanilla261EntityMetadataTailTrim2661.java
- src/main/java/dev/ohno/legacylink/handler/rewrite/EntityMetadataRewriter2661.java
```python
import sys

root = pathlib.Path(".")
pattern = re.compile(r"/Users/[^/]+/|/home/[^/]+/|[A-Za-z]:\\\\Users\\\\[^\\\\]+\\\\", re.IGNORECASE)
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
python3 - <<'PY'
import re
p = re.compile(r"/Users/[^/]+/|/home/[^/]+/|[A-Za-z]:\\\\Users\\\\[^\\\\]+\\\\", re.IGNORECASE)
samples = [
    r"C:\Users\alice\project\file.txt",      # should match
    r"C:\\Users\\alice\\project\\file.txt",  # escaped representation
]
for s in samples:
    print(f"{s} -> {bool(p.search(s))}")
PY
```

Repository: ohnodev/legacylink
Length of output: 146
🏁 Script executed:
```bash
cat -n scripts/check-sensitive-paths.sh | head -20
```

Repository: ohnodev/legacylink
Length of output: 723
Windows path regex with double backslashes fails to detect typical Windows paths.
The pattern at line 15 uses \\\\ in a raw string, which matches two literal backslashes (C:\\Users\\...). However, pathlib.rglob() returns paths with single backslashes on Windows (C:\Users\...), causing the check to miss actual leaks.
Suggested fix
-pattern = re.compile(r"/Users/[^/]+/|/home/[^/]+/|[A-Za-z]:\\\\Users\\\\[^\\\\]+\\\\", re.IGNORECASE)
+pattern = re.compile(r"/Users/[^/]+/|/home/[^/]+/|[A-Za-z]:\\Users\\[^\\]+\\", re.IGNORECASE)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-pattern = re.compile(r"/Users/[^/]+/|/home/[^/]+/|[A-Za-z]:\\\\Users\\\\[^\\\\]+\\\\", re.IGNORECASE)
+pattern = re.compile(r"/Users/[^/]+/|/home/[^/]+/|[A-Za-z]:\\Users\\[^\\]+\\", re.IGNORECASE)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/check-sensitive-paths.sh` at line 15, The Windows path part of the
regex in the pattern = re.compile(...) line only matches doubled backslashes and
thus misses typical Windows paths with single backslashes; update that Windows
branch of the regex so it matches single backslashes (and optionally one-or-more
backslashes) between path segments (for example by using backslash-escaped
sequences like \\ to represent a single path separator in the regex or a
quantifier to allow one-or-more backslashes), then recompile pattern in
scripts/check-sensitive-paths.sh so pathlib.rglob() paths like C:\Users\... are
detected.
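To see concretely why the doubled-backslash pattern misses real paths, here is a small self-contained check; the pattern below is a variant that tolerates both single and doubled backslashes (via `\\+`), which goes slightly beyond the committed suggestion and is an assumption, not project code:

```python
import re

# Variant of the reviewed pattern: `\\+` in the Windows branch matches one or
# more literal backslashes, covering both C:\Users\... and C:\\Users\\...
pattern = re.compile(
    r"/Users/[^/]+/"                        # macOS home directories
    r"|/home/[^/]+/"                        # Linux home directories
    r"|[A-Za-z]:\\+Users\\+[^\\]+\\+",      # Windows, single or doubled separators
    re.IGNORECASE,
)

samples = {
    r"C:\Users\alice\project\file.txt": True,   # single backslashes (pathlib form)
    "C:\\\\Users\\\\alice\\\\file.txt": True,   # doubled (escaped) representation
    "/Users/bob/work/notes.md": True,           # macOS
    "/home/carol/src/main.py": True,            # Linux
    "src/main/java/Example.java": False,        # repo-relative path, no hit
}
for text, expected in samples.items():
    assert bool(pattern.search(text)) == expected, text
print("all samples behaved as expected")
```

With the original `\\\\`-based branch, the first sample (the form `pathlib.rglob()` actually yields on Windows) is not detected, which is the leak the review flags.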
This removes line-content leakage from path scan hits, broadens Windows path detection, redacts override path errors, applies least-privilege workflow permissions, and avoids eager SlotDisplay list allocation when rewrites are unchanged. Made-with: Cursor
Summary
Stacks protocol and rewrite work for 26.1.x clients on 26.2-snapshot servers: safer blockstate remapping, dump-driven tables shipped in the jar, PacketEvents version bridge hardening, and related mixins / handler updates. Refreshes `prebuilt/legacylink-0.1.1.jar`.

Highlights

- … (commit 42ec174). `PacketEventsVersionBridge`.
- `LegacyPacketHandler`, `README` updates (see commit 731a433).

Testing

- `./gradlew build` passes.

Related
Summary by CodeRabbit
New Features
Improvements