refactor redstone core #28
Conversation
Move existing redstone behavior behind a world-level engine while keeping this PR limited to current redstone-capable blocks and compatibility behavior. Constraint: core-only refactor of existing redstone wire, redstone block, lever, redstone torch, and note block behavior; no new gameplay blocks. Rejected: adding button, pressure plate, redstone lamp, or other redstone content | outside core architecture scope. Rejected: collapsing existing block files into one redstone file | makes the review and future feature PRs harder to isolate. Confidence: high Scope-risk: broad Directive: keep new redstone blocks in follow-up feature PRs and preserve weak/direct/strong power separation. Tested: go test ./server/block ./server/world; go test ./...; git diff --check; core-only grep for button/pressure plate/lamp additions Not-tested: live Bedrock multiplayer/client visual timing beyond manual user spot checks
Keep burned-out redstone torches from recovering from their own loop propagation while still allowing later external input removal to relight them. Constraint: fix existing torch burnout behavior only; no new redstone content. Rejected: clearing burnout whenever input power drops | self-clock loops drop their own input immediately after burnout. Rejected: relying only on direct ScheduledTick tests | missed the world scheduler and graph propagation path. Confidence: high Scope-risk: narrow Directive: burnout recovery must distinguish self-caused loop updates from external input removal. Tested: go test ./server/block ./server/world; go test ./...; git diff --check; core-only grep for button/pressure plate/lamp additions Not-tested: visual Bedrock client timing after push
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the CodeRabbit settings.
📝 Walkthrough
Replaces the legacy per-block redstone updater with a deterministic per-tick redstone engine. Introduces explicit redstone capability interfaces, transaction-scoped redstone APIs, refactors wire/torch/lever/redstone block/glowstone/TNT logic, integrates the engine into World/Tx/tick, and adds extensive tests and a benchmark.

Changes: Redstone Engine Refactoring
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~110 minutes
🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
⚠️ Review ran into problems
🔥 Problems: Git: Failed to clone repository.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🧹 Nitpick comments (10)
server/world/handler.go (1)
14-17: ⚡ Quick win
Breaking API change; worth calling out in release notes.

The signature shift from `HandleRedstoneUpdate(ctx, pos)` to `HandleRedstoneUpdate(ctx, update RedstoneUpdate)` means any external `Handler` implementation still using the old signature no longer satisfies the interface. That holds even for implementations embedding `NopHandler`: a same-name method with the old signature shadows the promoted new one, so the type fails to compile wherever it is passed as a `Handler`. The break is loud, but the compiler error does not explain the migration, so consider documenting it in the PR description / a changelog entry.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/world/handler.go` around lines 14-17: the public Handler interface's HandleRedstoneUpdate signature changed from HandleRedstoneUpdate(ctx, pos) to HandleRedstoneUpdate(ctx, update RedstoneUpdate), which breaks external implementations, including those embedding NopHandler; update the PR description and changelog to call this out and provide a short migration note showing that implementers must replace the old method with HandleRedstoneUpdate(ctx *Context, update RedstoneUpdate) and extract position data from the RedstoneUpdate type.

server/world/redstone_bench_test.go (1)
11-24: ⚡ Quick win
Use `b.Loop()` instead of the manual loop pattern (Go 1.24 best practice).

Since Go 1.24, `testing.B.Loop()` is the recommended way to write benchmarks. It automatically excludes setup overhead, guarantees the loop body isn't optimized away, and eliminates the need for package-level sink variables.

♻️ Proposed refactor

```diff
-var redstoneGraphIDBenchmarkResult uint64
-
 func BenchmarkRedstoneGraphID(b *testing.B) {
 	nodes := redstoneBenchmarkNodes(256)
 	edges := redstoneBenchmarkEdges(len(nodes))
 	b.ReportAllocs()
 	b.SetBytes(int64(len(nodes)*32 + len(edges)*24))
-	b.ResetTimer()
-	var result uint64
-	for i := 0; i < b.N; i++ {
-		result = redstoneGraphID(nodes, edges)
+	for b.Loop() {
+		_ = redstoneGraphID(nodes, edges)
 	}
-	redstoneGraphIDBenchmarkResult = result
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/world/redstone_bench_test.go` around lines 11-24: replace the manual for-loop in BenchmarkRedstoneGraphID with `for b.Loop() { _ = redstoneGraphID(nodes, edges) }` (note that b.Loop takes no arguments and resets the timer on its first call, so the explicit b.ResetTimer can also go); remove the package-level sink redstoneGraphIDBenchmarkResult and its assignment, keeping redstoneBenchmarkNodes, redstoneBenchmarkEdges, and the b.SetBytes/b.ReportAllocs calls as-is.

server/block/redstone_wire.go (1)
180-190: 💤 Low value
Avoid mutating `connections` while ranging over it.

In `case 1`, the loop body adds `face.Opposite()` to the same map it is ranging over. Go's spec allows new map entries to appear in the current iteration or be skipped; it just happens to be safe here because there is exactly one starting key, `face.Opposite() != face`, and the write is idempotent if revisited. This is fragile to future changes (e.g., if the case starts handling more than one entry). Capture the keys first:

♻️ Suggested cleanup

```diff
 	switch len(connections) {
 	case 0:
 		for _, face := range cube.HorizontalFaces() {
 			connections[face] = true
 		}
 	case 1:
-		for face := range connections {
-			connections[face.Opposite()] = true
-		}
+		var only cube.Face
+		for face := range connections {
+			only = face
+		}
+		connections[only.Opposite()] = true
 	}
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/block/redstone_wire.go` around lines 180-190: the code mutates the connections map while ranging over it in the len(connections) == 1 case; collect the existing keys into a temporary slice first (e.g. keys := make([]cube.Face, 0, len(connections))), then iterate that slice and add each face.Opposite() into connections, so map writes do not occur while directly ranging over the map.

server/world/redstone.go (4)
488-490: 💤 Low value
Reassigning the `newPower` parameter is easy to misread.

`oldPower, newPower := e.power[pos], clampRedstonePower(newPower)` reuses the function parameter name, so the read and write happen in the same statement. Functionally correct (Go's `:=` reassigns `newPower` in the same scope rather than shadowing it), but consider naming the clamped local distinctly for readability:

♻️ Suggested rename

```diff
-func (e *redstoneEngine) update(tx *Tx, pos cube.Pos, d redstoneDirty, graphID uint64, newPower int) {
-	b := tx.Block(pos)
-	oldPower, newPower := e.power[pos], clampRedstonePower(newPower)
+func (e *redstoneEngine) update(tx *Tx, pos cube.Pos, d redstoneDirty, graphID uint64, rawPower int) {
+	b := tx.Block(pos)
+	oldPower := e.power[pos]
+	newPower := clampRedstonePower(rawPower)
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/world/redstone.go` around lines 488-490: the parameter name newPower is reused in oldPower, newPower := e.power[pos], clampRedstonePower(newPower), which is confusing; rename the parameter (e.g. rawPower) or introduce a distinct local for the clamped value (e.g. clampedPower), keep oldPower := e.power[pos], and update subsequent uses to the new name instead of reusing the parameter.
1163-1182: 💤 Low value
`redstoneStepFace` collapses diagonal vectors to a single horizontal face.

For step-up/step-down wire connections (e.g., `sideAbove = pos.Side(face).Side(FaceUp)`), `dx != 0` and `dy != 0` hold simultaneously, and the function returns the horizontal face only. This works today because the only relayer with non-orthogonal neighbours is `RedstoneWire`, whose `RedstoneSignalLoss` is a constant 1 regardless of `from`/`to`. If a future relayer's signal loss depends on whether the edge climbs or descends, the engine will silently lose that distinction (it will always look like a flat horizontal step). Worth either:
- documenting that step-up/step-down connections are always reported by their horizontal projection, or
- recording the vertical component when present.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/world/redstone.go` around lines 1163-1182: redstoneStepFace currently prioritises horizontal components, so diagonal step-up/step-down moves (dx != 0 and dy != 0) collapse to a horizontal face; to preserve vertical motion, check dy first and return cube.FaceUp or cube.FaceDown when dy != 0, otherwise fall back to the existing horizontal checks (cube.FaceEast/West/South/North) and the default, so relayers whose loss depends on vertical vs. horizontal edges can distinguish them.
264-283: 💤 Low value
Combine the two `DeleteFunc` passes over `e.dirty`.

The two consecutive `maps.DeleteFunc` calls on `e.dirty` are equivalent to a single pass with an OR predicate. Combining them halves the map traversal cost on chunk unload and improves readability.

♻️ Suggested combine

```diff
-	maps.DeleteFunc(e.dirty, func(pos cube.Pos, _ redstoneDirty) bool {
-		return chunkPosFromBlockPos(pos) == chunkPos
-	})
-	maps.DeleteFunc(e.dirty, func(_ cube.Pos, dirty redstoneDirty) bool {
-		return chunkPosFromBlockPos(dirty.changed) == chunkPos
-	})
+	maps.DeleteFunc(e.dirty, func(pos cube.Pos, dirty redstoneDirty) bool {
+		return chunkPosFromBlockPos(pos) == chunkPos ||
+			(dirty.hasChanged && chunkPosFromBlockPos(dirty.changed) == chunkPos)
+	})
```

Note: gating the second clause on `dirty.hasChanged` also prevents accidentally matching a zero-valued `changed` field that was never set; the public `RedstoneUpdate` doc explicitly warns against using `cube.Pos{}` as an absence check.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/world/redstone.go` around lines 264-283: in removeChunk, replace the two maps.DeleteFunc calls over e.dirty with a single maps.DeleteFunc that returns true if either the map key maps to chunkPos (chunkPosFromBlockPos(pos) == chunkPos) or the stored redstoneDirty has a valid changed position that maps to chunkPos; use the redstoneDirty.hasChanged flag when checking dirty.changed to avoid matching a zero-valued cube.Pos. Keep the other maps.DeleteFunc calls for e.power, e.output and e.torchBurnout unchanged.
395-410: 💤 Low value
Consider avoiding redundant `compileAdjacentRedstone` passes for candidates that yield no redstone blocks.

The first loop calls `compileAdjacentRedstone(candidates[i])` for every candidate, and the second loop visits every discovered node, including those already processed, re-running `compileAdjacentRedstone` on the same positions. This adds an extra `blockLoaded` + `redstoneBlockMayConduct` call (6 face checks) per candidate per tick. For sparse dirty sets this is negligible; for large circuits touched in one tick it may add up.

A safe optimization is to only call `compileAdjacentRedstone` in the first loop when `compileRegion` did not add any node for that candidate. This handles the case where a candidate is a non-redstone block with adjacent redstone components:

♻️ Suggested optimization

```diff
 	for _, pos := range candidates {
+		priorLen := len(nodes)
 		e.compileRegion(tx, pos, seen, &nodes)
+		if len(nodes) == priorLen {
+			e.compileAdjacentRedstone(tx, pos, seen, &nodes)
+		}
 	}
 	for i := 0; i < len(nodes); i++ {
 		e.compileAdjacentRedstone(tx, nodes[i].pos, seen, &nodes)
 	}
```

If you drop the first loop's call entirely without the conditional, you would miss redstone adjacent to non-redstone block updates (e.g., grass placed next to wiring).
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/world/redstone.go` around lines 395-410: in compile(tx *Tx, candidates []cube.Pos), call compileRegion(tx, pos, seen, &nodes) first, record whether nodes grew (e.g. priorLen := len(nodes)), and only call compileAdjacentRedstone(tx, pos, seen, &nodes) when len(nodes) == priorLen (i.e. compileRegion added no node for that candidate); keep the second loop and subsequent processing (sorting, compileEdges, redstoneGraphID) unchanged, and continue to use the same seen map and nodes slice to avoid regressing behavior for non-redstone-block updates.

server/world/tx_redstone.go (1)
15-18: 💤 Low value
Optional: the name suggests scheduling, but it's immediate invalidation.

`ScheduleUpdate` calls `invalidateAround` with `RedstoneUpdateCauseScheduledTick` synchronously; the deferral is implicit (next world `redstone.tick`). Consider renaming to `InvalidateAround`, or documenting that the function does not take a delay parameter, otherwise callers may expect behavior analogous to `World.ScheduleBlockUpdate`.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/world/tx_redstone.go` around lines 15-18: the name ScheduleUpdate is misleading because RedstoneTransaction.ScheduleUpdate calls r.tx.World().redstone.invalidateAround(...) immediately with RedstoneUpdateCauseScheduledTick (there is no delay parameter); either rename ScheduleUpdate to InvalidateAround (or InvalidateAroundForNextTick) or update the doc comment to state explicitly that it performs synchronous invalidation that causes re-evaluation on the next redstone tick; update all callers accordingly.

server/block/redstone_torch_burnout_test.go (1)
46-67: 💤 Low value
`burnedOutTick == 0` is an ambiguous "not yet burned out" sentinel.

`currentTick` is read from `tx.CurrentTick()` and a zero value is theoretically a valid tick. Use a dedicated boolean (or initialize to `-1`) so the test cannot misreport a real tick-0 burnout as "did not burn out", and so the post-burnout stability window in the second loop is unambiguous.

🧪 Suggested cleanup

```diff
-	var lit, burnedOut, attachmentPowered bool
+	var lit, burnedOut, attachmentPowered, burnedOutSeen bool
 	var currentTick, burnedOutTick int64
 	dustPower := make(map[cube.Pos]int, len(dustPositions))
 	for deadline := time.Now().Add(5 * time.Second); time.Now().Before(deadline); {
 		<-ticker.C
 		redstoneTorchBurnoutTestSnapshot(w, torchPos, dustPositions, &currentTick, &lit, &burnedOut, &attachmentPowered, dustPower)
 		if burnedOut && !lit {
 			burnedOutTick = currentTick
+			burnedOutSeen = true
 			break
 		}
 	}
-	if burnedOutTick == 0 {
+	if !burnedOutSeen {
 		t.Fatalf("redstone torch loop did not burn out through world scheduler; tick=%d lit=%t burnedOut=%t attachmentPowered=%t dust=%v", currentTick, lit, burnedOut, attachmentPowered, dustPower)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/block/redstone_torch_burnout_test.go` around lines 46-67: the test uses burnedOutTick == 0 as a sentinel, which collides with a valid tick value; initialize burnedOutTick to -1 (or add a boolean such as burnedOutSeen), set it when redstoneTorchBurnoutTestSnapshot reports burnout, and replace the burnedOutTick == 0 checks with the new sentinel in both loops so a real tick-0 burnout is not misreported and the post-burnout stability window remains unambiguous.

server/world/world.go (1)
504-516: 💤 Low value
Allocation per call in `liquidLoaded`.

`for _, layer := range []uint8{0, 1}` builds a small slice every call. Since liquid lookups can be hot during redstone propagation through partially loaded regions, you can unroll trivially:

♻️ Suggested unroll

```diff
 	x, y, z := uint8(pos[0]), int16(pos[1]), uint8(pos[2])
-	for _, layer := range []uint8{0, 1} {
+	for layer := uint8(0); layer <= 1; layer++ {
 		id := c.Block(x, y, z, layer)
 		b, ok := w.conf.Blocks.BlockByRuntimeID(id)
 		if !ok {
 			w.conf.Log.Error("liquidLoaded: no block with runtime ID", "ID", id)
 			return nil, false
 		}
 		if liq, ok := b.(Liquid); ok {
 			return liq, true
 		}
 	}
 	return nil, false
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/world/world.go` around lines 504-516: the loop in liquidLoaded ranges over a []uint8{0, 1} slice literal on every call; replace it with a plain counting loop over layers 0 and 1 (or two explicit per-layer checks) so no slice is built, keeping the same error logging via w.conf.Log.Error and the same return semantics so behavior is unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@server/block/redstone_torch.go`:
- Around line 119-132: The RedstoneTorch type can hold an "unknown" cube.Face
which breaks logic in methods like RedstonePower, RedstoneStrongPower and code
paths that call pos.Side(t.Facing) (e.g., attachmentPowered,
NeighbourUpdateTick) because comparisons and side lookups become invalid; fix
this by normalizing the face before use—either reject/convert unknown during
decode/registration or, more simply, replace direct uses of t.Facing with a
helper that returns a concrete face (use the suggested facing() method) so
RedstonePower, RedstoneStrongPower, attachmentPowered, NeighbourUpdateTick and
any pos.Side(...) calls use t.facing() instead of t.Facing.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d75a4f3f-c0ae-4c85-9a19-b884e3ec6235
📒 Files selected for processing (22)
- server/block/glowstone.go
- server/block/lever.go
- server/block/note.go
- server/block/redstone.go
- server/block/redstone_block.go
- server/block/redstone_torch.go
- server/block/redstone_torch_burnout_test.go
- server/block/redstone_wire.go
- server/block/redstone_wire_test.go
- server/block/tnt.go
- server/world/block.go
- server/world/conf.go
- server/world/handler.go
- server/world/redstone.go
- server/world/redstone/state.go
- server/world/redstone_bench_test.go
- server/world/redstone_test.go
- server/world/tick.go
- server/world/tx.go
- server/world/tx_redstone.go
- server/world/world.go
- server/world/world_test.go
💤 Files with no reviewable changes (3)
- server/world/redstone/state.go
- server/block/redstone.go
- server/world/block.go
Fix redstone torch burnout to trigger at the configured eighth self turn-off and reload live torch state before scheduled mutation so delayed updates cannot act on stale receivers. Constraint: Minecraft Wiki documents burnout after more than eight state changes within 60 game ticks; this engine tracks forced turn-offs as the burnout signal. Rejected: Leave strict greater-than threshold | It requires a ninth recorded turn-off and contradicts the configured threshold. Confidence: high Scope-risk: narrow Directive: Keep scheduled torch mutation based on live world state, not caller-held block values. Tested: go test ./server/block ./server/world Not-tested: Full manual in-game redstone smoke pass
🧹 Nitpick comments (4)
server/world/redstone_test.go (4)
53-71: 💤 Low value
Tighten the burnout-window assertion wording.

The messages "eight turn-offs left in window" and "fewer than eight turn-offs left in window" are ambiguous about whether "left" refers to expired or still-active entries, especially since both assertions begin with `burnedOut: true`. Consider phrasing in terms of the rolling window directly, e.g. "all eight turn-offs still inside the window" vs. "earliest turn-off has aged out of the window". This is a readability nit on test failure output only.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/world/redstone_test.go` around lines 53-71: update the failure messages in TestRedstoneTorchBurnoutRecoveryUsesRollingWindow to state the window semantics clearly; when calling e.torchBurnoutStatus(pos, redstoneTorchBurnoutWindowTicks-1) say "all eight turn-offs still inside the rolling window", and when calling e.torchBurnoutStatus(pos, redstoneTorchBurnoutWindowTicks) say "earliest turn-off has aged out of the rolling window", so failures from torchBurnoutStatus and recordTorchTurnOff are unambiguous.
949-967: 💤 Low value
`redstoneNonConductiveSolidBlock` collides with `redstoneSolidBlock` on hash and encode name.

By embedding `redstoneSolidBlock`, the non-conductive variant inherits `Hash() = (1<<48, 0)` and `EncodeBlock()` returning `"test:solid_block"`. Today this is safe because `TestRedstoneStrongPowerConductorExcludesMarkedNonConductors` invokes `redstoneStrongPowerConductor` directly without a registry, but if a future test registers both types in the same `BlockRegistry` (e.g. to cover non-conductive solids through the engine end-to-end), the hashes and encodings will collide.

Cheap to harden by giving the embedded variant its own identity:

🛡️ Proposed fix

```diff
 type redstoneNonConductiveSolidBlock struct {
 	redstoneSolidBlock
 }
 
 func (redstoneNonConductiveSolidBlock) RedstoneNonConductive() {}
+
+func (redstoneNonConductiveSolidBlock) EncodeBlock() (string, map[string]any) {
+	return "test:non_conductive_solid_block", nil
+}
+
+func (redstoneNonConductiveSolidBlock) Hash() (uint64, uint64) { return 1 << 56, 0 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/world/redstone_test.go` around lines 949-967: redstoneNonConductiveSolidBlock embeds redstoneSolidBlock and thus inherits the same EncodeBlock() and Hash(), risking collisions if both types are ever registered together; override EncodeBlock() and Hash() on redstoneNonConductiveSolidBlock to return a distinct name (e.g. "test:non_conductive_solid_block") and hash values different from redstoneSolidBlock's, so the two types have separate identities and future registry use cannot collide.
172-201: ⚡ Quick win
Strengthen the out-of-bounds invalidation assertion.

The second `invalidateAround` call (line 197) passes a different `cause` (`RedstoneUpdateCauseScheduledTick` vs. the original `RedstoneUpdateCauseBlockUpdate`), but the follow-up assertion only compares `len(engine.dirty)` against `len(want)`. If a future change caused the out-of-bounds invalidation to mutate an existing dirty entry's `cause` rather than add a new one, the count check would still pass even though behavior regressed.

Re-running the same map-equality loop used on lines 191-195 after the second call would catch that case at negligible cost.

🛡️ Proposed strengthening

```diff
 	engine.invalidateAround(cube.Pos{0, -1, 0}, changed, RedstoneUpdateCauseScheduledTick, cube.Range{0, 1})
 	if len(engine.dirty) != len(want) {
 		t.Fatalf("out-of-bounds invalidation changed dirty positions to %v, want %v", engine.dirty, want)
 	}
+	for pos, dirty := range want {
+		if got, ok := engine.dirty[pos]; !ok || got != dirty {
+			t.Fatalf("dirty[%v] after out-of-bounds invalidation = %v, %t; want %v, true", pos, got, ok, dirty)
+		}
+	}
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/world/redstone_test.go` around lines 172 - 201, After the out-of-bounds invalidateAround call using RedstoneUpdateCauseScheduledTick, re-check that engine.dirty still exactly equals the original want map (not just the length) to ensure no existing entries were mutated (including the cause field); use the same map-equality loop that iterates over want and compares engine.dirty[pos] to dirty so any change to fields like cause (e.g., from RedstoneUpdateCauseBlockUpdate to RedstoneUpdateCauseScheduledTick) will fail the test.
270-289: 💤 Low value: Package-level test state is fine sequentially but blocks `t.Parallel`.
`redstoneCancellationActions` (and `scheduledTickTestBlockTicks` below) is a shared `*int` mutated from `RedstonePowerAction`/`ScheduledTick`. The current sequential test order plus `t.Cleanup` makes this safe, but any future addition of `t.Parallel()` on these tests would introduce a race. Worth noting in a comment near the variable declaration, or passing the counter through the registered block instance to avoid the global entirely. Defer-able.
🤖 Prompt for AI Agents
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Nitpick comments:
In `@server/world/redstone_test.go`:
- Around line 53-71: Update the failing messages in
TestRedstoneTorchBurnoutRecoveryUsesRollingWindow to clearly state the window
semantics: when calling e.torchBurnoutStatus(pos,
redstoneTorchBurnoutWindowTicks-1) change the message to say "all eight
turn-offs still inside the rolling window" and when calling
e.torchBurnoutStatus(pos, redstoneTorchBurnoutWindowTicks) change the message to
say "earliest turn-off has aged out of the rolling window" so failures from
torchBurnoutStatus and recordTorchTurnOff are unambiguous.
- Around line 949-967: redstoneNonConductiveSolidBlock currently embeds
redstoneSolidBlock and thus inherits the same EncodeBlock() and Hash(), risking
collisions if both types are registered; override EncodeBlock() and Hash() on
redstoneNonConductiveSolidBlock to return a distinct name (e.g.
"test:non_conductive_solid_block") and unique hash values different from
redstoneSolidBlock's (not 1<<48,0) so the two types have separate identities;
implement these methods on the redstoneNonConductiveSolidBlock type (referencing
EncodeBlock and Hash) to harden against future registry collisions.
- Around line 172-201: After the out-of-bounds invalidateAround call using
RedstoneUpdateCauseScheduledTick, re-check that engine.dirty still exactly
equals the original want map (not just the length) to ensure no existing entries
were mutated (including the cause field); use the same map-equality loop that
iterates over want and compares engine.dirty[pos] to dirty so any change to
fields like cause (e.g., from RedstoneUpdateCauseBlockUpdate to
RedstoneUpdateCauseScheduledTick) will fail the test.
- Around line 270-289: The test uses package-level mutable state
(redstoneCancellationActions and scheduledTickTestBlockTicks) which will race if
tests are run with t.Parallel(); instead, remove the global pointers and pass
counters into the registered block/handler instances: add an int field to
redstoneCancellationHandler (and the scheduled-tick test handler) to hold the
counter, initialize it in the test, update RedstonePowerAction/ScheduledTick
implementations to increment that handler-local counter rather than the package
variable, and keep the t.Cleanup to nil out any pointers if you must;
alternatively, if you must keep the globals add a clear comment at their
declaration warning that tests using them must not call t.Parallel().
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: de9847f0-1334-4ba3-98c1-6acdf735d5d8
📒 Files selected for processing (7)
- server/block/redstone_torch.go
- server/block/redstone_wire_test.go
- server/world/handler.go
- server/world/redstone.go
- server/world/redstone_test.go
- server/world/tx.go
- server/world/world.go
🚧 Files skipped from review as they are similar to previous changes (5)
- server/world/handler.go
- server/world/world.go
- server/block/redstone_wire_test.go
- server/world/redstone.go
- server/block/redstone_torch.go
Make runtime redstone behavior match vanilla-facing expectations for legacy torch states, vertical dust steps, TNT ignition, and action side effects while preserving the new graph scheduler contract. Constraint: PR #28 intentionally accepts the public redstone API break, so this follow-up focuses on correctness and parity issues raised during review. Rejected: Run basic RedstonePowerAction on every dirty same-power evaluation | It repeats side effects for no power transition and makes TNT/action blocks unsafe under ordinary neighbour invalidation. Confidence: high Scope-risk: moderate Directive: Keep context-aware actions eligible for dirty same-power refreshes, but basic old/new power actions must only run on real power transitions. Tested: go test ./server/block -run 'TestTNT|TestRedstoneTorchUnknown|TestRedstoneTorchTurnsOff|TestRedstoneTorchBurnsOut' -count=1 Tested: go test ./server/world -run 'TestRedstoneActionOnlyRunsOnPowerChange|TestRedstoneCancelledActionDoesNotRun|TestRedstoneStepFace|TestRedstoneTorchBurnoutRecoveryUsesRollingWindow' -count=1 Tested: go test ./server/block ./server/world Tested: go test ./... Not-tested: Manual Bedrock client redstone parity pass
Expose one world-level redstone clamp helper so block and engine code use the same vanilla 0-15 signal boundary. Constraint: Cursor Bugbot flagged duplicate newly introduced clamp logic across block and world packages. Rejected: Keep separate block-local clamp helper | It leaves two implementations of the same redstone signal bound and keeps the review thread actionable. Confidence: high Scope-risk: narrow Directive: Reuse ClampRedstonePower for redstone signal bounds instead of reintroducing package-local copies. Tested: go test ./server/block -run 'TestRedstoneWire|TestTNT' -count=1 Tested: go test ./server/world -run 'TestClampRedstonePower|TestRedstoneActionOnlyRunsOnPowerChange|TestRedstoneStepFace' -count=1 Tested: go test ./server/block ./server/world Not-tested: Manual Bedrock client redstone parity pass
Close low-risk review hardening around dirty chunk cleanup, clearer burnout assertions, test block identity, and transaction redstone wording. Constraint: Completion audit found valid older CodeRabbit top-level suggestions that were not active inline threads but still reduced review confidence. Rejected: Leave removeChunk matching dirty.changed without hasChanged | It can drop unrelated dirty entries whose changed position is only the zero value. Confidence: high Scope-risk: narrow Directive: Dirty cleanup by changed position must only use changed when hasChanged is true. Tested: go test ./server/world -run 'TestRedstoneTorchBurnoutRecoveryUsesRollingWindow|TestRedstoneEngineInvalidateAround|TestRedstoneEngineRemoveChunkKeepsUnchangedDirtyOutsideChunk|TestScheduledTickQueue|TestRedstoneStrongPowerConductorExcludesMarkedNonConductors' -count=1 Tested: go test ./server/world Not-tested: Manual Bedrock client redstone parity pass
Add regression coverage for the vanilla behavior where a redstone torch attached to a redstone block sees its attachment as powered. Constraint: Cursor Bugbot reported the existing behavior as incorrect, but Minecraft Wiki describes redstone torches as deactivating when their attachment block is powered and redstone blocks as permanently powered blocks. Rejected: Change torch attachmentPowered to ignore RedstoneBlock's own source power | That would contradict vanilla redstone behavior and make attached torches stay lit incorrectly. Confidence: high Scope-risk: narrow Directive: Do not remove redstone-block attachment power from torch inversion without new vanilla parity evidence. Tested: go test ./server/block -run 'TestRedstoneTorchTurnsOffOnRedstoneBlockAttachment|TestRedstoneTorchTurnsOffOnPoweredConductiveAttachment|TestRedstoneTorchUnknown' -count=1 Not-tested: Manual Bedrock client redstone parity pass
Remove unused comparator-facing API surface and add regression coverage proving lever changes still reach consumers behind the attached block through graph invalidation. Constraint: Cursor reported lever propagation, wire step-down, and unused comparator interface findings on the latest PR review. Rejected: Change wire step-down support semantics | Existing Bedrock/glowstone tests and Minecraft Wiki parity support checking the upper dust support block for downward transmission. Rejected: Reintroduce directional lever propagation | The graph compiler already reaches redstone behind dirty conductive neighbours, and the regression test proves the path. Confidence: high Scope-risk: narrow Directive: Keep lever propagation covered by a ticked consumer-behind-attachment regression. Tested: go test ./server/block -run 'TestLeverUpdatesConsumerBehindAttachedBlock|TestLeverStrongPowersAttachedBlockFace|TestRedstoneWireTransmitsDownGlassInBedrock|TestRedstoneWireTransmitsUpButNotDownGlowstone' -count=1 Tested: go test ./server/world -run TestClampRedstonePower -count=1 Tested: go test ./server/block ./server/world Tested: go test ./... Not-tested: Manual Bedrock client redstone parity pass
Keep the redstone review fixes easier to read by removing redundant clamp and action bookkeeping while preserving the tested runtime behavior. Constraint: Code simplifier pass was limited to the recent redstone PR files and behavior-preserving cleanup. Rejected: Broaden cleanup into test abstraction or API reshaping | The existing tests are explicit parity fixtures and the PR already accepted only the intentional redstone API break. Confidence: high Scope-risk: narrow Directive: Keep redstone clamp calls routed through world.ClampRedstonePower rather than reintroducing block-local wrappers. Tested: git diff --check Tested: go test -count=1 ./server/block ./server/world Tested: go test ./... Not-tested: Manual Bedrock client redstone parity pass
There was a problem hiding this comment.
🧹 Nitpick comments (1)
server/world/redstone.go (1)
1083-1090: 💤 Low value: Add a clarifying comment about the assumed face direction in the `compileEdges` weight computation.
At lines 1083-1090, the call `relayer.RedstoneSignalLoss(node.pos, tx, face.Opposite(), face)` hardcodes the incoming face as the exit's opposite, assuming straight-through traversal in the compiled-edge model. While all current implementations (`RedstoneWire` and test relayers) ignore the `from` parameter and return constant loss 1, any future relayer with direction-dependent losses would silently receive the wrong incoming face. A brief comment clarifying this assumption would help future implementers recognize the constraint; alternatively, remove the unused `from` parameter from the `RedstonePowerRelayer` interface since all consumers ignore it.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/world/redstone.go` around lines 1083 - 1090, The weight computation in compileEdges (where redstoneStepFace(node.pos, neighbour) is used and relayer.RedstoneSignalLoss(node.pos, tx, face.Opposite(), face) is called) implicitly assumes the "from" face equals the exit's opposite (straight-through traversal), which may be surprising for future RedstonePowerRelayer implementations (e.g., RedstoneWire) that could be direction-sensitive; add a brief clarifying comment immediately above that call explaining this assumption (that compiled edges model treats incoming face as face.Opposite()), and/or remove the unused "from" parameter from the RedstonePowerRelayer interface if you confirm all current relayers ignore it, updating RedstoneSignalLoss implementations accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Nitpick comments:
In `@server/world/redstone.go`:
- Around line 1083-1090: The weight computation in compileEdges (where
redstoneStepFace(node.pos, neighbour) is used and
relayer.RedstoneSignalLoss(node.pos, tx, face.Opposite(), face) is called)
implicitly assumes the "from" face equals the exit's opposite (straight-through
traversal), which may be surprising for future RedstonePowerRelayer
implementations (e.g., RedstoneWire) that could be direction-sensitive; add a
brief clarifying comment immediately above that call explaining this assumption
(that compiled edges model treats incoming face as face.Opposite()), and/or
remove the unused "from" parameter from the RedstonePowerRelayer interface if
you confirm all current relayers ignore it, updating RedstoneSignalLoss
implementations accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 56ee706f-014e-41e7-92bf-59b085210ac7
📒 Files selected for processing (7)
- server/block/redstone_torch.go
- server/block/redstone_wire.go
- server/block/redstone_wire_test.go
- server/block/tnt.go
- server/world/redstone.go
- server/world/redstone_test.go
- server/world/tx_redstone.go
🚧 Files skipped from review as they are similar to previous changes (3)
- server/world/tx_redstone.go
- server/block/tnt.go
- server/block/redstone_torch.go
Keep the redstone core focused on capabilities that have current production users, while preserving the context-action path needed by torch burnout and wire discovery. Constraint: Cursor reported wire discovery omitted RedstonePowerContextAction, and review flagged unused redstone API and graph hashing as speculative overhead. Rejected: Keep NetworkID and redstoneGraphID for possible future diagnostics | No caller consumes them today, so the hash is per-tick overhead without behavior value. Rejected: Keep transition/sound/stored-power interfaces for later comparator or sound features | Dragonfly conventions favor adding exported interfaces with the feature that needs them. Confidence: high Scope-risk: moderate Directive: Re-add removed redstone interfaces only with a production block or handler that consumes them. Tested: git diff --check Tested: go test -count=1 ./server/block ./server/world Tested: go test -run '^$' -bench 'BenchmarkRedstoneDirtyTickLongLineWithClocks' ./server/world Tested: go test ./... Not-tested: Manual Bedrock client redstone parity pass
Remove direction parameters the graph solver cannot honor consistently, so relayer loss is modeled as one edge cost across both compiled graph and direct power probing paths. Constraint: Cursor flagged compileEdges passing a synthetic from-face that could diverge from powerFrom for any future direction-sensitive relayer. Rejected: Keep from/to parameters with a clarifying comment | The current graph edge model has no actual entry-face state, so documenting the mismatch would preserve a misleading API. Rejected: Change glowstone/glass wire step-down and step-up semantics | Existing Bedrock parity tests require glowstone ladders to transmit upward, glowstone not to transmit downward, and glass to transmit downward. Confidence: high Scope-risk: narrow Directive: Add directional redstone relayer behavior only with a graph model that carries entry direction through propagation. Tested: git diff --check Tested: go test -count=1 ./server/block -run 'TestRedstoneWireTransmitsUpButNotDownGlowstone|TestRedstoneWireTransmitsDownGlassInBedrock|TestRedstoneWireGlowstoneLadderDoesNotOscillateAfterNeighbourBlockUpdate' Tested: go test -count=1 ./server/world -run 'TestRedstoneRelayerToSinkDoesNotLosePower|TestRedstoneVerticalRelayerPropagation' Tested: go test -count=1 ./server/block ./server/world Tested: go test -run '^$' -bench 'BenchmarkRedstoneDirtyTickLongLineWithClocks' ./server/world Tested: go test ./... Not-tested: Manual Bedrock client redstone parity pass
Keep the redstone base small by removing a discarded action return value, documenting intentional Tx query methods, sharing conductor classification through one exported helper, and avoiding unnecessary old-block reads where redstone updates are suppressed. Constraint: PR 28 is a redstone core/base change, so exported surfaces must be intentional rather than speculative. Rejected: Reintroducing unused future-facing redstone interfaces | they encode unproven repeater/comparator/piston design before implementation needs them. Confidence: high Scope-risk: moderate Directive: Add future redstone extension interfaces with the block feature that first consumes them. Tested: git diff --check; go test -count=1 ./server/block ./server/world; go test ./... Not-tested: External downstream custom block implementations.
There was a problem hiding this comment.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@server/world/redstone.go`:
- Around line 283-297: The propagation calculation runs before cancelled
relayers are excluded, so modify the flow to exclude cancelled sources before
computing powers: after calling updateGraphSources and setting
e.suppressedSources, remove cancelledSources from the propagation graph (either
by filtering graph.nodes or by re-running e.compile with candidates minus
cancelledSources) so that graphPower considers the suppressed set, then compute
powers and call e.update on sinks; updateGraphSources, compile, graphPower,
suppressedSources, graph.nodes and update are the symbols to change.
- Around line 852-866: The traversal's visited cache uses seen map keyed only by
cube.Pos which incorrectly treats different incoming-face states as identical;
update the cache used in RedstonePowerFrom's BFS (the step struct and its queue
processing) to key by a composite of position and entry face (e.g. pos + from)
or include the from value in the map key so states like (pos=A, from=west) and
(pos=A, from=north) are tracked separately; ensure all checks/assignments that
consult seen (the seen lookup, the seen assignment, and the loss comparison) use
the new composite key while keeping the existing early exits on s.loss and
s.depth unchanged.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8531f8e4-eddd-4487-aa14-89c035ef746e
📒 Files selected for processing (9)
- server/block/redstone_torch.go
- server/block/redstone_wire.go
- server/block/redstone_wire_test.go
- server/block/tnt.go
- server/world/redstone.go
- server/world/redstone_bench_test.go
- server/world/redstone_test.go
- server/world/tx_redstone.go
- server/world/world.go
🚧 Files skipped from review as they are similar to previous changes (5)
- server/block/tnt.go
- server/world/world.go
- server/block/redstone_wire.go
- server/block/redstone_wire_test.go
- server/block/redstone_torch.go
graph := e.compile(tx, candidates)
cancelledSources, checkedSources := e.updateGraphSources(tx, graph, dirty)
previousSuppressed := e.suppressedSources
e.suppressedSources = cancelledSources
defer func() {
	e.suppressedSources = previousSuppressed
}()

powers := e.graphPower(tx, graph)
for i, node := range graph.nodes {
	d := redstoneDirtyContext(dirty, node.pos)
	if node.sink {
		e.update(tx, node.pos, d, powers[i])
	}
}
There was a problem hiding this comment.
Cancelled relayer updates still propagate downstream power.
`graphPower()` runs before `e.update()` can cancel sink/relayer mutations, so a cancelled wire-like node still powers downstream sinks/actions in the same tick. That breaks the `RedstoneUpdate` contract here, which says cancellation suppresses propagation from the blocked mutation. This needs a second pass or a preflight step that removes cancelled relayers from the propagation graph before applying powers.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@server/world/redstone.go` around lines 283 - 297, The propagation calculation
runs before cancelled relayers are excluded, so modify the flow to exclude
cancelled sources before computing powers: after calling updateGraphSources and
setting e.suppressedSources, remove cancelledSources from the propagation graph
(either by filtering graph.nodes or by re-running e.compile with candidates
minus cancelledSources) so that graphPower considers the suppressed set, then
compute powers and call e.update on sinks; updateGraphSources, compile,
graphPower, suppressedSources, graph.nodes and update are the symbols to change.
queue := []step{{pos: pos.Side(face), from: face.Opposite(), loss: 0, depth: 0}}
seen := make(map[cube.Pos]int, 16)
for len(queue) != 0 {
	s := queue[0]
	queue = queue[1:]
	if s.pos.OutOfBounds(tx.Range()) || s.loss >= 15 || s.depth >= 15 {
		continue
	}
	if s.pos == pos {
		continue
	}
	if loss, ok := seen[s.pos]; ok && loss <= s.loss {
		continue
	}
	seen[s.pos] = s.loss
There was a problem hiding this comment.
Track visited relayer states by entry face, not just position.
This traversal carries `from` and forbids stepping back through that face, so (pos=A, from=west) and (pos=A, from=north) are not equivalent states. Caching only by `cube.Pos` can prune a valid second path and return too little power for `RedstonePowerFrom(...)`.
Suggested fix
type step struct {
pos cube.Pos
from cube.Face
loss int
depth int
}
+ type visit struct {
+ pos cube.Pos
+ from cube.Face
+ }
queue := []step{{pos: pos.Side(face), from: face.Opposite(), loss: 0, depth: 0}}
- seen := make(map[cube.Pos]int, 16)
+ seen := make(map[visit]int, 16)
for len(queue) != 0 {
s := queue[0]
queue = queue[1:]
@@
- if loss, ok := seen[s.pos]; ok && loss <= s.loss {
+ key := visit{pos: s.pos, from: s.from}
+ if loss, ok := seen[key]; ok && loss <= s.loss {
continue
}
- seen[s.pos] = s.loss
+ seen[key] = s.loss
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@server/world/redstone.go` around lines 852 - 866, The traversal's visited
cache uses seen map keyed only by cube.Pos which incorrectly treats different
incoming-face states as identical; update the cache used in RedstonePowerFrom's
BFS (the step struct and its queue processing) to key by a composite of position
and entry face (e.g. pos + from) or include the from value in the map key so
states like (pos=A, from=west) and (pos=A, from=north) are tracked separately;
ensure all checks/assignments that consult seen (the seen lookup, the seen
assignment, and the loss comparison) use the new composite key while keeping the
existing early exits on s.loss and s.depth unchanged.
Allow burned-out torches to recover from scheduled redstone updates caused by other components while continuing to suppress self-triggered scheduled recovery loops. Constraint: Cursor found scheduled tick recovery was filtered by cause instead of source identity. Rejected: Rejecting all scheduled-tick recovery updates | it blocks valid external input changes from recovering a burned-out torch. Confidence: high Scope-risk: narrow Directive: Use RedstoneUpdate source metadata, not only cause, when distinguishing self-triggered redstone updates. Tested: git diff --check; go test -count=1 ./server/block -run 'TestBurnedOutRedstoneTorch'; go test -count=1 ./server/block ./server/world; go test ./... Not-tested: Live multi-torch in-game contraption timing.
There was a problem hiding this comment.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@server/block/redstone_wire_test.go`:
- Around line 824-859:
TestBurnedOutRedstoneTorchDoesNotSelfRecoverWhenLoopUnpowersInput is a
byte-for-byte duplicate of the previous burnout test; either remove/merge it or
change its fixture so it exercises the distinct "loop unpowers input" case. To
fix, edit TestBurnedOutRedstoneTorchDoesNotSelfRecoverWhenLoopUnpowersInput:
either delete the duplicate test, or modify the sequence around
torchPos/inputPos (variables torchPos, attachmentPos, inputPos) and the redstone
events (calls to tx.Redstone().Torch(...).MarkSelfTriggered(), ScheduledTick,
and RedstonePowerActionUpdate) so the input is actually unpowered by the torch's
own loop (e.g. toggle a neighboring wire or change the final RedstoneUpdate
Cause/ChangedNeighbour to simulate a loop-driven unpower) rather than repeating
the exact same steps as the prior test.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8d534c8d-e32d-43af-a78d-31f5a80b5114
📒 Files selected for processing (2)
- server/block/redstone_torch.go
- server/block/redstone_wire_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- server/block/redstone_torch.go
Record cqdetdev's initial redstone refactor commit in this PR branch ancestry while keeping the reviewed redstone-core tree unchanged. Constraint: Preserve contributor credit without replacing the current reviewed implementation. Rejected: Replaying the branch content over current work | it would reintroduce already-reviewed older refactor code. Confidence: high Scope-risk: narrow Directive: Treat this merge as history-only unless intentionally revisiting the older refactor content. Tested: Pre-merge and post-merge tree hash both 14cf5e1; 2a409ac is an ancestor of HEAD. Not-tested: Tests not rerun for the merge commit because the tree is byte-identical to the previously tested head.
Table-drive repeated redstone test cases and remove one duplicate burned-out torch self-recovery check while preserving the same behavioral coverage. Constraint: Keep regression coverage for Cursor findings and Bedrock parity cases while reducing review noise. Rejected: Table-driving the larger end-to-end burnout scheduler file | those cases use distinct setup and timing paths, and forcing one table would obscure the behavior under test. Confidence: high Scope-risk: narrow Directive: Prefer adding new redstone regressions as cases in existing tables when setup and assertions match. Tested: git diff --check; go test -count=1 ./server/block ./server/world; go test ./... Not-tested: Benchmark run, since this only refactors tests.
There was a problem hiding this comment.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
❌ Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.
Reviewed by Cursor Bugbot for commit b0d1c69. Configure here.
if !e.redstoneUpdateAllowed(tx, update) {
	return
}
}
There was a problem hiding this comment.
Engine handler fires on every context-action evaluation regardless
Low Severity
When a block implements `RedstonePowerContextAction` (like `RedstoneTorch`), `shouldRunAction` is unconditionally true. This means the `HandleRedstoneUpdate` handler is dispatched on every dirty evaluation of that block, even when `oldPower == newPower` and nothing has changed. For worlds with many torches and active handler implementations, this generates a large volume of no-op handler events each tick.
Reviewed by Cursor Bugbot for commit b0d1c69. Configure here.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Note
High Risk
High risk because this replaces the core redstone propagation model and public handler API, which can affect many block interactions, scheduling, and plugin integrations even when individual block changes are small.
Overview
Introduces a new deterministic, graph-based redstone engine in `server/world/redstone.go` and wires it into world ticking (`World.redstone`, `tick.go`), replacing the legacy `Conductor`/`RedstoneUpdater` model and removing the old redstone BFS/update helpers (`server/block/redstone.go`, `server/world/redstone/state.go`).
Updates block implementations (notably `RedstoneWire`, `RedstoneTorch`, `Lever`, `Note`, `RedstoneBlock`, `TNT`, `Glowstone`) to the new `world.Redstone*` interfaces, including new non-conductive semantics (`RedstoneNonConductive`), deferred post-update effects (note block), torch inversion/burnout behavior via the engine, and TNT priming as a `RedstonePowerAction`.
Changes the redstone event hook from `HandleRedstoneUpdate(ctx, pos)` to `HandleRedstoneUpdate(ctx, update world.RedstoneUpdate)` with richer context/cancellation semantics, adjusts scheduling behavior in `scheduledTickQueue`, and adds extensive redstone unit/integration tests plus a benchmark to validate propagation, burnout recovery, and oscillation regressions.
Reviewed by Cursor Bugbot for commit f693714. Bugbot is set up for automated code reviews on this repo. Configure here.
Summary by CodeRabbit