Add AbstractSkippingEnvelopeHandler so all handlers will skip BAD_PACKET #218

Merged
zilm13 merged 2 commits into Consensys:master from zilm13:skip-bad-packet
Apr 23, 2026

Conversation

@zilm13
Contributor

@zilm13 zilm13 commented Apr 22, 2026

PR Description

We are seeing errors like this in the logs:

```
{"@timestamp":"2026-04-22T03:47:37,098","level":"WARN","thread":"nioEventLoopGroup-4-1","class":"PipelineImpl","message":"Unexpected error in pipeline handler UnknownPacketTagToSender"}
java.lang.ClassCastException: class org.ethereum.beacon.discovery.packet.impl.WhoAreYouPacketImpl cannot be cast to class org.ethereum.beacon.discovery.packet.OrdinaryMessagePacket (org.ethereum.beacon.discovery.packet.impl.WhoAreYouPacketImpl and org.ethereum.beacon.discovery.packet.OrdinaryMessagePacket are in unnamed module of loader 'app')
	at org.ethereum.beacon.discovery.pipeline.handler.UnknownPacketTagToSender.handle(UnknownPacketTagToSender.java:46)
	at org.ethereum.beacon.discovery.pipeline.PipelineImpl.lambda$build$0(PipelineImpl.java:36)
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:489)
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:503)
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:503)
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:503)
	at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replayNormal(FluxReplay.java:877)
	at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replay(FluxReplay.java:965)
	at reactor.core.publisher.ReplayProcessor.tryEmitNext(ReplayProcessor.java:508)
	at reactor.core.publisher.InternalManySink.emitNext(InternalManySink.java:27)
	at reactor.core.publisher.ReplayProcessor.onNext(ReplayProcessor.java:495)
	at reactor.core.publisher.FluxCreate$IgnoreSink.next(FluxCreate.java:704)
	at reactor.core.publisher.FluxCreate$SerializedFluxSink.next(FluxCreate.java:163)
	at org.ethereum.beacon.discovery.pipeline.PipelineImpl.push(PipelineImpl.java:63)
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:489)
	at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replayNormal(FluxReplay.java:877)
	at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replay(FluxReplay.java:965)
	at reactor.core.publisher.ReplayProcessor.tryEmitNext(ReplayProcessor.java:508)
	at reactor.core.publisher.InternalManySink.emitNext(InternalManySink.java:27)
	at reactor.core.publisher.ReplayProcessor.onNext(ReplayProcessor.java:495)
	at reactor.core.publisher.FluxCreate$IgnoreSink.next(FluxCreate.java:704)
	at reactor.core.publisher.FluxCreate$SerializedFluxSink.next(FluxCreate.java:163)
	at org.ethereum.beacon.discovery.network.IncomingMessageSink.channelRead0(IncomingMessageSink.java:31)
	at org.ethereum.beacon.discovery.network.IncomingMessageSink.channelRead0(IncomingMessageSink.java:20)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:107)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355)
	at io.netty.handler.traffic.AbstractTrafficShapingHandler.channelRead(AbstractTrafficShapingHandler.java:506)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1429)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:918)
	at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:100)
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.handle(AbstractNioChannel.java:445)
	at io.netty.channel.nio.NioIoHandler$DefaultNioRegistration.handle(NioIoHandler.java:388)
	at io.netty.channel.nio.NioIoHandler.processSelectedKey(NioIoHandler.java:596)
	at io.netty.channel.nio.NioIoHandler.processSelectedKeysOptimized(NioIoHandler.java:571)
	at io.netty.channel.nio.NioIoHandler.processSelectedKeys(NioIoHandler.java:512)
	at io.netty.channel.nio.NioIoHandler.run(NioIoHandler.java:484)
	at io.netty.channel.SingleThreadIoEventLoop.runIo(SingleThreadIoEventLoop.java:225)
	at io.netty.channel.SingleThreadIoEventLoop.run(SingleThreadIoEventLoop.java:196)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:1195)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:1583)
```

This happens because the WhoAreYou handler in the pipeline was unable to parse a WHOAREYOU packet and marked it with BAD_PACKET. The packet then reaches the UnknownPacketTagToSender handler, which is next in the pipeline, does not expect this packet type at all, and throws an exception. The proper fix, in my view, is to make every handler ignore any packet marked as BAD_PACKET, except the bad packet handler itself. That idea is implemented in this PR.

Fixed Issue(s)


Note

Low Risk
Behavioral change is narrowly scoped to pipeline error handling (early-exit on BAD_PACKET) and is covered by new unit tests; minimal impact on core protocol logic beyond skipping post-error processing.

Overview
Introduces AbstractSkippingEnvelopeHandler, a new base handler that short-circuits processing when an envelope has been marked with Field.BAD_PACKET.

Updates the main discovery pipeline handlers (packet parsing, session resolution, dispatch, and message/task handlers) to extend this base class and move their logic into handlePacket, so only the terminal BadPacketHandler (still implementing EnvelopeHandler directly) continues to observe and log bad packets.

Adds unit tests verifying that once BAD_PACKET is set, downstream skipping handlers are not invoked, while non-skipping terminal handlers still run and healthy envelopes continue through the chain.
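The mechanism described above can be sketched as follows. This is a minimal, self-contained illustration only: the real Envelope, Field, and EnvelopeHandler types in the discovery library have richer APIs, and the simplified stand-ins below are hypothetical.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical, simplified stand-ins for the library's pipeline types.
enum Field { BAD_PACKET }

class Envelope {
  private final Set<Field> fields = new HashSet<>();

  void put(Field field) { fields.add(field); }

  boolean contains(Field field) { return fields.contains(field); }
}

interface EnvelopeHandler {
  void handle(Envelope envelope);
}

// The skipping base class: every subclass gets the early exit for free.
abstract class AbstractSkippingEnvelopeHandler implements EnvelopeHandler {
  @Override
  public final void handle(Envelope envelope) {
    if (envelope.contains(Field.BAD_PACKET)) {
      return; // an upstream handler flagged this packet; do nothing
    }
    handlePacket(envelope);
  }

  // Subclasses move their former handle() logic here.
  protected abstract void handlePacket(Envelope envelope);
}

class SkipBadPacketDemo {
  public static void main(String[] args) {
    StringBuilder log = new StringBuilder();
    EnvelopeHandler handler = new AbstractSkippingEnvelopeHandler() {
      @Override
      protected void handlePacket(Envelope envelope) { log.append("handled;"); }
    };

    Envelope bad = new Envelope();
    bad.put(Field.BAD_PACKET);
    handler.handle(bad); // skipped: handlePacket is never invoked

    handler.handle(new Envelope()); // healthy envelope is processed

    System.out.println(log); // prints "handled;"
  }
}
```

Because handle() is final in the base class, a subclass cannot accidentally bypass the BAD_PACKET check, while a terminal handler that must see bad packets (like BadPacketHandler) simply implements EnvelopeHandler directly.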

Reviewed by Cursor Bugbot for commit 5b7468e.

@zilm13 zilm13 requested a review from a team as a code owner April 22, 2026 14:27
@github-actions

github-actions Bot commented Apr 22, 2026

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@zilm13
Contributor Author

zilm13 commented Apr 22, 2026

I have read the CLA Document and I hereby sign the CLA

github-actions Bot added a commit that referenced this pull request Apr 22, 2026
@zilm13
Contributor Author

zilm13 commented Apr 22, 2026

recheck

Contributor

@gfukushima gfukushima left a comment

lgtm

@zilm13 zilm13 merged commit c289cc1 into Consensys:master Apr 23, 2026
5 checks passed
@zilm13 zilm13 deleted the skip-bad-packet branch April 23, 2026 08:08
@github-actions github-actions Bot locked and limited conversation to collaborators Apr 23, 2026
