
Proposal to change async packet processing logic #2503

Merged

Conversation

@Ingrim4 Ingrim4 (Contributor) commented Aug 7, 2023

I was recently confronted with an incompatibility between my plugin Orebfuscator and RealisticSeasons. Both plugins use async packet processing and delay the packet events (i.e. increase the processing delay). The way multiple async packet listeners are currently handled is as follows (see the sketch after the list):

  1. get all async listeners for the packet type
  2. create an iterator for said list and attach it to the event
  3. enqueue packet into first listener of iterator
  4. process packet (sync only)
  5. enqueue packet into next listener of iterator
  6. process packet (sync only)
  7. goto step 5 if iterator isn't empty
  8. mark the processing slot as free and try to send the packet once the processing delay hits zero
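
A rough sketch of this flow (all class and method names here are made up for illustration and do not mirror ProtocolLib's internals):

```java
import java.util.Iterator;

// Hypothetical model of the current hand-off between async listeners (illustration only).
final class CurrentChainSketch {

    interface AsyncListener {
        // runs the listener's synchronous onPacketSending part for this packet
        void process(Object packetEvent);
    }

    // steps 3-7 above: each listener is handed the packet as soon as the previous
    // listener's synchronous call returns, regardless of the processing delay, so two
    // listeners that both delayed the packet end up working on it in parallel.
    static void runChain(Object packetEvent, Iterator<AsyncListener> traversal) {
        while (traversal.hasNext()) {
            traversal.next().process(packetEvent); // sync part only; async work keeps running
        }
        // step 8: the processing slot is freed; the packet is sent once its delay hits zero
    }
}
```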

This means that if the first listener increases the processing delay, the second listener will process the same packet in parallel. The only way listeners can acquire exclusive access to a packet is via the processingLock object of the packet's AsyncMarker. This way of handling multi-threaded access to the packet event isn't very convenient, since it can lead to unnecessarily blocked threads (if a listener holds the lock for the entire processing step) and to potential deadlocks if multiple packets are needed to process a single packet.

This is especially harmful to the performance of plugins that process the packet in their own scheduler system (Orebfuscator does that because we need to jump back and forth between the processing thread and the main thread in some instances), because it can clog up the entire scheduler (particularly if a plugin needs a long time to process a packet). Furthermore, there is currently a way for badly written plugins to prematurely send a packet by decrementing the processing delay multiple times, without giving other listeners time to process the packet.
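
For context, this is roughly what a delaying async listener looks like today with ProtocolLib's public async API. The packet type, plugin instance and executor are placeholders, and the getProcessingLock() accessor is assumed from the processingLock field mentioned above:

```java
import com.comphenix.protocol.PacketType;
import com.comphenix.protocol.ProtocolLibrary;
import com.comphenix.protocol.ProtocolManager;
import com.comphenix.protocol.events.ListenerPriority;
import com.comphenix.protocol.events.PacketAdapter;
import com.comphenix.protocol.events.PacketEvent;
import org.bukkit.plugin.Plugin;

import java.util.concurrent.ExecutorService;

final class DelayingListenerSketch {

    static void register(Plugin plugin, ExecutorService workerPool) {
        ProtocolManager manager = ProtocolLibrary.getProtocolManager();

        manager.getAsynchronousManager().registerAsyncHandler(
            new PacketAdapter(plugin, ListenerPriority.NORMAL, PacketType.Play.Server.MAP_CHUNK) {
                @Override
                public void onPacketSending(PacketEvent event) {
                    // hold the packet back while we process it on our own scheduler
                    event.getAsyncMarker().incrementProcessingDelay();

                    workerPool.submit(() -> {
                        // with the current chain, another listener may already be touching the
                        // same packet; the only guard is the marker's processing lock (accessor
                        // name assumed), which is easy to misuse or to deadlock on
                        Object lock = event.getAsyncMarker().getProcessingLock();
                        synchronized (lock != null ? lock : event.getAsyncMarker()) {
                            // ... rewrite the packet ...
                        }
                        // hand the packet back once we are done
                        manager.getAsynchronousManager().signalPacketTransmission(event);
                    });
                }
            }).start();
    }
}
```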

That's why I propose the simple changes in this PR, which effectively change steps 4 and 6 to also process the packet async (with the processing delay reduced to zero) before calling the next listener. This way only one listener at a time can process a packet, meaning the packet would always be "stable". Another way of solving this problem would be a new method for listeners to explicitly tell ProtocolLib to process this packet "sequentially", so that only this listener can process it and other listeners only get it after it's done. I would also be open to other solutions and would gladly implement them.
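
A minimal sketch of the proposed sequential hand-off, using the same made-up model as above rather than the actual diff:

```java
import java.util.Iterator;
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch of the proposed ordering: the next listener is only called once the
// current one signals that all of its (possibly asynchronous) work is finished, so exactly
// one listener works on the packet at a time.
final class ProposedChainSketch {

    interface AsyncListener {
        // the listener counts the latch down once it is completely done with the packet,
        // including any work it moved onto its own scheduler
        void process(Object packetEvent, CountDownLatch done);
    }

    static void runChain(Object packetEvent, Iterator<AsyncListener> traversal) throws InterruptedException {
        while (traversal.hasNext()) {
            CountDownLatch done = new CountDownLatch(1);
            traversal.next().process(packetEvent, done);
            done.await(); // wait until this listener is fully done, then move on
        }
        // all listeners finished: free the processing slot and send the packet
    }
}
```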

@dmulloy2 dmulloy2 (Owner) commented

@derklaro thoughts on this?

@derklaro derklaro (Contributor) left a comment

Based on the description and by looking at the code this proposal lgtm. From the description I also got the feeling that using a more reactive approach here would be way better (kinda like netty does it: the handler returns a future and when the future completes the next handler is called, and so on - another approach would be via a context that knows the next handler to call). This would give plugins the ability to process the packets in their own scheduler system and take their time before passing to the next handler, and even cancelling would be easy: just don't pass it to the next handler 🤷

But this is something for later, for now this is fine 😄
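
For the record, the future-based chaining described above could look roughly like this (a sketch of a possible API, not something ProtocolLib exposes today):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of the reactive approach: every handler returns a future, the next
// handler only runs once that future completes, and never completing it drops the packet.
final class ReactiveChainSketch {

    interface AsyncPacketHandler {
        CompletableFuture<Object> handle(Object packetEvent);
    }

    static CompletableFuture<Object> runChain(Object packetEvent, List<AsyncPacketHandler> handlers) {
        CompletableFuture<Object> chain = CompletableFuture.completedFuture(packetEvent);
        for (AsyncPacketHandler handler : handlers) {
            // thenCompose guarantees the next handler only sees the packet after the previous
            // future has completed, regardless of which scheduler the handler used internally
            chain = chain.thenCompose(handler::handle);
        }
        return chain; // send the packet when this future completes (or never, to cancel)
    }
}
```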

@dmulloy2 dmulloy2 merged commit f0401ac into dmulloy2:master Aug 27, 2023
3 checks passed
@Ingrim4 Ingrim4 deleted the fix-async-packet-processing-chain branch September 27, 2023 18:28