Worker threads panic when client is disconnected during the same poll cycle as a packet send request #34

Closed
caelunshun opened this issue Aug 5, 2019 · 0 comments · Fixed by #42

Comments

@caelunshun

No description provided.
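No description was provided beyond the title, so the sketch below reconstructs the likely failure mode, assuming the mio-based worker keeps per-client state in a HashMap and unwraps lookups when draining queued messages. All names here (`Message`, `handle_messages`, `clients`) are hypothetical and are not the actual Feather code.

```rust
use std::collections::HashMap;

// Hypothetical messages drained by an IO worker each poll cycle.
enum Message {
    SendPacket { client_id: u32, data: Vec<u8> },
    Disconnect { client_id: u32 },
}

// If a Disconnect and a SendPacket for the same client arrive in the
// same poll cycle, the send handler looks up an entry that was just
// removed and the unwrap panics, killing the worker thread.
fn handle_messages(clients: &mut HashMap<u32, Vec<u8>>, queued: Vec<Message>) {
    for msg in queued {
        match msg {
            Message::Disconnect { client_id } => {
                clients.remove(&client_id);
            }
            Message::SendPacket { client_id, data } => {
                let buf = clients.get_mut(&client_id).unwrap(); // panics
                buf.extend_from_slice(&data);
            }
        }
    }
}
```

A defensive fix is to skip sends for clients that are no longer present (`if let Some(buf) = clients.get_mut(&client_id)`), which is presumably the shape of the temporary fix referenced in the timeline below.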

@caelunshun caelunshun added the `bug` (Something isn't working) label Aug 5, 2019
@caelunshun caelunshun modified the milestones: 0.4, 0.5 Aug 13, 2019
caelunshun added a commit that referenced this issue Sep 14, 2019
This is a temporary fix for panics with the IO worker until
the Tokio rewrite is completed.
caelunshun added a commit that referenced this issue Sep 30, 2019
# Summary
* Replaced the mio-based IO event loop with a Tokio task
* Rewrote the packet encoding/decoding code, which should now be more efficient
* Converted `Packet` to write to `BytesMut` instead of `ByteBuf`; `ByteBuf` has been removed (a sketch of the new write path follows)
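As a rough illustration of the new write path (a sketch under assumptions: the actual trait name and signature in Feather may differ, and a recent version of the `bytes` crate is assumed):

```rust
use bytes::{BufMut, BytesMut};

// Hypothetical shape of the new encoding interface.
trait Packet {
    fn write_to(&self, buf: &mut BytesMut);
}

struct KeepAlive {
    keep_alive_id: u64,
}

impl Packet for KeepAlive {
    fn write_to(&self, buf: &mut BytesMut) {
        // BytesMut grows on demand and can later be frozen into a
        // cheaply cloneable `Bytes` for the socket writer.
        buf.put_u64(self.keep_alive_id);
    }
}
```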

# Motivation
The current network code is a homebrew asynchronous IO event loop built on mio. Numerous issues have surfaced with this implementation, and the code is also quite inefficient (three to five HashMap lookups every time data is received or sent).

Tokio provides its own heavily optimized event loop along with a much easier-to-use interface. Furthermore, it extends async IO beyond the TCP streams themselves: for example, the authentication request to the Mojang API in the initial handler can now run asynchronously, which improves performance somewhat. A sketch of the per-connection task model this enables follows.
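For illustration, the per-connection task model looks roughly like this (a sketch assuming a current Tokio release; `handle_connection` is a placeholder, not Feather's actual login code):

```rust
use tokio::net::{TcpListener, TcpStream};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("0.0.0.0:25565").await?;
    loop {
        let (stream, addr) = listener.accept().await?;
        // Each connection runs in its own lightweight task, so slow
        // work such as the Mojang auth request no longer stalls the
        // event loop for every other client.
        tokio::spawn(async move {
            if let Err(e) = handle_connection(stream).await {
                eprintln!("connection from {} errored: {}", addr, e);
            }
        });
    }
}

async fn handle_connection(_stream: TcpStream) -> std::io::Result<()> {
    // ... handshake, login (including the async Mojang API request), play ...
    Ok(())
}
```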

Ultimately, the decision to switch to Tokio comes down to performance and stability, both of which the switch improves.

# Implications
* Server packet handling code will be unaffected.
* The new networking code uses async/await, which was stabilized a few weeks ago and will ship in stable Rust in early to mid November. In the meantime, compiling Feather will require Rust nightly (see the note after this list).
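For context (an assumption about the toolchains of the time, not part of the PR text): on pre-1.39 nightlies, async/await had to be enabled with a crate-level feature gate, roughly:

```rust
// Crate root on a 2019 nightly toolchain; no longer needed once
// async/await reached stable Rust in 1.39.
#![feature(async_await)]
```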

Fixes #34.
Resolves #35.
Resolves #36.
cheako pushed a commit to cheako/feather that referenced this issue Jan 4, 2020
cheako pushed a commit to cheako/feather that referenced this issue Jan 4, 2020