Transport endpoint is not connected, peer_addr() unwrap panics #461

Closed · Zennii opened this issue Aug 16, 2020 · 6 comments · Fixed by #497
Zennii commented Aug 16, 2020

I received this error after letting a server run overnight while 3 clients made requests every 60 seconds. Interestingly, the errors peer_addr() can return are undocumented in the std lib, and I'm not sure what causes this one in this scenario (maybe the connection drops out really fast, before it's handled? I'd like to hear more about this error if anyone knows). I thought I'd report it in case you wanted to reconsider this unwrap call and handle the error instead of panicking.

I'm going to poke around with it myself and see if I can reproduce it and have the server continue on with its life without crashing, as that's important for my use case. I'll give any further details if it happens again. This is using gotham pulled from this repo some time near the beginning of August 2020.

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 107, kind: NotConnected, message: "Transport endpoint is not connected" }', /home/user/rustwww/test/gotham/gotham/src/lib.rs:118:24
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.44/src/backtrace/libunwind.rs:86
   1: backtrace::backtrace::trace_unsynchronized
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.44/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src/libstd/sys_common/backtrace.rs:78
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src/libstd/sys_common/backtrace.rs:59
   4: core::fmt::write
             at src/libcore/fmt/mod.rs:1063
   5: std::io::Write::write_fmt
             at src/libstd/io/mod.rs:1426
   6: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:62
   7: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:49
   8: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:204
   9: std::panicking::default_hook
             at src/libstd/panicking.rs:224
  10: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:470
  11: rust_begin_unwind
             at src/libstd/panicking.rs:378
  12: core::panicking::panic_fmt
             at src/libcore/panicking.rs:85
  13: core::option::expect_none_failed
             at src/libcore/option.rs:1211
  14: <futures_util::stream::stream::for_each::ForEach<St,Fut,F> as core::future::future::Future>::poll
  15: <std::future::GenFuture<T> as core::future::future::Future>::poll
  16: tokio::runtime::enter::Enter::block_on
  17: tokio::runtime::thread_pool::ThreadPool::block_on
  18: tokio::runtime::context::enter
  19: tokio::runtime::Runtime::block_on
  20: gotham::plain::start
  21: test::main
  22: std::rt::lang_start::{{closure}}
  23: std::rt::lang_start_internal::{{closure}}
             at src/libstd/rt.rs:52
  24: std::panicking::try::do_call
             at src/libstd/panicking.rs:303
  25: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:86
  26: std::panicking::try
             at src/libstd/panicking.rs:281
  27: std::panic::catch_unwind
             at src/libstd/panic.rs:394
  28: std::rt::lang_start_internal
             at src/libstd/rt.rs:51
  29: main
  30: __libc_start_main
  31: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
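
For context, the failing pattern boils down to calling unwrap() on peer_addr() in the accept path. Here is a minimal sketch of that pattern (illustrative only, not gotham's actual code at lib.rs:118): if the client disconnects between accept() and peer_addr(), the call fails with os error 107 (NotConnected) and the unwrap panics.

```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let stream = stream?;
        // If the peer has already reset the connection by the time we get
        // here, peer_addr() fails with os error 107 (NotConnected) and
        // unwrap() turns that into a panic that takes down the accept loop.
        let addr = stream.peer_addr().unwrap();
        println!("connection from {}", addr);
    }
    Ok(())
}
```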
Zennii commented Aug 16, 2020

It has occurred again. I replaced the unwrap with a match that does nothing on this specific error, which is fine for my use case and doesn't need further consideration. The server seems to continue on like it never happened. None of my client processes returned anything other than 200 statuses, so I'm unsure where this error is coming from or why it doesn't seem to affect the clients at all; I'd have expected one of their requests to be left unhandled or something. I didn't expect to see the error happen again so soon.
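
A minimal sketch of the workaround described above, assuming a helper wrapping the peer_addr() call (the function name is illustrative, not actual gotham code):

```rust
use std::io::ErrorKind;
use std::net::{SocketAddr, TcpStream};

// Hypothetical helper: treat NotConnected as "peer already gone" and skip
// the connection instead of panicking; any other error still panics.
fn try_peer_addr(stream: &TcpStream) -> Option<SocketAddr> {
    match stream.peer_addr() {
        Ok(addr) => Some(addr),
        Err(e) if e.kind() == ErrorKind::NotConnected => None,
        Err(e) => panic!("unexpected peer_addr() error: {}", e),
    }
}
```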

sezna commented Aug 26, 2020

It looks like whatever errors rise up from the mio stack are unwrapped in gotham. A few things could be going wrong, but they all arise from tokio, mio, or std::net, so the best I think we can do is bubble the error up instead of unwrapping.
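
A rough sketch of that direction (illustrative only, not the actual change in #497): return an io::Result instead of unwrapping, so the caller can log a failed peer_addr() and move on.

```rust
use std::io;
use std::net::{SocketAddr, TcpStream};

// Propagate the error with `?` so the caller decides whether a failed
// peer_addr() is fatal, rather than unwrapping deep in the accept loop.
fn peer_addr_of(stream: &TcpStream) -> io::Result<SocketAddr> {
    let addr = stream.peer_addr()?;
    Ok(addr)
}
```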

sezna added the refactor label Aug 26, 2020
sezna added this to the 0.6 milestone Aug 26, 2020
msrd0 added the bug label Aug 26, 2020
msrd0 commented Nov 6, 2020

Hi @Zennii, sorry for the delay. Can you test #497 and see if that fixes this issue?

Zennii commented Nov 6, 2020

I'll try to give it a go this weekend and see if it comes up.

Zennii commented Nov 9, 2020

@msrd0 So far so good. I've run it for a bit over a day and haven't seen that particular error (or anything else, really) pop up. I'll leave it running and report back if I see anything.

msrd0 commented Nov 11, 2020

Thanks for testing. If this happens again, please reopen this issue.
