
It will take a long time to turn off the HTTP service #3188

Closed
Cloud33 opened this issue Mar 30, 2023 · 2 comments
Labels
C-bug Category: bug. Something is wrong. This is bad!

Comments


Cloud33 commented Mar 30, 2023

Version
List the version(s) of hyper, and any relevant hyper dependency (such as h2 if this is related to HTTP/2).

axum = { version = "0.6.12"}
hyper = {version = "0.14.25"}

Platform
The output of uname -a (UNIX), or version and 32 or 64-bit (Windows)

64-bit (Windows)

Description

I reported this issue at tokio-rs/axum#991, but axum's shutdown behavior comes from hyper, so I am briefly explaining the issue here as well. I made a very simple API derived from the hello-world and graceful-shutdown examples. If I only start the HTTP server, it shuts down immediately, which is very fast; but if a request has been made, it gets stuck here after shutdown:

2023-03-30 17:14:59.9816 TRACE hyper::proto::h1::conn: shut down IO complete
2023-03-30 17:14:59.9818 TRACE mio::poll: deregistering event source from poller
2023-03-30 17:17:34.4422 TRACE hyper::proto::h1::conn: Conn::read_head
2023-03-30 17:17:34.4427 TRACE hyper::proto::h1::io: received 0 bytes
2023-03-30 17:17:34.4430 TRACE hyper::proto::h1::io: parse eof

Total waiting time: 17:14:59 ~ 17:17:34 ≈ 155 s.

The full log:

2023-03-30 17:14:59.9785  INFO fly_api: signal received, starting graceful shutdown
2023-03-30 17:14:59.9788 DEBUG hyper::server::shutdown: signal received, starting graceful shutdown
2023-03-30 17:14:59.9791 TRACE mio::poll: deregistering event source from poller
2023-03-30 17:14:59.9792 TRACE hyper::proto::h1::conn: disable_keep_alive; closing idle connection
2023-03-30 17:14:59.9792 TRACE hyper::proto::h1::conn: disable_keep_alive; in-progress connection
2023-03-30 17:14:59.9797 TRACE hyper::proto::h1::conn: State::close()
2023-03-30 17:14:59.9799 TRACE hyper::proto::h1::conn: Conn::read_head
2023-03-30 17:14:59.9802 TRACE hyper::proto::h1::conn: State::close_read()
2023-03-30 17:14:59.9806 TRACE hyper::proto::h1::conn: State::close_write()
2023-03-30 17:14:59.9804 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Disabled }
2023-03-30 17:14:59.9808 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
2023-03-30 17:14:59.9816 TRACE hyper::proto::h1::conn: shut down IO complete
2023-03-30 17:14:59.9818 TRACE mio::poll: deregistering event source from poller
2023-03-30 17:17:34.4422 TRACE hyper::proto::h1::conn: Conn::read_head
2023-03-30 17:17:34.4427 TRACE hyper::proto::h1::io: received 0 bytes
2023-03-30 17:17:34.4430 TRACE hyper::proto::h1::io: parse eof
2023-03-30 17:17:34.4433 TRACE hyper::proto::h1::conn: State::close_read()
2023-03-30 17:17:34.4436 DEBUG hyper::proto::h1::conn: read eof
2023-03-30 17:17:34.4439 TRACE hyper::proto::h1::conn: State::close_write()
2023-03-30 17:17:34.4442 TRACE hyper::proto::h1::conn: State::close_read()
2023-03-30 17:17:34.4446 TRACE hyper::proto::h1::conn: State::close_write()
2023-03-30 17:17:34.4450 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
2023-03-30 17:17:34.4456 TRACE hyper::proto::h1::conn: shut down IO complete
2023-03-30 17:17:34.4459 TRACE mio::poll: deregistering event source from poller
2023-03-30 17:17:34.4467  INFO fly_api: close http service
2023-03-30 17:17:34.4471  INFO fly_api: shutdown start
2023-03-30 17:17:34.4474  INFO fly_api: shutdown end

I tried this code:

pub async fn serve(app_port: u16, client: Client) {
    // build our application with a route
    let app = Router::new()
        .route("/", get(handler))
        .layer(Extension(ApiContext { client }))
        .layer(TraceLayer::new_for_http());

    // add a fallback service for handling routes to unknown paths
    let app = app.fallback(handler_404);
    // run it
    let addr = SocketAddr::from(([127, 0, 0, 1], app_port));

    tracing::debug!("listening on {}", addr);
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .with_graceful_shutdown(shutdown_signal())
        .await
        .unwrap();
    tracing::info!("close http service");
}

// 404
async fn handler_404(method: Method, uri: Uri) -> impl IntoResponse {
    (
        StatusCode::NOT_FOUND,
        format!("Nothing to see at {} {}", method, uri),
    )
}

async fn handler() -> Html<&'static str> {
    Html("<h1>Hello, World!</h1>")
}

// shutdown signal
async fn shutdown_signal() {
    let ctrl_c = async {
        tokio::signal::ctrl_c()
            .await
            .expect("failed to install Ctrl+C handler");
    };

    #[cfg(unix)]
    let terminate = async {
        tokio::signal::unix::signal(tokio::signal::unix::SignalKind::terminate())
            .expect("failed to install signal handler")
            .recv()
            .await;
    };

    #[cfg(not(unix))]
    let terminate = std::future::pending::<()>();

    tokio::select! {
        _ = ctrl_c => {},
        _ = terminate => {},
    }
    tracing::info!("signal received, starting graceful shutdown");
}
Cloud33 added the C-bug label Mar 30, 2023
seanmonstar (Member) commented:
2023-03-30 17:14:59.9792 TRACE hyper::proto::h1::conn: disable_keep_alive; in-progress connection

2023-03-30 17:14:59.9804 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Disabled }

This looks to me like there is a connection that was accepted but hasn't sent any headers yet. hyper will currently wait for that connection to send its first request. That's discussed in #2730.

Besides probably fixing that in #2730, these would also be relevant:

  • Providing an idle/read-header timeout.
  • As described in Make a GracefulShutdown helper in hyper-util #2862, after triggering a graceful shutdown, an application should likely always start a timer that forces an abrupt shutdown, because even if the previous two points are fixed, a constantly busy connection will keep the server alive anyway.
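The first bullet, an idle/read-header timeout, can be sketched with std networking alone. This is only an illustration of the idea, not hyper's implementation; the function name here is hypothetical:

```rust
use std::io::Read;
use std::net::{TcpListener, TcpStream};
use std::time::Duration;

// Accept one connection and try to read its request head, giving up
// after `timeout` if the client never sends anything (an idle keep-alive
// connection like the one stalling the shutdown in the logs above).
fn accept_with_header_timeout(timeout: Duration) -> std::io::Result<&'static str> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    // A client that connects but never sends request headers.
    let _idle = TcpStream::connect(addr)?;
    let (mut conn, _) = listener.accept()?;
    conn.set_read_timeout(Some(timeout))?;
    let mut buf = [0u8; 1024];
    match conn.read(&mut buf) {
        Ok(0) => Ok("peer closed"),
        Ok(_) => Ok("got bytes"),
        // Unix reports WouldBlock on a read timeout, Windows reports TimedOut.
        Err(e) if matches!(
            e.kind(),
            std::io::ErrorKind::WouldBlock | std::io::ErrorKind::TimedOut
        ) => Ok("timed out"),
        Err(e) => Err(e),
    }
}

fn main() -> std::io::Result<()> {
    println!("{}", accept_with_header_timeout(Duration::from_millis(100))?);
    Ok(())
}
```

Newer hyper 0.14 releases expose an `http1_header_read_timeout` option on the server connection builder that applies this bound inside the connection itself, if your version has it.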

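The second bullet, forcing an abrupt shutdown after a deadline, can be sketched like this (std-only; a real tokio server would instead race the serve future against `tokio::time::timeout`, and the names below are hypothetical):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run a (possibly hanging) graceful-shutdown routine, but fall back to
// an abrupt shutdown if it does not finish before `deadline`.
fn shutdown_with_deadline(
    graceful: impl FnOnce() + Send + 'static,
    deadline: Duration,
) -> &'static str {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        graceful();
        let _ = tx.send(());
    });
    match rx.recv_timeout(deadline) {
        Ok(()) => "graceful", // shutdown completed in time
        Err(_) => "forced",   // deadline hit: drop remaining connections
    }
}

fn main() {
    // A shutdown that finishes quickly completes gracefully.
    println!("{}", shutdown_with_deadline(|| (), Duration::from_secs(1)));
    // One stuck on an idle connection (simulated by sleeping) gets forced.
    println!("{}", shutdown_with_deadline(
        || thread::sleep(Duration::from_secs(5)),
        Duration::from_millis(50),
    ));
}
```

With such a timer in place, an idle connection that never wakes up can delay shutdown by at most the chosen deadline rather than the 150+ seconds seen in the logs.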
seanmonstar closed this as not planned Mar 30, 2023
Cloud33 (Author) commented Mar 31, 2023

I tested this version again and found a key detail: if there are no requests during shutdown, the shutdown is slow, but if new requests arrive during shutdown, it is fast. What is the reason?

Shutdown with no new requests: very slow, took around 100 s. Log:

2023-03-31 10:01:31.0933  INFO shutdown_demo: signal received, starting graceful shutdown
2023-03-31 10:01:31.0941 DEBUG hyper::server::shutdown: signal received, starting graceful shutdown
2023-03-31 10:01:31.0950 TRACE mio::poll: deregistering event source from poller
2023-03-31 10:01:31.0951 TRACE hyper::proto::h1::conn: disable_keep_alive; in-progress connection
2023-03-31 10:01:31.0951 TRACE hyper::proto::h1::conn: disable_keep_alive; in-progress connection
2023-03-31 10:01:31.0953 TRACE hyper::proto::h1::conn: disable_keep_alive; closing idle connection
2023-03-31 10:01:31.0964 TRACE hyper::proto::h1::conn: Conn::read_head
2023-03-31 10:01:31.0971 TRACE hyper::proto::h1::conn: Conn::read_head
2023-03-31 10:01:31.0979 TRACE hyper::proto::h1::conn: State::close()
2023-03-31 10:01:31.0987 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Disabled }
2023-03-31 10:01:31.0992 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Disabled }
2023-03-31 10:01:31.0998 TRACE hyper::proto::h1::conn: State::close_read()
2023-03-31 10:01:31.1023 TRACE hyper::proto::h1::conn: State::close_write()
2023-03-31 10:01:31.1030 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
2023-03-31 10:01:31.1042 TRACE hyper::proto::h1::conn: shut down IO complete
2023-03-31 10:01:31.1050 TRACE mio::poll: deregistering event source from poller
2023-03-31 10:03:07.5054 TRACE hyper::proto::h1::conn: Conn::read_head
2023-03-31 10:03:07.5058 TRACE hyper::proto::h1::io: received 0 bytes
2023-03-31 10:03:07.5061 TRACE hyper::proto::h1::io: parse eof
2023-03-31 10:03:07.5065 TRACE hyper::proto::h1::conn: State::close_read()
2023-03-31 10:03:07.5068 DEBUG hyper::proto::h1::conn: read eof
2023-03-31 10:03:07.5071 TRACE hyper::proto::h1::conn: State::close_write()
2023-03-31 10:03:07.5074 TRACE hyper::proto::h1::conn: State::close_read()
2023-03-31 10:03:07.5077 TRACE hyper::proto::h1::conn: State::close_write()
2023-03-31 10:03:07.5079 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
2023-03-31 10:03:07.5083 TRACE hyper::proto::h1::conn: shut down IO complete
2023-03-31 10:03:07.5086 TRACE mio::poll: deregistering event source from poller
2023-03-31 10:03:07.5089 TRACE hyper::proto::h1::conn: Conn::read_head
2023-03-31 10:03:07.5091 TRACE hyper::proto::h1::io: received 0 bytes
2023-03-31 10:03:07.5093 TRACE hyper::proto::h1::io: parse eof
2023-03-31 10:03:07.5096 TRACE hyper::proto::h1::conn: State::close_read()
2023-03-31 10:03:07.5098 DEBUG hyper::proto::h1::conn: read eof
2023-03-31 10:03:07.5100 TRACE hyper::proto::h1::conn: State::close_write()
2023-03-31 10:03:07.5102 TRACE hyper::proto::h1::conn: State::close_read()
2023-03-31 10:03:07.5105 TRACE hyper::proto::h1::conn: State::close_write()
2023-03-31 10:03:07.5107 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
2023-03-31 10:03:07.5111 TRACE hyper::proto::h1::conn: shut down IO complete
2023-03-31 10:03:07.5115 TRACE mio::poll: deregistering event source from poller
2023-03-31 10:03:07.5120  INFO shutdown_demo: close http service

Shutdown with new requests: fast, only about 4 s. Log:

2023-03-31 10:04:10.0580  INFO shutdown_demo: signal received, starting graceful shutdown
2023-03-31 10:04:10.0583 DEBUG hyper::server::shutdown: signal received, starting graceful shutdown
2023-03-31 10:04:10.0587 TRACE mio::poll: deregistering event source from poller
2023-03-31 10:04:10.0587 TRACE hyper::proto::h1::conn: disable_keep_alive; in-progress connection
2023-03-31 10:04:10.0587 TRACE hyper::proto::h1::conn: disable_keep_alive; closing idle connection
2023-03-31 10:04:10.0587 TRACE hyper::proto::h1::conn: disable_keep_alive; in-progress connection
2023-03-31 10:04:10.0592 TRACE hyper::proto::h1::conn: Conn::read_head
2023-03-31 10:04:10.0595 TRACE hyper::proto::h1::conn: State::close()
2023-03-31 10:04:10.0598 TRACE hyper::proto::h1::conn: Conn::read_head
2023-03-31 10:04:10.0600 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Disabled }
2023-03-31 10:04:10.0602 TRACE hyper::proto::h1::conn: State::close_read()
2023-03-31 10:04:10.0605 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Disabled }
2023-03-31 10:04:10.0611 TRACE hyper::proto::h1::conn: State::close_write()
2023-03-31 10:04:10.0617 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
2023-03-31 10:04:10.0622 TRACE hyper::proto::h1::conn: shut down IO complete
2023-03-31 10:04:10.0625 TRACE mio::poll: deregistering event source from poller
2023-03-31 10:04:10.8455 TRACE hyper::proto::h1::conn: Conn::read_head
2023-03-31 10:04:10.8463 TRACE hyper::proto::h1::io: received 724 bytes
2023-03-31 10:04:10.8468 TRACE parse_headers: hyper::proto::h1::role: Request.parse bytes=724
2023-03-31 10:04:10.8474 TRACE parse_headers: hyper::proto::h1::role: Request.parse Complete(724)
2023-03-31 10:04:10.8479 DEBUG hyper::proto::h1::io: parsed 15 headers
2023-03-31 10:04:10.8483 DEBUG hyper::proto::h1::conn: incoming body is empty
2023-03-31 10:04:10.8488 TRACE encode_headers: hyper::proto::h1::role: Server::encode status=200, body=Some(Known(22)), req_method=Some(GET)
2023-03-31 10:04:10.8494 TRACE hyper::proto::h1::encode: sized write, len = 22
2023-03-31 10:04:10.8501 TRACE hyper::proto::h1::io: buffer.queue self.len=116 buf.len=22
2023-03-31 10:04:10.8511 DEBUG hyper::proto::h1::io: flushed 138 bytes
2023-03-31 10:04:10.8516 TRACE hyper::proto::h1::conn: State::close()
2023-03-31 10:04:10.8520 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
2023-03-31 10:04:10.8533 TRACE hyper::proto::h1::conn: shut down IO complete
2023-03-31 10:04:10.8538 TRACE mio::poll: deregistering event source from poller
2023-03-31 10:04:14.1184 TRACE hyper::proto::h1::conn: Conn::read_head
2023-03-31 10:04:14.1188 TRACE hyper::proto::h1::io: received 724 bytes
2023-03-31 10:04:14.1194 TRACE parse_headers: hyper::proto::h1::role: Request.parse bytes=724
2023-03-31 10:04:14.1199 TRACE parse_headers: hyper::proto::h1::role: Request.parse Complete(724)
2023-03-31 10:04:14.1206 DEBUG hyper::proto::h1::io: parsed 15 headers
2023-03-31 10:04:14.1210 DEBUG hyper::proto::h1::conn: incoming body is empty
2023-03-31 10:04:14.1215 TRACE encode_headers: hyper::proto::h1::role: Server::encode status=200, body=Some(Known(22)), req_method=Some(GET)
2023-03-31 10:04:14.1219 TRACE hyper::proto::h1::encode: sized write, len = 22
2023-03-31 10:04:14.1221 TRACE hyper::proto::h1::io: buffer.queue self.len=116 buf.len=22
2023-03-31 10:04:14.1225 DEBUG hyper::proto::h1::io: flushed 138 bytes
2023-03-31 10:04:14.1227 TRACE hyper::proto::h1::conn: State::close()
2023-03-31 10:04:14.1230 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
2023-03-31 10:04:14.1235 TRACE hyper::proto::h1::conn: shut down IO complete
2023-03-31 10:04:14.1237 TRACE mio::poll: deregistering event source from poller
2023-03-31 10:04:14.1241  INFO shutdown_demo: close http service

This doesn't seem like normal behavior. Shouldn't the server reject new requests once it is shutting down?
