
Pool::get_conn hangs under contention #65

Closed · jonhoo opened this issue Jul 8, 2019 · 2 comments · Fixed by #92

Comments

jonhoo (Contributor) commented Jul 8, 2019

The following (new) test case occasionally hangs for me:

#[test]
fn can_handle_the_pressure() {
    let mut runtime = tokio::runtime::Runtime::new().unwrap();
    let pool = Pool::new(&**DATABASE_URL);
    for _ in 0..100 {
        use futures::{Sink, Stream};
        // Channel used to observe that every spawned future completed.
        let (tx, rx) = futures::sync::mpsc::unbounded();
        for i in 0..10_000 {
            let pool = pool.clone();
            let tx = tx.clone();
            runtime.spawn(futures::future::lazy(move || {
                // Get a connection and drop it immediately, returning
                // it to the pool without issuing any query.
                pool.get_conn()
                    .map_err(|e| unreachable!("{:?}", e))
                    .and_then(move |_| {
                        tx.send(i).map_err(|e| unreachable!("{:?}", e))
                    })
                    .map(|_| ())
            }));
        }
        // Drop the original sender so rx terminates once every clone
        // (one per spawned future) has been dropped.
        drop(tx);
        runtime.block_on(rx.fold(0, |_, _i| {
            Ok(0)
        })).unwrap();
    }
    drop(pool);
    runtime.shutdown_on_idle().wait().unwrap();
}

It specifically tries to set up contention over the Pool (notice that it doesn't actually issue any queries; it just repeatedly "gets" connections and then returns them immediately), and it seems like occasionally it hits a deadlock. This may be related to #64, but I'm not sure.

jonhoo (Contributor, Author) commented Feb 11, 2020

Interestingly enough, I still see this with a lot of concurrent connection churn, even after #66. I might take a stab at greatly simplifying the connection pooling tomorrow using an mpsc channel + an async mutex around the receiver. Or maybe even just an async mutex with a VecDeque. It'll be worse scalability-wise, but my guess is that you will never really notice in any real workload, since most of the work happens on the connection itself and the overhead of getting the connection will be negligible.
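
A minimal sketch of that simpler design, assuming today's tokio (tokio::sync::Mutex) rather than the futures-0.1 machinery above; Conn, SimplePool, and the method names are illustrative placeholders, not mysql_async's actual API:

use std::collections::VecDeque;
use std::sync::Arc;
use tokio::sync::Mutex;

struct Conn; // stand-in for a real database connection

#[derive(Clone)]
struct SimplePool {
    // All idle connections behind a single async mutex. Contention is
    // limited to lock + pop/push, so the critical section stays tiny.
    idle: Arc<Mutex<VecDeque<Conn>>>,
}

impl SimplePool {
    fn new(capacity: usize) -> Self {
        SimplePool {
            idle: Arc::new(Mutex::new((0..capacity).map(|_| Conn).collect())),
        }
    }

    // Take an idle connection, if any; a real pool would open a new
    // connection or wait here instead of returning None.
    async fn get_conn(&self) -> Option<Conn> {
        self.idle.lock().await.pop_front()
    }

    // Hand the connection back once the caller is done with it.
    async fn return_conn(&self, conn: Conn) {
        self.idle.lock().await.push_back(conn);
    }
}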

Thoughts @blackbeam ?

blackbeam (Owner) commented

Hi!

Well, I have nothing against it, especially if it leads to more stable behavior. Obviously, scalability is worth nothing if the pool simply hangs.
