qp deadlocks in Astro36/rust-pool-benchmark #2

Open
bikeshedder opened this issue May 3, 2022 · 3 comments
Labels: bug (Something isn't working), good first issue (Good for newcomers), help wanted (Extra attention is needed)

Comments

@bikeshedder
Contributor

I haven't been able to figure out what's wrong with it, but I'm curious why it's happening:

I'm opening this as I'm currently evaluating the possibility of going back to a crossbeam-based implementation in deadpool. I only switched to a Mutex<VecDeque> in 0.7 as I saw a negligible performance impact back then. It also made features like resizable pools trivial.
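For reference, the Mutex<VecDeque> approach mentioned above can be sketched in plain std Rust. The type and method names here are illustrative, not deadpool's actual API:

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

/// Illustrative object pool guarded by a single mutex.
/// Resizing is trivial: just push or drop entries under the lock.
struct SimplePool<T> {
    items: Mutex<VecDeque<T>>,
}

impl<T> SimplePool<T> {
    fn new(items: impl IntoIterator<Item = T>) -> Self {
        SimplePool {
            items: Mutex::new(items.into_iter().collect()),
        }
    }

    /// Try to check an item out of the pool.
    fn get(&self) -> Option<T> {
        self.items.lock().unwrap().pop_front()
    }

    /// Return an item to the pool.
    fn put(&self, item: T) {
        self.items.lock().unwrap().push_back(item);
    }
}

fn main() {
    let pool = SimplePool::new(vec![1, 2]);
    let a = pool.get().unwrap();
    let b = pool.get().unwrap();
    assert!(pool.get().is_none()); // pool exhausted
    pool.put(a);
    pool.put(b);
    println!("pool round-trip ok");
}
```

A single lock around the queue keeps the code simple and makes resizing trivial, but every checkout and checkin serializes on that mutex, which is exactly the cost a lock-free queue avoids.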

I've looked at your Semaphore implementation and the one from async-lock. I gave both a try and both perform better than the one from tokio. I'm just very hesitant to switch until we figure out why the benchmark deadlocks.

Using a different Semaphore implementation and crossbeam-queue yields a performance improvement of 2x-3x which is quite amazing and worth exploring.


I also opened an issue in the bb8 issue tracker, but since it has a completely different implementation and doesn't even use semaphores I don't think those problems are related in any way:


I just hope it's not an error in the benchmark implementation and I've opened those two issues in error. 🙈

@Astro36 Astro36 added bug Something isn't working good first issue Good for newcomers labels May 5, 2022
@Astro36 Astro36 added the help wanted Extra attention is needed label Jun 12, 2022
@romanstingler

@bikeshedder for me qp-postgres-v0.1 works, but as you said, qp-v0.2.0 fails.
If I increase the pool_size to exactly my CPU thread count or higher, the benchmark passes.
Could you check this on your machine?

@bikeshedder
Contributor Author

bikeshedder commented Jan 25, 2023

If you increase the pool_size to >= num_cpus, it's very likely that the semaphore is never contended. You could try adding some tokio::task::yield_now calls. That said, there is no great substitute for real CPU cores other than more real CPU cores.

Connection pools need a fair and correct semaphore implementation. Even async-lock fails in one of those regards:

To my knowledge only one fair and correct semaphore for async code exists. That's the one from the tokio project.

This is the reason why I'm sticking with the tokio::sync::Semaphore in deadpool for now. It's fair and proven correct.
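For anyone following along, "fair" here means FIFO hand-off of permits: waiters are served strictly in arrival order, so no task can starve. Here is a minimal blocking sketch of that idea using only std (ticket-based, purely illustrative; tokio's async Semaphore works quite differently internally, with an intrusive waiter list and no blocking):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

/// Illustrative fair (FIFO) counting semaphore: each arriving waiter
/// takes a ticket, and permits are handed out strictly in ticket order.
struct FairSemaphore {
    state: Mutex<State>,
    cond: Condvar,
}

struct State {
    permits: usize,
    next_ticket: u64, // ticket handed to the next arriving waiter
    now_serving: u64, // ticket allowed to take a permit next
}

impl FairSemaphore {
    fn new(permits: usize) -> Self {
        FairSemaphore {
            state: Mutex::new(State { permits, next_ticket: 0, now_serving: 0 }),
            cond: Condvar::new(),
        }
    }

    fn acquire(&self) {
        let mut s = self.state.lock().unwrap();
        let my_ticket = s.next_ticket;
        s.next_ticket += 1;
        // Wait until it is our turn AND a permit is free.
        while s.now_serving != my_ticket || s.permits == 0 {
            s = self.cond.wait(s).unwrap();
        }
        s.permits -= 1;
        s.now_serving += 1;
        self.cond.notify_all();
    }

    fn release(&self) {
        let mut s = self.state.lock().unwrap();
        s.permits += 1;
        self.cond.notify_all();
    }
}

fn main() {
    // Four workers contend for a single permit; all complete, none starve.
    let sem = Arc::new(FairSemaphore::new(1));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let sem = Arc::clone(&sem);
            thread::spawn(move || {
                sem.acquire();
                // critical section
                sem.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("all workers acquired and released the permit");
}
```

An unfair semaphore, by contrast, may let a late arrival barge ahead of a waiter whenever a permit frees up, which under sustained load is precisely how individual tasks end up waiting unboundedly.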

Personally I wouldn't trust bb8 or qp as long as those issues haven't been addressed. Any high-load situation can easily render your program unusable. It doesn't even have to be a DoS attack. Even normal load can trigger this issue.

@romanstingler

@bikeshedder thank you for your answer.
I just started learning Rust, and I began porting some of our PHP controllers to Rust using Axum and diesel_async, testing with both its deadpool and bb8 feature flags.

Across various benchmarks and concurrency tests, I couldn't reproduce a hang with bb8.
Please note, that this is just a prototype and not meant for production use.
I love your passion and your work.
