Stuck in "Timed out in bb8" #67
I'm afraid there are still ways to "leak" connections out of the pool. Can you tell me how you have configured/instantiated the pool?
pub type Pool = bb8::Pool<bb8_postgres::PostgresConnectionManager<tokio_postgres::NoTls>>;
pub type Conn = tokio_postgres::Client;

pub async fn create_pool(settings: &Settings) -> Result<Pool, tokio_postgres::Error> {
    let db_settings = &settings.database;
    let pg_mgr = PostgresConnectionManager::new_from_stringlike(
        format!(
            "postgresql://{}:{}@{}:{}/{}",
            &db_settings.user, &db_settings.pass,
            &db_settings.host, &db_settings.port,
            &db_settings.name,
        ),
        tokio_postgres::NoTls,
    )
    .map_err(|err| {
        error!("{}", err);
    })
    .unwrap();

    Pool::builder().build(pg_mgr).await
}
and then I pass it to hyper like this:

let db_pool = create_pool(settings).await?;
let addr = (host, port).into();
let make_service = make_service_fn(move |_| {
    let db_pool = db_pool.clone();
    async move {
        Ok::<_, Error>(service_fn(move |req| {
            let db_pool = db_pool.clone();
            async move {
                handle_request(req, db_pool).await
            }
        }))
    }
});
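As a side note on the create_pool shown above: a minimal sketch of the same constructor that forwards the connection-string parse error with ? instead of logging it and unwrapping. It assumes the same Settings type and Pool alias as above; new_from_stringlike already returns a tokio_postgres::Error for a bad configuration string.

pub async fn create_pool(settings: &Settings) -> Result<Pool, tokio_postgres::Error> {
    let db = &settings.database;
    // Forward the parse error to the caller instead of panicking here.
    let pg_mgr = PostgresConnectionManager::new_from_stringlike(
        format!(
            "postgresql://{}:{}@{}:{}/{}",
            db.user, db.pass, db.host, db.port, db.name,
        ),
        tokio_postgres::NoTls,
    )?;
    Pool::builder().build(pg_mgr).await
}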
In Cargo.toml:
Can you try with #68? I think this should fix the problem.
I've changed Cargo.toml:
It could be because of the unfairness of the lock.
Interesting. I pushed a change in 8ec0c0a to switch to tokio's Mutex.
This commit also didn't help solve the problem. With this connection pool, https://github.com/bikeshedder/deadpool, everything is ok. Maybe you could check how it's implemented and do the same thing, but I suspect its implementation is fundamentally different.
Yeah, deadpool uses a very different approach. I have one more idea which is in progress, would be happy if you could test that as soon as I finish it.
Ok, I'm still here.
I just pushed some more changes to the branch.
No, it doesn't fix the problem. The error message doesn't appear now; requests just hang forever.
So how are you stress-testing your project?
I used JMeter, but I noticed that this problem appears even if I simply take the URL of the slowest GET request (~100 ms), put it into Firefox's address bar, and press F5 frequently enough (it takes ~10 seconds for the problem to appear). So I tested your changes this way rather than with the JMeter tests.
I was experiencing the same issue when testing pooling with Redis; it would get stuck in the same "Timed out in bb8" state.
I was just guessing, so I could be wrong. Although tokio's mutex is backed by a FIFO linked list, which makes it fair, it also makes a connection being returned to the pool wait a very long time for the lock, or even deadlock. I imagine a lock that is fair but also gives returning connections the highest priority in the wait list would be best for bb8.
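To illustrate the idea in that comment (this is not how bb8 is implemented; it is just a sketch of the "returning connections never wait" approach, with invented ToyPool and Conn types): puts go through an unbounded channel, whose send is synchronous, while getters queue fairly on a tokio Mutex around the receiver.

use tokio::sync::{mpsc, Mutex};

// Invented stand-in for a real database connection.
struct Conn(u32);

struct ToyPool {
    tx: mpsc::UnboundedSender<Conn>,
    rx: Mutex<mpsc::UnboundedReceiver<Conn>>,
}

impl ToyPool {
    fn new(size: u32) -> Self {
        let (tx, rx) = mpsc::unbounded_channel();
        for i in 0..size {
            // Cannot fail while the receiver is alive.
            let _ = tx.send(Conn(i));
        }
        Self { tx, rx: Mutex::new(rx) }
    }

    async fn get(&self) -> Conn {
        // Getters wait here; tokio's Mutex queues them in FIFO order.
        let mut rx = self.rx.lock().await;
        rx.recv().await.expect("the sender half is never dropped")
    }

    fn put(&self, conn: Conn) {
        // Never awaits: a task handing a connection back cannot get stuck
        // behind tasks that are waiting to check one out.
        let _ = self.tx.send(conn);
    }
}

A real pool also has to handle connection creation, liveness checks and timeouts, which is where most of the complexity lives; this sketch only shows the prioritisation idea.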
Hi @djc, do you have any progress on this issue? I know cdrs uses bb8 as its connection pool for Cassandra.
I have a branch in progress, but haven't been able to spend much time on it recently. I'll see if I can spend some time on it soon.
I've spent a bunch of time over the past days to improve the internals, and I think issues like this should be fixed. If you're still interested, please give current master a shot and let me know if it improves things for you.
Hi @djc, thank you so much. I read in another issue that someone has tried the master version and the issue is solved. Do you plan to release a new version as 0.4.x?
No, in order to provide more robustness I've had to make some API changes.

I'll likely also release the 0.6 version soon after, which will rely on tokio 0.3. That will not come with redis support until the redis crate is also updated to tokio 0.3, though. This is just my thinking; any feedback on what people are looking for is much appreciated.
0.5.0 has been released, so I'm going to close this issue for now. If you still see issues, please open a new issue!
This seems to be reproducible (again?) in 0.6.0. To clarify, I'm using bb8-postgres. This consistently happens when I stress test my server using wrk with a large number of connections.
Can you say a bit more about how you've configured the pool and how you use it? Just get()?
Just get(). I made a wrapper around the future returned by get().

As for why the handlers were being canceled, my best guess is that at the end of the benchmark wrk forcefully closed all the connections that were still waiting for a response, and Hyper decided that there was no reason to wait for their handlers to complete.
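A minimal sketch of what such a drop-detecting wrapper could look like (the CancelProbe name and the log message are invented here; it expects the wrapped future to be Unpin, e.g. Box::pin(pool.get())):

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

struct CancelProbe<F> {
    inner: F,
    completed: bool,
}

impl<F> CancelProbe<F> {
    fn new(inner: F) -> Self {
        Self { inner, completed: false }
    }
}

impl<F: Future + Unpin> Future for CancelProbe<F> {
    type Output = F::Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Safe: F is Unpin, so CancelProbe<F> is Unpin as well.
        let this = self.get_mut();
        match Pin::new(&mut this.inner).poll(cx) {
            Poll::Ready(out) => {
                this.completed = true;
                Poll::Ready(out)
            }
            Poll::Pending => Poll::Pending,
        }
    }
}

impl<F> Drop for CancelProbe<F> {
    fn drop(&mut self) {
        if !self.completed {
            // The future was dropped before it resolved, i.e. the request
            // handler awaiting it was canceled.
            eprintln!("pool checkout future was canceled before completing");
        }
    }
}

It would be awaited as CancelProbe::new(Box::pin(pool.get())).await; if hyper drops the handler before the checkout completes, the Drop impl reports it.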
This minimally reproduces the bug:

use std::{convert::Infallible, future::Future, task::Poll};

use async_trait::async_trait;
use futures::future::poll_fn;

struct ConnectionManager;
struct Connection;

#[async_trait]
impl bb8::ManageConnection for ConnectionManager {
    type Connection = Connection;
    type Error = Infallible;

    async fn connect(&self) -> Result<Self::Connection, Self::Error> {
        Ok(Connection)
    }

    async fn is_valid(&self, _: &mut bb8::PooledConnection<'_, Self>) -> Result<(), Self::Error> {
        Ok(())
    }

    fn has_broken(&self, _: &mut Self::Connection) -> bool {
        false
    }
}

// Works with flavor = "current_thread", too.
#[tokio::main]
async fn main() {
    let pool = bb8::Pool::builder().max_size(20).build(ConnectionManager).await.unwrap();

    let mut connections = Vec::new();
    // With flavor = "current_thread" this issue never occurs if the number of connections acquired
    // here is less than the pool size. With flavor = "multi_thread", however, this still
    // *sometimes* occurs as long as at least one connection is acquired.
    for _ in 0..20u32 {
        connections.push(pool.get().await);
    }

    let mut futures = Vec::new();
    for _ in 0..20u32 {
        futures.push(Box::pin(pool.get()));
        // Poll the future once.
        poll_fn(|context| {
            let _ = futures.last_mut().unwrap().as_mut().poll(context);
            Poll::Ready(())
        }).await;
    }

    // The order in which these are dropped is important, and the futures need to be dropped, not awaited.
    drop(connections);
    drop(futures);

    pool.get().await.unwrap(); // error!
}
That is awesome, thanks! I'll check it out ASAP.
@lassipulkkinen #91 fixes your minimal reproduction. Can you check if it fixes your actual application?
That fixed it, thanks!
I released this fix as 0.5.2 (tokio 0.2) and 0.6.2 (tokio 0.3), thanks for the quick feedback!
I have a web API application that uses bb8-postgres, tokio_postgres and hyper. When I started to do performance tests, I noticed that if there are too many requests, it shows the error "Timed out in bb8" and it doesn't go away; only a restart helps to get rid of it.

I create a single instance of bb8::Pool and clone it for each request. Then I do:

db_pool.run(|conn| async { Ok((<some_func_that_does_queries_using_conn>.await, conn)) }).await

I'm not sure if it's a bug or I'm doing something wrong. Could you help me, please?
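For comparison, here is a rough sketch of the same kind of call with the guard-based get() API used in the 0.6 reproduction further up. The user_count function, query and table name are invented for illustration; Pool is the alias from the snippet near the top of the thread.

async fn user_count(pool: &Pool) -> Result<i64, tokio_postgres::Error> {
    // Check out a connection; the guard hands it back to the pool when dropped.
    let conn = pool.get().await.expect("checkout failed or timed out");
    // The guard dereferences to tokio_postgres::Client, so queries run on it directly.
    let row = conn.query_one("SELECT count(*) FROM users", &[]).await?;
    Ok(row.get(0))
}

The connection is returned to the pool when conn goes out of scope at the end of the function, instead of being handed back explicitly in a closure.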