
"dispatch dropped without returning error" when using Client from tests #2112

Closed · Kestrer opened this issue Jan 18, 2020 · 13 comments

Kestrer commented Jan 18, 2020

My library uses a global Client to run its requests.

The following code:

#[cfg(test)]
mod tests {
    use hyper::{
        client::{Client, HttpConnector},
        Body, Request,
    };

    lazy_static::lazy_static! {
        static ref CLIENT: Client<HttpConnector, Body> = Client::new();
    }

    async fn req(uri: &str) {
        CLIENT
            .request(Request::builder().uri(uri).body(Body::from("")).unwrap())
            .await
            .unwrap();
    }

    macro_rules! tests {
        ($($name:ident)*) => {
            $(
                #[tokio::test]
                async fn $name() {
                    req("http://example.org/").await;
                }
            )*
        };
    }
    tests!(t1 t2 t3 t4 t5 t6 t7 t8 t9 t10);
}

With tokio 0.2.9, hyper 0.13.1, and lazy_static 1.4.0, this generates 10 tests, t1–t10, each making a simple request through the global client. On every run of cargo test, a random subset of them panics with this backtrace.

This has been reported before, but that issue was closed by its author.

The cause seems to be that I am using the same Client from different runtimes on different threads.

seanmonstar (Member) commented Jan 20, 2020

The Client spawns background tasks to monitor the HTTP connection status, and if the executor drops such a task before it determines the connection was closed, it panics with that message. It was originally assumed that an executor would never drop the task early. You can fix this in your code by making sure the Client is always used from the same executor.

Kestrer (Author) commented Jan 21, 2020

So what is the best way to write tests that use a global client? #[tokio::test] uses different executors by default.

sfackler (Contributor) commented Jan 21, 2020

Why do you need to use a global client?

LucioFranco (Member) commented Jan 21, 2020

I would highly recommend not using a global client but instead creating a client per test. This is what we do for most of our tests in https://github.com/timberio/vector/ and it works great.

Kestrer (Author) commented Jan 21, 2020

@LucioFranco That would require a major breaking API change for my library and make it more complex. I would rather not, but if it is the only way to make tests work when multi-threaded then I will.

LucioFranco (Member) commented Jan 21, 2020

I know rusoto does this, which has caused weird test failures for me. In general, using a global client for all your tests is bound to cause issues such as race conditions.

roignpar commented Jan 29, 2020

@Koxiaet I ran into a similar issue when using the same reqwest client for several integration tests. The client does some authentication and I wanted to avoid running that on every test. I ended up using a Mutex. You can find the code here: https://github.com/roignpar/thetvdb/blob/master/tests/client.rs

rinde commented May 5, 2020

> I know rusoto does this which has caused weird test failures for me. In general using a global client for all your tests is bound to cause issues with race conditions etc.

@LucioFranco Did you manage to find a solution for testing rusoto? My current workaround is to limit the number of test threads to 1. It would be nice if this could somehow be resolved.

cquintana-verbio commented May 18, 2020

We are hitting the same panic when testing with rusoto, especially with the s3 module.

If there's any way other than running these tests with the thread limit set to 1, it would be nice to hear it!

(And of course other than wrapping the client in a Mutex.)

LucioFranco (Member) commented May 19, 2020

For rusoto you can create a new HttpClient (https://docs.rs/rusoto_core/0.43.0/rusoto_core/request/struct.HttpClient.html) and pass it into the specific service's constructor. This avoids the shared lazy_static client.

cquintana-verbio commented May 20, 2020

@LucioFranco That was it!
Thanks a lot!

rinde commented May 20, 2020

@LucioFranco amazing! Thank you very much

marioortizmanero commented Aug 17, 2020

I had a similar issue and ended up removing the global client and using multiple ones instead. Why is the client marked as thread-safe when it isn't? Shouldn't it be !Sync?

Do I open a new issue about this?

joshuafleck added a commit to meetcleo/re_dms that referenced this issue Oct 7, 2021
There is an [issue](hyperium/hyper#2112) on the hyper repo that matches an [exception](https://rollbar.com/sagemaker-rollbar/re_dms/items/76/?item_page=0&item_count=100&#traceback) we've been encountering in re_dms. The comments in the issue suggest that hyper does not support sharing a client across threads.

Changing our implementation to create a new Rollbar client for each panic to see if that prevents this particular issue from occurring in the future.
joshuafleck added a commit to meetcleo/re_dms that referenced this issue Aug 15, 2022

8 participants