
After switch to tokio 0.2 from 0.2.0-alpha.6, I found a performance decrease #1859

Closed
importcjj opened this issue Nov 29, 2019 · 12 comments

@importcjj

Version

  • 0.2
  • 0.2.0-alpha.6

Platform

Darwin helloworlddeMBP 18.0.0 Darwin Kernel Version 18.0.0: Wed Aug 22 20:13:40 PDT 2018; root:xnu-4903.201.2~1/RELEASE_X86_64 x86_64

Description

mobc is a database connection pool that uses tokio 0.2.0-alpha.6. After tokio 0.2 was released, I wanted to switch to it, but when I finished upgrading, my example showed a performance decrease.

There are two versions of the example.

They are almost the same. As for DefaultExecutor: in tokio 0.2.0-alpha.6 it is tokio::executor::DefaultExecutor, while in tokio 0.2 it is just a wrapper that calls the global spawn function.
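
In other words, the 0.2 wrapper is essentially this shape (a hypothetical sketch, not the actual mobc code):

use std::future::Future;

// Hypothetical stand-in for the wrapper described above: under tokio 0.2
// there is no executor instance to hold, so the type simply forwards
// every task to the global tokio::spawn.
struct DefaultExecutor;

impl DefaultExecutor {
    fn spawn<F>(&self, fut: F)
    where
        F: Future<Output = ()> + Send + 'static,
    {
        tokio::spawn(fut);
    }
}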

  1. tokio(0.2.0-alpha.6) takes 760ms to run the example.
  2. tokio(0.2.1) takes 2.1s to finish.
  3. By the way, async-std(1.0) takes 860ms.

I don't know where the problem is. Can someone help me?

@carllerche
Member

try wrapping the main fn in a spawn(...).await and let me know how that impacts results?

I can try to dig in next week if it doesn't help. The steps to repro are to just run global_tokio_runtime?

@importcjj
Author

Thanks for your reply.

After reading through the documentation, I found that I only had the 'macros' and 'time' features enabled and had not enabled the rt-threaded feature. After enabling it, the example takes 900ms to run.

One small suggestion: it might be more helpful to newcomers if you listed all the features in the documentation.
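
For reference, the Cargo.toml dependency I ended up with looks roughly like this (the exact feature list for this example is my assumption):

[dependencies]
# "rt-threaded" enables the multi-threaded scheduler that
# #[tokio::main] uses by default in tokio 0.2.
tokio = { version = "0.2", features = ["macros", "time", "rt-threaded"] }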

@carllerche
Member

What happens when you do the spawn thing from ^^?

@carllerche
Member

If there are any perf regressions from the alpha, I consider them a bug.

@importcjj
Author

What happens when you do the spawn thing from ^^?

Do you mean wrapping it like this?

use std::time::Instant;

use futures::StreamExt;
use tokio::sync::mpsc;

// MAX and do_redis are defined elsewhere in the example.
#[tokio::main]
async fn main() {
    env_logger::init();
    tokio::spawn(async {
        let mark = Instant::now();
        let (tx, mut rx) = mpsc::channel::<()>(MAX);
        do_redis(tx).await.unwrap();

        // Count one message per completed request, up to MAX.
        let mut num: usize = 0;
        while let Some(_) = rx.next().await {
            num += 1;
            if num == MAX {
                break;
            }
        }

        println!("costs {:?}", mark.elapsed());
    })
    .await
    .unwrap();
}

@importcjj importcjj reopened this Nov 29, 2019
@importcjj
Author

Here are some test data.

tokio 0.2

spawn 5000 requests

item            | 1          | 2          | 3          | 4          | 5
init pool costs | 5.8391ms   | 4.4394ms   | 3.1135ms   | 3.6065ms   | 3.314ms
total costs     | 902.1982ms | 1.1180583s | 1.0059648s | 1.0079833s | 1.0039282s

spawn 1000 requests

item            | 1          | 2          | 3          | 4          | 5
init pool costs | 8.318ms    | 4.6037ms   | 3.3679ms   | 3.518ms    | 5.1848ms
total costs     | 359.3212ms | 195.5018ms | 135.8528ms | 196.5892ms | 137.9491ms

tokio 0.2 alpha.6

spawn 5000 requests

item            | 1          | 2         | 3          | 4          | 5
init pool costs | 12.5228ms  | 6.1448ms  | 4.6994ms   | 4.433ms    | 4.8311ms
total costs     | 733.5706ms | 627.137ms | 627.7345ms | 668.4303ms | 702.078ms

spawn 1000 requests

item            | 1          | 2          | 3         | 4          | 5
init pool costs | 4.747ms    | 9.3077ms   | 4.0337ms  | 5.4986ms   | 4.7055ms
total costs     | 150.4542ms | 148.3627ms | 112.566ms | 130.1201ms | 112.9115ms

@carllerche
Member

Ok thanks, this is unexpected. I will try to dig in more soon.

@carllerche
Member

Can you explain more how you get that data? What is init pool cost vs. total cost? How can I repro?

@carllerche
Member

I dug into it. I believe the small performance regression you are still seeing is due to the redis crate still being on Tokio 0.1. Because of this, you are running two runtimes simultaneously (0.1 and 0.2) and there is overhead in communicating between them.

I would suggest trying to update the redis crate to 0.2 and then seeing what results you get. I expect they will be favorable.
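
To make that overhead concrete: every futures 0.1 future (which is what redis produced at the time) has to cross a compat shim before the 0.2 runtime can poll it, roughly along these lines (an illustrative sketch assuming futures 0.3 with its "compat" feature and the 0.1 crate renamed to futures01, not the actual code path):

use futures::compat::Future01CompatExt;

// Awaiting a futures 0.1 future from async/await code on the tokio 0.2
// runtime goes through a conversion adapter, and its I/O is still driven
// by the old 0.1 reactor, so each request pays for cross-runtime wakeups.
async fn drive_legacy<F>(f: F) -> Result<F::Item, F::Error>
where
    F: futures01::Future,
{
    f.compat().await
}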

@importcjj
Author

redis-rs doesn't seem to be on tokio 0.2 yet, so I used tokio-postgres 0.5.0-alpha.2 for testing.

The test code is almost the same as the previous example.
new_tokio_runtime.rs

Here are the test data.

tokio 0.2 + tokio-postgres 0.5.0-alpha.2 (upgraded to tokio 0.2)

spawn 5000

1          | 2          | 3          | 4          | 5
2.3626501s | 2.9645146s | 2.3965243s | 2.3799485s | 2.2836898s

spawn 1000

1          | 2          | 3          | 4          | 5
953.6143ms | 921.8127ms | 737.6344ms | 749.6553ms | 726.3452ms

tokio 0.2-alpha + tokio-postgres 0.5.0-alpha.1

spawn 5000

1          | 2          | 3          | 4          | 5
2.4305968s | 2.1030018s | 2.1764689s | 2.3157885s | 2.0305765s

spawn 1000

1          | 2          | 3          | 4          | 5
853.4546ms | 886.0249ms | 1.2153216s | 853.1735ms | 752.0457ms

As you can see, they're pretty close.

@carllerche
Member

Still not what I would expect.

Assuming that this is what you are testing, could you wrap main in a tokio::spawn:

use std::time::Instant;

use futures::StreamExt;
use tokio::sync::mpsc;

// MAX and do_postgres are defined elsewhere in the example.
#[tokio::main]
async fn main() {
    // env_logger::init();
    tokio::spawn(async {
        let mark = Instant::now();
        let (tx, mut rx) = mpsc::channel::<()>(MAX);

        do_postgres(tx).await.unwrap();

        // Count one message per completed request, up to MAX.
        let mut num: usize = 0;
        while let Some(_) = rx.next().await {
            num += 1;
            if num == MAX {
                break;
            }
        }

        println!("cost {:?}", mark.elapsed());
    }).await.unwrap();
}

If that doesn't improve things measurably, could you help me reproduce your benchmark? It looks like I need PostgreSQL running somehow; are there steps to reproduce the test?

@importcjj
Author

Sorry for the late reply.

When I wrap main in a tokio::spawn, nothing improves.

To run this example, you just need to set up a local Postgres server, create a user and a database with the same name, change the connection config in the code, and run it.
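
Concretely, the part to change is the connection setup, which in tokio-postgres 0.5 looks roughly like this (host/user/dbname here are placeholders, not the values from my code):

use tokio_postgres::{Error, NoTls};

// Placeholder credentials; adjust host/user/dbname to the local setup.
async fn connect_example() -> Result<tokio_postgres::Client, Error> {
    let (client, connection) =
        tokio_postgres::connect("host=localhost user=bench dbname=bench", NoTls).await?;

    // The connection object drives the socket I/O and must be polled,
    // so spawn it onto the runtime alongside the client handle.
    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {}", e);
        }
    });

    Ok(client)
}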
