Performance of bb8 with tokio-postgres worse than r2d2 with postgres #29

Closed
bikeshedder opened this issue Jul 31, 2019 · 5 comments

@bikeshedder

I wanted to go full async with the application I'm currently building and tried both bb8 and l337 with tokio-postgres, only to find that both performed worse than r2d2 with postgres:

https://bitbucket.org/bikeshedder/actix_web_async_postgres/src/master/

Am I doing anything wrong or is this maybe an issue with tokio-postgres?
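
For reference, a rough sketch of the two setups being compared (not the exact code from the linked repo; connection string, table name, and pool size are placeholders): the bb8 pool is checked out inside async code, while the r2d2 pool does blocking I/O and would normally run on a worker thread (e.g. via actix-web's web::block).

use bb8_postgres::PostgresConnectionManager;
use tokio_postgres::NoTls;

// Async path: bb8 + tokio-postgres. In the real app the pool is built once
// at startup and shared between handlers; it is built here only to keep the
// sketch self-contained.
async fn query_async() -> Result<i64, Box<dyn std::error::Error>> {
    let manager = PostgresConnectionManager::new_from_stringlike(
        "host=localhost user=app dbname=app",
        NoTls,
    )?;
    let pool = bb8::Pool::builder().max_size(16).build(manager).await?;

    let conn = pool.get().await?;
    let row = conn.query_one("SELECT count(*) FROM event", &[]).await?;
    Ok(row.get(0))
}

// Blocking path: r2d2 + postgres. Both the checkout and the query block the
// calling thread, so under actix-web this runs on a worker thread.
fn query_blocking() -> Result<i64, Box<dyn std::error::Error>> {
    let manager = r2d2_postgres::PostgresConnectionManager::new(
        "host=localhost user=app dbname=app".parse()?,
        NoTls,
    );
    let pool = r2d2::Pool::builder().max_size(16).build(manager)?;

    let mut client = pool.get()?;
    let row = client.query_one("SELECT count(*) FROM event", &[])?;
    Ok(row.get(0))
}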

@khuey
Collaborator

khuey commented Jul 31, 2019

One useful statistic to know there would be how many postgres connections are actually used in each case. I suspect the async libraries are using far more connections than the blocking r2d2.
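
One way to check that (a sketch, not something from the benchmark repo; assumes a local connection that can see the other sessions) is to count the server backends per database in pg_stat_activity while wrk is running:

use tokio_postgres::NoTls;

#[tokio::main]
async fn main() -> Result<(), tokio_postgres::Error> {
    let (client, connection) =
        tokio_postgres::connect("host=localhost user=postgres", NoTls).await?;
    // The Connection drives the socket and must be polled, so spawn it.
    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {}", e);
        }
    });

    // Count server backends per database; the pooled connections opened by
    // each implementation show up here while the benchmark is running.
    let rows = client
        .query(
            "SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname",
            &[],
        )
        .await?;
    for row in &rows {
        let db: Option<String> = row.get(0);
        let backends: i64 = row.get(1);
        println!("{}: {}", db.unwrap_or_default(), backends);
    }
    Ok(())
}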

@bikeshedder
Author

As you can see in the code I use the same pool size for all three implementations:

const POOL_MIN_SIZE: u16 = 4;
const POOL_MAX_SIZE: u16 = 16;

They were not extracted into constants before. I just changed that to make it more obvious that the pools are configured exactly the same.

I also made sure to run htop with a filter and could see 48 (3 × 16) DB connections to PostgreSQL, which are all evenly utilized depending on the test currently running.
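
Spelled out, the same two constants can drive both builders (a sketch, not the exact wiring from the repo; it assumes POOL_MIN_SIZE maps onto each builder's min_idle knob):

const POOL_MIN_SIZE: u16 = 4;
const POOL_MAX_SIZE: u16 = 16;

// r2d2 (blocking): min_idle and max_size are u32 in the builder.
fn build_r2d2<M: r2d2::ManageConnection>(manager: M) -> Result<r2d2::Pool<M>, r2d2::Error> {
    r2d2::Pool::builder()
        .min_idle(Some(POOL_MIN_SIZE.into()))
        .max_size(POOL_MAX_SIZE.into())
        .build(manager)
}

// bb8 (async): same knobs, but building the pool is itself async.
async fn build_bb8<M: bb8::ManageConnection>(manager: M) -> Result<bb8::Pool<M>, M::Error> {
    bb8::Pool::builder()
        .min_idle(Some(POOL_MIN_SIZE.into()))
        .max_size(POOL_MAX_SIZE.into())
        .build(manager)
        .await
}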

@khuey
Collaborator

khuey commented Jul 31, 2019

On my machine, with a release build of your test binary, I get

Running 2m test @ http://localhost:8000/v1.0/event_list_l337
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.12ms  567.15us  29.99ms   78.09%
    Req/Sec     7.79k   164.50     8.97k    73.35%
  Latency Distribution
     50%    4.04ms
     75%    4.37ms
     90%    4.77ms
     99%    5.98ms
  3720626 requests in 2.00m, 1.49GB read
Requests/sec:  30994.28
Transfer/sec:     12.71MB
Running 2m test @ http://localhost:8000/v1.0/event_list_r2d2
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.65ms    2.41ms  50.40ms   88.78%
    Req/Sec     7.23k   316.71    27.38k    98.71%
  Latency Distribution
     50%    3.69ms
     75%    4.42ms
     90%    7.49ms
     99%   14.98ms
  3453682 requests in 2.00m, 1.38GB read
Requests/sec:  28756.74
Transfer/sec:     11.79MB
Running 2m test @ http://localhost:8000/v1.0/event_list_bb8
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.96ms  322.86us  32.02ms   82.08%
    Req/Sec     6.48k    92.65     7.41k    92.40%
  Latency Distribution
     50%    4.92ms
     75%    5.09ms
     90%    5.28ms
     99%    5.81ms
  3096725 requests in 2.00m, 1.24GB read
Requests/sec:  25802.00
Transfer/sec:     10.58MB

which is more along the lines of what I would expect.

@bikeshedder
Author

I just updated r2d2 to the latest RC, which also uses a newer version of (tokio-)postgres; this resulted in a very measurable performance drop for the r2d2 implementation. It might all be related to this: sfackler/rust-postgres#469

@bikeshedder
Author

It really seems to be an issue with the rust-postgres crate: sfackler/rust-postgres#469 (comment)
