PG Performance #859

Closed
u2 opened this Issue Apr 14, 2017 · 4 comments

@u2

u2 commented Apr 14, 2017

Hi, I have been using Diesel in my development, but I find that inserting and updating in the database are a bit slow. Here is my benchmark.

My laptop is:
MacBook Pro (Retina, 13-inch, Early 2015)
CPU: 2.7 GHz Intel Core i5
RAM: 8 GB 1867 MHz DDR3
Disk: Macintosh HD (SSD)

Rust:
rustc 1.18.0-nightly (50c186419 2017-04-06)

diesel:
diesel 0.12.0

postgres:
postgres (PostgreSQL) 9.6.2

pub fn update_balance(conn: &PgConnection,
                      _account: &str,
                      _amount: i64,
                      add: bool)
                      -> diesel::result::QueryResult<Balance> {

    use schema::balances::dsl::*;
    use diesel::result::Error;

    // Fetch the existing balance row for this account, if any.
    let result: Result<Balance, _> = balances.filter(account.eq(_account)).first(conn);

    match result {
        Ok(balance) => {
            let pre_amount = balance.amount;
            let new_amount = if add {
                pre_amount + _amount
            } else {
                pre_amount - _amount
            };

            // Refuse to let a balance go negative; the surrounding transaction
            // (if any) will be rolled back.
            if new_amount < 0 {
                return Err(Error::RollbackTransaction);
            }

            // The account already exists, so update its amount in place.
            diesel::update(balances.filter(account.eq(_account)))
                .set(amount.eq(new_amount))
                .get_result::<Balance>(conn)
        }
        _ => {
            // No row for this account yet, so insert one.
            let new_balance = NewBalance {
                account: _account,
                amount: _amount,
            };

            diesel::insert(&new_balance)
                .into(schema::balances::table)
                .get_result::<Balance>(conn)
        }
    }
}
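
For context, update_balance is written to run inside a transaction (that is what returning Error::RollbackTransaction is for); a caller might look roughly like the sketch below, where transfer and its arguments are hypothetical.

// Hypothetical caller: if the closure returns an Err (including the
// Error::RollbackTransaction produced above), Diesel rolls the whole
// transaction back, so neither balance is modified.
pub fn transfer(conn: &PgConnection,
                from_account: &str,
                to_account: &str,
                amount: i64)
                -> diesel::result::QueryResult<()> {
    conn.transaction(|| {
        update_balance(conn, from_account, amount, false)?; // debit
        update_balance(conn, to_account, amount, true)?;    // credit
        Ok(())
    })
}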
pub fn test_connection() -> PgConnection {
    dotenv().ok();

    let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    PgConnection::establish(&database_url)
        .expect(&format!("Error connecting to {}", database_url))
}
#[cfg(test)]
mod tests {
    use super::*;
    use test_helper::tests::*;

    describe! stainless {
        bench "db insert" (bencher) {
            let conn = test_connection();
            bencher.iter(|| {
                update_balance(&conn, "Ada Lovelace", 1, true)
            });

            clear_tabs(&conn, vec!["balances"]);
        }

        bench "db query" (bencher) {
            use schema::states::dsl::*;
            let conn = test_connection();
            bencher.iter(|| {
                let _: Result<State, _> = states.filter(name.eq("Satoshi Nakamoto")).first(&conn);
            });

            clear_tabs(&conn, vec!["states"]);
        }
    }
}

RUST_TEST_THREADS=1 cargo bench
The result is:

test tests::stainless::db_insert ... bench:     424,769 ns/iter (+/- 1,338,453)
test tests::stainless::db_query  ... bench:      54,655 ns/iter (+/- 34,997)

I am using stainless for the benchmarks, but it does not affect the performance; I have tested that before.
So the benchmark shows that inserting runs at only a few thousand operations per second (about 3,124/second in my measurements).
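
For reference, this is how I turn the bencher output into a throughput figure (a quick sketch; the 424,769 ns/iter run shown above works out to roughly 2,350 calls per second, and the exact figure differs between runs):

fn main() {
    // ns per iteration as reported by cargo bench, taken from the run above.
    let ns_per_iter = 424_769.0_f64;
    // One second is 1e9 nanoseconds, so throughput = 1e9 / ns_per_iter.
    let calls_per_sec = 1_000_000_000.0 / ns_per_iter;
    println!("~{:.0} update_balance calls per second", calls_per_sec); // ~2354
}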

@sgrif

Member

sgrif commented Apr 15, 2017

You claim that it is Diesel that is slow, but give nothing to use as a baseline for comparison. Writes are much slower than reads in any RDBMS, including PG. I doubt you're seeing anything other than the performance of PG itself. Can you provide more specific details about why you think it is Diesel that is causing the slowdown here?
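
For example, timing the same kind of INSERT through rust-postgres directly would give you a driver-level baseline to compare against. An untested sketch (the connection URL is a placeholder, and the balances table comes from your snippet above):

extern crate postgres;

use postgres::{Connection, TlsMode};
use std::time::Instant;

fn main() {
    // Raw rust-postgres baseline: same table, plain INSERTs, no ORM involved.
    let conn = Connection::connect("postgres://localhost/your_db", TlsMode::None)
        .expect("failed to connect");

    let iterations: u32 = 10_000;
    let start = Instant::now();
    for _ in 0..iterations {
        conn.execute("INSERT INTO balances (account, amount) VALUES ($1, $2)",
                     &[&"Ada Lovelace", &1i64])
            .expect("insert failed");
    }
    let elapsed = start.elapsed();
    let total_ns = elapsed.as_secs() * 1_000_000_000 + elapsed.subsec_nanos() as u64;
    println!("{} inserts, ~{} ns per insert", iterations, total_ns / iterations as u64);
}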

@u2

u2 commented Apr 16, 2017

$ cat test.sql    
\set aid random(1, 100000 * :scale)
\set bid random(1, 1 * :scale)
\set tid random(1, 10 * :scale)
\set delta random(-5000, 5000)
BEGIN;
INSERT INTO distributors (did, name) VALUES (3, :bid);
END;
$ pgbench -c 15 -t 300 pgbench -r -f test.sql  
starting vacuum...ERROR:  relation "pgbench_branches" does not exist
(ignoring this error and continuing anyway)
ERROR:  relation "pgbench_tellers" does not exist
(ignoring this error and continuing anyway)
ERROR:  relation "pgbench_history" does not exist
(ignoring this error and continuing anyway)
end.
transaction type: test.sql
scaling factor: 1
query mode: simple
number of clients: 15
number of threads: 1
number of transactions per client: 300
number of transactions actually processed: 4500/4500
latency average = 1.039 ms
tps = 14436.819665 (including connections establishing)
tps = 14607.259483 (excluding connections establishing)
script statistics:
 - statement latencies in milliseconds:
         0.001  \set aid random(1, 100000 * :scale)
         0.001  \set bid random(1, 1 * :scale)
         0.000  \set tid random(1, 10 * :scale)
         0.000  \set delta random(-5000, 5000)
         0.172  BEGIN;
         0.244  INSERT INTO distributors (did, name) VALUES (3, :bid);
         0.365  END;

Excluding connection establishment, the tps is about 14,607.

But when I change to a single client running a single transaction:

$ pgbench -c 1 -t 1 pgbench -r -f test.sql  
starting vacuum...ERROR:  relation "pgbench_branches" does not exist
(ignoring this error and continuing anyway)
ERROR:  relation "pgbench_tellers" does not exist
(ignoring this error and continuing anyway)
ERROR:  relation "pgbench_history" does not exist
(ignoring this error and continuing anyway)
end.
transaction type: test.sql
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
number of transactions per client: 1
number of transactions actually processed: 1/1
latency average = 5.903 ms
tps = 169.405387 (including connections establishing)
tps = 487.329435 (excluding connections establishing)
script statistics:
 - statement latencies in milliseconds:
         0.031  \set aid random(1, 100000 * :scale)
         0.002  \set bid random(1, 1 * :scale)
         0.000  \set tid random(1, 10 * :scale)
         0.001  \set delta random(-5000, 5000)
         1.834  INSERT INTO distributors (did, name) VALUES (3, :bid);

I am a bit confused: why is this slower than Diesel? Doesn't cargo bench also run in a single thread?
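
Maybe the -c 1 -t 1 run is dominated by connection setup plus a single INSERT, since -t is transactions per client, so that latency comes from exactly one transaction. A longer single-client run, something like pgbench -c 1 -t 1000 -r -f test.sql pgbench, should give a steadier per-statement latency to compare against the Diesel numbers.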

@theduke

Contributor

theduke commented Apr 28, 2017

So, since Diesel is actually faster than the plain pgbench run, I think this can be closed?

@sgrif @u2

@u2

u2 commented Apr 28, 2017

Thank you.

@u2 u2 closed this Apr 28, 2017
