Error with PostgreSQL and high scale #219

Closed
pimeys opened this issue Jan 31, 2020 · 11 comments · Fixed by #221

Comments

@pimeys (Contributor) commented Jan 31, 2020

I'm experiencing a hard crash when reading a decimal value from a postgres table that uses a high-scale numeric type. The error happens with rust_decimal version 1.2.0; version 1.1.0 works fine. We can replicate the issue on all major postgres versions from 9 through 12.

My schema looks like this:

postgres=# \d "Test";
                Table "sql_load_test.Test"
 Column |      Type      | Collation | Nullable | Default
--------+----------------+-----------+----------+---------
 fl     | numeric(65,30) |           | not null | 0
 id     | integer        |           | not null |
 fl2    | numeric(65,15) |           |          |
Indexes:
    "Test_pkey" PRIMARY KEY, btree (id)

And we can reproduce this with a simple test program:

Cargo.toml:

[dependencies]
rust_decimal = { version = "1.2", features = ["postgres"] }
tokio-postgres = { version = "0.5", features = ["with-uuid-0_8", "with-chrono-0_4", "with-serde_json-1"] }
tokio = { version = "0.2", features = ["rt-core", "macros"] }

main.rs:

use rust_decimal::Decimal;
use tokio_postgres::{Error, NoTls};

#[tokio::main]
async fn main() -> Result<(), Error> {
    let (client, connection) = tokio_postgres::connect(
        "postgresql://postgres:prisma@localhost:5432/postgres",
        NoTls,
    )
    .await?;

    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {}", e);
        }
    });

    let rows = client
        .query("SELECT fl2 from sql_load_test.\"Test\"", &[])
        .await?;

    let value: Decimal = rows[0].get(0);
    println!("WORKS: {}", value);

    let rows = client
        .query("SELECT fl from sql_load_test.\"Test\"", &[])
        .await?;

    let value: Decimal = rows[0].get(0);
    println!("CRASHES: {}", value);

    Ok(())
}

The program output is here:

env RUST_BACKTRACE=1 cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.04s
     Running `target/debug/decimal-test`
WORKS: 1.200000000000000
thread 'main' panicked at 'attempt to multiply with overflow', /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libcore/num/mod.rs:1983:27
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src/libstd/sys_common/backtrace.rs:84
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src/libstd/sys_common/backtrace.rs:61
   4: core::fmt::write
             at src/libcore/fmt/mod.rs:1025
   5: std::io::Write::write_fmt
             at src/libstd/io/mod.rs:1426
   6: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:65
   7: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:50
   8: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:193
   9: std::panicking::default_hook
             at src/libstd/panicking.rs:210
  10: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:471
  11: rust_begin_unwind
             at src/libstd/panicking.rs:375
  12: core::panicking::panic_fmt
             at src/libcore/panicking.rs:84
  13: core::panicking::panic
             at src/libcore/panicking.rs:51
  14: core::num::<impl i64>::pow
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libcore/num/mod.rs:1983
  15: rust_decimal::postgres::<impl rust_decimal::decimal::Decimal>::from_postgres
             at /home/pimeys/.cargo/registry/src/github.com-1ecc6299db9ec823/rust_decimal-1.2.0/src/postgres.rs:99
  16: rust_decimal::postgres::postgres::<impl postgres_types::FromSql for rust_decimal::decimal::Decimal>::from_sql
             at /home/pimeys/.cargo/registry/src/github.com-1ecc6299db9ec823/rust_decimal-1.2.0/src/postgres.rs:507
  17: postgres_types::FromSql::from_sql_nullable
             at /home/pimeys/.cargo/registry/src/github.com-1ecc6299db9ec823/postgres-types-0.1.0/src/lib.rs:433
  18: tokio_postgres::row::Row::get_inner
             at /home/pimeys/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-postgres-0.5.1/src/row.rs:174
  19: tokio_postgres::row::Row::get
             at /home/pimeys/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-postgres-0.5.1/src/row.rs:140
  20: decimal_test::main::{{closure}}
             at src/main.rs:29
  21: <std::future::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/future.rs:43
  22: tokio::runtime::basic_scheduler::BasicScheduler<P>::block_on
             at /home/pimeys/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/runtime/basic_scheduler.rs:138
  23: tokio::runtime::Runtime::block_on::{{closure}}
             at /home/pimeys/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/runtime/mod.rs:411
  24: tokio::runtime::context::enter
             at /home/pimeys/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/runtime/context.rs:72
  25: tokio::runtime::handle::Handle::enter
             at /home/pimeys/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/runtime/handle.rs:34
  26: tokio::runtime::Runtime::block_on
             at /home/pimeys/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/runtime/mod.rs:408
  27: decimal_test::main
             at src/main.rs:4
  28: std::rt::lang_start::{{closure}}
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/rt.rs:67
  29: std::rt::lang_start_internal::{{closure}}
             at src/libstd/rt.rs:52
  30: std::panicking::try::do_call
             at src/libstd/panicking.rs:292
  31: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:78
  32: std::panicking::try
             at src/libstd/panicking.rs:270
  33: std::panic::catch_unwind
             at src/libstd/panic.rs:394
  34: std::rt::lang_start_internal
             at src/libstd/rt.rs:51
  35: std::rt::lang_start
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/rt.rs:67
  36: main
  37: __libc_start_main
  38: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

To save you digging through the large backtrace: the panic happens in the pow call (overflow) inside rust_decimal's from_postgres (frames 14 and 15 above, at rust_decimal-1.2.0/src/postgres.rs:99).
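
As a quick sanity check on why that pow call can overflow (a minimal standalone sketch, not the crate's actual code): i64 tops out just above 9.2e18, so 10 raised to anything past 18 cannot fit.

fn main() {
    // i64::MAX is 9_223_372_036_854_775_807, i.e. just over 9.2e18,
    // so 10^18 still fits but 10^19 and above do not.
    assert!(10i64.checked_pow(18).is_some());
    assert!(10i64.checked_pow(19).is_none());

    // With a numeric(65,30) column the exponent derived from the scale gets
    // far past 18, so an unchecked pow panics in debug builds with
    // 'attempt to multiply with overflow'.
}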

@pimeys (Contributor, Author) commented Jan 31, 2020

Oh, and of course we need some values inserted before running this:

insert into sql_load_test."Test" (id, fl, fl2) values (1, 1.2, 1.2);

@paupino (Owner) commented Jan 31, 2020

Thanks for raising this issue @pimeys - I'll take a look at this ASAP. My guess is that it's related to retaining scale upon reading values.

@pimeys (Contributor, Author) commented Jan 31, 2020

Check the PR for an even easier test that reveals the bug.

@paupino (Owner) commented Jan 31, 2020

The overflow has been traced to this line: https://github.com/paupino/rust-decimal/blob/master/src/postgres.rs#L99
Specifically: 10i64.pow((fixed_scale - scale) as u32).
Fixing this in itself isn't a huge issue. That said, I'll need to create a follow-up issue for appropriately handling scales higher than Decimal can represent; typically in other cases it does underflow rounding.
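
Not the actual patch in #221, just a sketch of the kind of guard that avoids the panic: use checked arithmetic so the caller can round or return an error instead of overflowing. The names rescale, raw_value, fixed_scale and scale below are hypothetical stand-ins for the values from_postgres works with.

// Hypothetical helper: the names are illustrative, not the crate's own.
fn rescale(raw_value: i64, fixed_scale: u32, scale: u32) -> Option<i64> {
    let diff = fixed_scale.checked_sub(scale)?;
    // checked_pow / checked_mul return None instead of panicking on overflow,
    // leaving the caller free to round or surface a conversion error.
    let factor = 10i64.checked_pow(diff)?;
    raw_value.checked_mul(factor)
}

fn main() {
    assert_eq!(rescale(12, 3, 1), Some(1200)); // 1.2 at scale 3 -> mantissa 1200
    assert_eq!(rescale(12, 30, 1), None);      // would need 10^29, which overflows i64
}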

@pimeys (Contributor, Author) commented Jan 31, 2020

So is underflow rounding what happens with 1.1 right now? We can live with it; I only found this by accident after updating the decimal crate, because our default scale happens to be that big... :)

@paupino (Owner) commented Jan 31, 2020

Not quite, but it was implicitly normalizing the number. One of the changes in 1.2 was to retain the postgres scale.

For example, suppose the database stored the number 1.200:

In 1.1 the number 1.2 would be created as a Decimal (i.e. m = 12, e = 1).
In 1.2 the number 1.200 would be created as a Decimal (i.e. m = 1200, e = 3).

I can get a fix out for the issue you're having pretty quickly, and will likely open a follow-up issue for the broader fix (i.e. if the scale were 100, etc.).
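
To make the 1.1 vs 1.2 difference above concrete, here's a small sketch using Decimal::new(mantissa, scale) from rust_decimal's public API; the two values compare equal but carry different scales.

use rust_decimal::Decimal;

fn main() {
    // What 1.1 effectively gave you: the normalized form of 1.200
    let normalized = Decimal::new(12, 1);   // m = 12,   e = 1
    // What 1.2 gives you: the postgres scale is retained
    let retained = Decimal::new(1200, 3);   // m = 1200, e = 3

    // Numerically equal...
    assert_eq!(normalized, retained);
    // ...but they display and report their scale differently.
    assert_eq!(normalized.to_string(), "1.2");
    assert_eq!(retained.to_string(), "1.200");
    assert_eq!(retained.scale(), 3);
}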

@pimeys (Contributor, Author) commented Jan 31, 2020

Yep, I have a test suite here that will quite easily show whether the fix works. We'd also like to track the latest versions of all the crates in quaint/prisma, so ping me when a fix is available! Thank you.

@paupino (Owner) commented Jan 31, 2020

@pimeys I've created PR #221, which should hopefully resolve the issue. Do you want to run your tests against it first to make sure it fixes things on your end?

@pimeys (Contributor, Author) commented Feb 1, 2020

Tested this PR and it works fine for numbers such as 1.2 with a large scale (30). However, another test with pi (3.141592653589793238462643383279) of course cuts off the last two digits when writing, and when reading it crashes with:

---- postgres::postgres::test::read_numeric_type stdout ----
thread 'postgres::postgres::test::read_numeric_type' panicked at 'Multiplication overflowed', src/decimal.rs:2572:21
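
A rough back-of-the-envelope check on why the last two digits get cut off (assuming Decimal's mantissa is a 96-bit integer, which caps it at roughly 7.9e28):

fn main() {
    // Largest mantissa a 96-bit integer can hold: 2^96 - 1, about 7.9e28.
    let max_mantissa: u128 = (1u128 << 96) - 1;

    // pi with all 30 fractional digits needs a mantissa of about 3.14e30,
    // which does not fit; dropping the last two digits gives about 3.14e28,
    // which does.
    let pi_scale_30: u128 = 3_141_592_653_589_793_238_462_643_383_279;
    let pi_scale_28: u128 = 31_415_926_535_897_932_384_626_433_832;

    assert!(pi_scale_30 > max_mantissa);
    assert!(pi_scale_28 < max_mantissa);
}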

@paupino (Owner) commented Feb 4, 2020

Hi @pimeys - it may be worth taking another look at master. It now rounds, rather than crashing, when a representation needs more than 28 significant digits.

@pimeys (Contributor, Author) commented Feb 7, 2020

Had quite a busy week. Yeah, this works. Thanks!
