
MathFlux sumBigDecimal loses scale of BigDecimal #260

Closed · kiwisincebirth opened this issue Sep 6, 2021 · 3 comments · Fixed by #261
Labels: type/bug (A general bug)

@kiwisincebirth (Contributor) commented Sep 6, 2021

MathFlux sumBigDecimal loses the scale of BigDecimal, and is not respecting standard BigDecimal behaviour.

Expected Behavior

The summation in the Flux should follow BigDecimal arithmetic, using the standard plus() operator provided by BigDecimal.

Actual Behavior

The summation logic is losing precision; the cause is the conversion of the BigDecimal to its Double value.
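
For illustration, a minimal standalone Java sketch (not taken from the reactor code base) of how the round-trip through Double drops the scale:

    import java.math.BigDecimal;

    public class ScaleLossDemo {
        public static void main(String[] args) {
            BigDecimal one = new BigDecimal("1.00");                          // scale 2
            BigDecimal roundTripped = BigDecimal.valueOf(one.doubleValue());

            System.out.println(one);          // 1.00
            System.out.println(roundTripped); // 1.0 -- the scale is lost in the Double round-trip
        }
    }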

Steps to Reproduce

The following test case (Kotlin code) highlights the error. I have not tried this in Java, but assume the same issue would be present in Java code as well.

    @Test
    fun test() {
        val ONE = BigDecimal("1.00")
        val ONE_PLUS_ONE = ONE.plus(ONE)
        val sumBigDecimal = MathFlux.sumBigDecimal(Flux.just(ONE,ONE)).block()

        assertThat(sumBigDecimal).isEqualTo(ONE_PLUS_ONE)
    }

The result of the above is:

expected: 2.00
but was : 2.0
org.opentest4j.AssertionFailedError: 

Possible Solution

The issue seems to be in the class MonoSumBigDecimal, line 77:

BigDecimal bigDecimalValue = BigDecimal.valueOf(number.doubleValue());

where the Number being processed in the stream is converted via Double back to a BigDecimal. For the general case of Number this is fine, but in this case the Number already is a BigDecimal and does not need to be converted in this way.

BigDecimal bigDecimalValue = (number instanceof BigDecimal) ?
    (BigDecimal) number : new BigDecimal(number.toString());

Note that number.toString() is, in my opinion, the better way to convert a Number more generally into a BigDecimal, since the conversion avoids the Double type.
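
A self-contained, runnable sketch of this suggested conversion (class and method names here are illustrative, not part of reactor-extra):

    import java.math.BigDecimal;

    public class ToStringConversionDemo {

        // Suggested conversion: keep BigDecimal instances as-is, otherwise go via toString()
        static BigDecimal toBigDecimal(Number number) {
            return (number instanceof BigDecimal)
                    ? (BigDecimal) number
                    : new BigDecimal(number.toString());
        }

        public static void main(String[] args) {
            System.out.println(toBigDecimal(new BigDecimal("1.00"))); // 1.00 -- scale preserved
            System.out.println(toBigDecimal(42));                     // 42
            System.out.println(toBigDecimal(1.5d));                   // 1.5
        }
    }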

I am fairly sure MonoAverageBigDecimal suffers from the same issue, since it has the same line of code indicated above.

Your Environment

Not really relevant; the above test case is fairly easy to reproduce.

  • Reactor version(s) used: 3.4.3
  • JVM version (java -version): 11
@reactorbot added the ❓need-triage label (This issue needs triage, hasn't been looked at by a team member yet) Sep 6, 2021
@simonbasle (Member) commented

That's a great catch @kiwisincebirth!

After looking into your suggestion to use the BigDecimal(String value) constructor, I agree this seems like the best approach 👍

Would you like to submit a fixing PR for both sum and average?

@simonbasle added the type/bug (A general bug) label and removed the ❓need-triage label Sep 6, 2021
@simonbasle added this to the 3.4.5 milestone Sep 6, 2021
@kiwisincebirth (Contributor, Author) commented

#261

@kiwisincebirth (Contributor, Author) commented

See pull request #261 for the change I proposed.

I have updated all but one test to use verifyResult() instead of verifyBigDecimalResult(). It turns out verifyBigDecimalResult() was actually masking the precision issue, because it uses compareTo() == 0 for its comparisons. Simply changing the tests to use verifyResult() highlighted the stated issue.
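
As a side note, a small standalone sketch of why compareTo()-based assertions hide the lost scale while equals() exposes it:

    import java.math.BigDecimal;

    public class ComparisonDemo {
        public static void main(String[] args) {
            BigDecimal expected = new BigDecimal("2.00");
            BigDecimal actual = new BigDecimal("2.0");

            System.out.println(expected.compareTo(actual) == 0); // true  -- compareTo ignores scale
            System.out.println(expected.equals(actual));         // false -- equals also checks scale
        }
    }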

@simonbasle linked a pull request Sep 7, 2021 that will close this issue
@simonbasle pushed a commit that referenced this issue Sep 7, 2021:
This commit ensures that no lossy conversion is made on a Number that
happens to be a BigDecimal already.

Furthermore, it uses the `new BigDecimal(String)` constructor instead
of `valueOf`, since creating BigDecimal from doubles can sometimes
result in less meaningful values due to double's limited precision.

Fixes #260.
@simonbasle added a commit that referenced this issue Apr 11, 2022:
This commit improves the precision and conversion behavior of `MathFlux`
`sumBigInteger` and `averageBigInteger` methods. These methods better deal with
fractional types by internally summing using a `BigDecimal` rather than a
`BigInteger`. As a result, the only lossy step is at the end when the result is
effectively presented as a `BigInteger`: 
 - `sum`: remaining fractional part of the BigDecimal sum is dropped
 - `average`: the division by `count` is applied, then the result's fractional
 part is dropped

Previously the fractional part would be progressively dropped by implicit
conversion of each incoming element to `long`.

Similar to precision improvement done in #260/#261 and earlier iteration done
in #284.
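
For illustration only, a plain-Java sketch (no reactor types; names are made up for the example) of the difference between the previous per-element truncation and the new sum-then-truncate approach described in this commit:

    import java.math.BigDecimal;
    import java.math.BigInteger;
    import java.util.List;

    public class BigIntegerSumSketch {
        public static void main(String[] args) {
            List<BigDecimal> values =
                    List.of(new BigDecimal("1.6"), new BigDecimal("1.6"), new BigDecimal("1.6"));

            // Previous behaviour (sketch): each element is truncated to long before summing.
            long perElementTruncation = values.stream().mapToLong(BigDecimal::longValue).sum(); // 1 + 1 + 1 = 3

            // New behaviour (sketch): sum exactly as BigDecimal, drop the fraction only at the end.
            BigDecimal exactSum = values.stream().reduce(BigDecimal.ZERO, BigDecimal::add);     // 4.8
            BigInteger result = exactSum.toBigInteger();                                        // 4

            System.out.println(perElementTruncation); // 3
            System.out.println(result);               // 4
        }
    }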