Fix Issue 21601 - std.math : pow(float/double, -2) produces sometimes wrong result #7783
Conversation
Thanks for your pull request and interest in making D better, @berni44! We are looking forward to reviewing it, and you should be hearing from a maintainer soon.
Please see CONTRIBUTING.md for more information. If you have addressed all reviews or aren't sure how to proceed, don't hesitate to ping us with a simple comment.

Testing this PR locally: If you don't have a local development environment set up, you can use Digger to test this PR: `dub run digger -- build "master + phobos#7783"`
Sorry, ignore my last comment.
Doesn't this mean that this change might break user code?
Yes, that is possible. But it will only affect code that is badly designed (like the unittests I had to correct), that is, code based on an implied guarantee that does not exist. With floating point numbers there is always the difficulty of deciding how to cope with rounding errors. IEEE gives a clear standard for what the result of FPU instructions has to be. One can rely on this, and one knows ahead of time that the error is the smallest possible (taking the rounding mode into account). This guarantee cannot be given for code that chains FPU instructions: with every instruction used, a certain amount of error may occur. If you are lucky, the errors cancel out; if not, they add up. This means that the only guarantee such a function can give is that the result will be in a certain interval around the correct result. With the proposed algorithm, the result is always in this interval, while the current algorithm leaves it in some cases; see the issue this PR addresses.
The fact that code is wrongly designed doesn't change the fact that users might be depending on that faulty behavior. There might be projects out there that used this as a feature and rely on this behavior. As such, we cannot simply silently change the semantics of the code (even if we are actually fixing it). We need to somehow issue a deprecation and instruct the user to use the new, improved version. If the behavior is changed only for the [...]
I disagree here. Think of the impact this would have: because -2 is a runtime parameter, we would have to add a deprecation message to the whole `pow` function.

Because I think that neither not fixing, nor using a deprecation cycle, is a good solution, maybe we could do something in between? I think of adding a release note, something like: "We noticed that the implementation of [...]"

What do you think? Would this be OK?

(*) As a side note: this somewhat resembles the deprecation message one gets when using [...]
You make a good point. I think that adding a changelog entry for this should suffice. Please add a changelog entry. |
I'm not convinced by people relying on buggy math being an issue. Buggy math algorithms could cost companies trillions of dollars (just ask @donc). |
…th 64-bit real Required since dlang#7783.
Recently I compared several algorithms for calculating `pow(b, e)` with `b` being a floating point number and `e` an integer. I checked the current implementation, the implementation suggested in PR #7297, the implementation suggested in issue 16200, and the original algorithm from Stepanov quoted in issue 16200, with the intent to replace the current algorithm with the one mentioned in issue 16200.

First, I tested the algorithms for correctness by comparing several randomly generated calls with the exact result calculated with the Linux tool 'bc' and rounded appropriately. The results showed that the current algorithm has a design flaw for exponent -2, and that PR #7297 has further design flaws for exponents -3 and 3.
I ran 250000 tests for float, 100000 tests for double and 7500 tests for real. Tests did not include results that reached a fixed point (NaN, +/-inf, 0.0); the difference has been measured in the number of 'nextUp' calls that were needed to get from the smaller number to the larger.
IMHO, the result clearly rules PR #7297 out.
The issue with the current implementation can easily be fixed by removing the special treatment for -2 (which is what this PR does; the suggested speed optimization is more difficult and left for another PR).
I also replaced the `switch` statement by an `if` statement. With that, the hack around a bug in the `switch` statement (see PR #7294) is not needed anymore, and I removed it too.

Furthermore, I had to fix some unittests for `quantize`: they used `==` for comparing floating point numbers, which luckily worked with the old version, but does not with the new one.