test failure in 32 bit due to bad rounding in set_precision #68
Comments
Thanks for the report, Gonzalo. I'll make the suggested changes, though I do not have a 32-bit machine to test it on. I'll do it via a pull request to the master branch and would appreciate your testing it when I do.
Sure. If it is of any help, the patch I'm using right now is below; with it, all tests pass. I can also do a PR if you want.
Please test the constants branch. Instead of putting `0.3L` into the code in a few places, I used a `const double` set to a higher-precision value.
All tests pass now, thanks!
The first test failure is this:

There are others, but it seems they all arise from `set_precision(70)`, which evaluates `log(0.3*70)` as 20 instead of 21. I suppose the issue is that 0.3 rounds up in double precision (which happens in 64-bit) but rounds down in single precision (which happens in 32-bit).

Workaround: replace `0.3` by `0.3L` so it's always a `long double`. That's enough to make all tests pass for me in 32-bit.

Might as well change it to `0.30103L`, which is more accurate; to `0.3010299956639812L`; or to `M_LN2/M_LN10`.