Improve coverage and rigour of test.test_math #70228
In many cases, test_math does not exercise the functions defined by the math module other than at a few values that need special handling. Where it does test a function with values representative of the domain, the accuracy demanded is often loose in one place and tight in another. By comparison, test_cmath uses more interesting values and a rational approach to accuracy. In most implementations, math simply proxies an expertly built (and tested) platform library or firmware, so it is unlikely we are the victims of mathematical inexactitude. We observed this as a problem in the Jython project, where failures in cmath were eventually traced to naive (but passing) implementations of some functions in math. For Jython, we now supplement test_math with stronger tests, more like those in test_cmath. So this issue is really an offer to fold that work back into the CPython test_math, to strengthen testing for all.

Coverage: the original report tabulated the arguments exercised per function (sqrt, sin, exp, sinh, ...); the three rows of that table were positive, negative and "special" arguments, and "..." means the list is abridged here.

Accuracy: in the attached program, in the accuracy section, I wrap most of the functions so as to fuzz their results by plus or minus 2e-6, and test_math still passes them (a minimal sketch of the idea follows below). test_cmath uses a relative measure of error (so many ulps of the expected value), which is worth emulating.

If people think it better, coverage and accuracy can be tracked as separate issues.
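A minimal sketch of the fuzzing idea described above, for illustration only (the attached program is the real demo; the monkey-patching of math.sqrt and math.sin here is purely hypothetical):

```python
import math
import random

def fuzz(f, absolute_error=2e-6):
    """Wrap a one-argument function so its result is perturbed by up to
    +/- absolute_error.  A test suite that genuinely pins down accuracy
    should start failing once the wrapped functions are in place."""
    def wrapper(x):
        return f(x) + random.uniform(-absolute_error, absolute_error)
    return wrapper

# Illustrative only: perturb a couple of math functions, then run test_math.
math.sqrt = fuzz(math.sqrt)
math.sin = fuzz(math.sin)
```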
Here is a patch that improves coverage and addresses the uneven accuracy. Required accuracy is now specified in ulps. Mostly I have chosen 1 ulp, since this passed for me on an x86 architecture (and also ARM), but it may be too ambitious. I have also responded to the comment relating to erfc. I found I could not contribute the code I used to generate the additional test cases to Tools/scripts without failing test_tools (it complained of a missing dependency: the generator uses mpmath).
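For context, a hedged sketch of how such reference values can be produced with mpmath (the actual generator is attached to the issue; the function, arguments and output format here are only assumptions, loosely modelled on Lib/test/math_testcases.txt):

```python
from mpmath import mp, mpf, erfc

mp.prec = 200   # far more working precision than the 53 bits of a double

# Illustrative arguments; the attached generator chooses its own.
for i, x in enumerate([0.25, 1.0, 5.0, 27.2]):
    expected = float(erfc(mpf(x)))   # round the high-precision result to the nearest double
    print("erfc{:04d} erfc {!r} -> {!r}".format(i, x, expected))
```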
Big +1 for the idea, and thank you for doing this work. I can't promise to find time to do a thorough review in the near future (though I'll try).
Thanks for the prompt acknowledgement and for accepting this for review. I have updated the coverage & tolerance demo program. Usage is in the comments (in v3). I have also added the program I used to generate the extra test cases (it needs mpmath -- easier to get working than mpf in the original Windows/Jython environment).
I finally have some time to look at this. I've double-checked all the new cmath test cases against MPFR (via bigfloat), and they all look good. I plan to apply this in two stages: (1) apply the new cmath test cases, (2) apply the test_math changes, and (1.5) watch for buildbot failures in between.
New changeset 1017215f5492 by Mark Dickinson in branch 'default':
Mark, many buildbots are unhappy. For example:

======================================================================
Traceback (most recent call last):
  File "/root/buildarea/3.x.angelico-debian-amd64/build/Lib/test/test_math.py", line 1106, in test_testfile
    self.ftest("%s:%s(%r)" % (id, fn, ar), result, er)
  File "/root/buildarea/3.x.angelico-debian-amd64/build/Lib/test/test_math.py", line 194, in ftest
    (name, value, expected))
AssertionError: cosh0063:cosh(709.7827) returned 8.988349783319008e+307, expected 8.988349783319007e+307
Thanks, Ned. (And apologies.) The test_math tests are too strict in this case, which is one of the issues that Jeff's full patch fixes. It looks like my strategy of applying the patch in two pieces isn't going to work. I'll revert the change for now, and have another go at this tomorrow.
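To see why a fixed absolute tolerance cannot work at this magnitude, here is a small illustration (only the expected value is taken from the failure above): one ulp near 9e307 is on the order of 1e292, so the two printed cosh values differ only in the last representable place.

```python
import math

expected = 8.988349783319007e+307   # expected value from the cosh0063 failure

# For a double x = m * 2**e with 0.5 <= m < 1, adjacent doubles are
# spaced 2**(e - 53) apart, i.e. one ulp.
_, e = math.frexp(expected)
print(math.ldexp(1.0, e - 53))   # roughly 1e292
```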
Reverted in https://hg.python.org/cpython/rev/4389aa3507c5
Mark: Thanks for validating the additional cases so carefully. If you still want to apply it in stages, then I suppose the change to the comparison logic could go first (untested idea), although that's also where I could most easily have made a mistake.
iss26040.patch didn't apply cleanly to master (because of the recent math.tau addition). Here's an updated patch. |
I've reviewed the test_math portion of the patch (which looks good, thank you!), and made a few revisions in iss26040_v3.patch.
One more version: increased the default ulp_tol to 5 everywhere, and made some minor style fixes in nearby code.
Mark: Thanks for doing my homework. Points 1 and 3 I can readily agree with. I must take another look at to_ulps() with your patch applied locally. I used the approach I did because I thought it was incorrect in exactly those corner cases where you prefer it. I'll take a closer look.
Ah, cunning: I can make sense of it in hex.

>>> hex(to_ulps(expected))
'0x3ff0000000000000'
>>> hex(to_ulps(got))
'0x3feffffffffffffc'
>>> hex(to_ulps(got) - to_ulps(expected))
'-0x4'

... and what you've done with ulp then follows. In my version a format like "{:d} ulps" was a bad idea when the error was a gross one, but your to_ulps is only piece-wise linear -- large differences are compressed. I'm pleased my work has mostly survived: here's hoping the buildbots agree. erfc() is perhaps the last worry, but math & cmath pass on my machine.
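For reference, a to_ulps() along the following lines is consistent with the hex values quoted above (a sketch, not necessarily the exact code in the applied patch): it reinterprets the bits of a double as a 64-bit integer so that adjacent floats map to adjacent integers, with a fix-up for negative values so that ordering is preserved across zero.

```python
import struct

def to_ulps(x):
    """Map a non-NaN float to an integer such that adjacent floats map
    to adjacent integers; the difference of two mapped values is then
    the number of floating-point steps between them (their ulps gap)."""
    n = struct.unpack('<q', struct.pack('<d', x))[0]
    if n < 0:
        # Negative floats have the sign bit set, so their raw bit
        # patterns sort in reverse; reflect them to restore ordering.
        n = ~(n + 2 ** 63)
    return n
```

Because the difference counts representable doubles between the two values, a gross error spanning many binades still reports as a finite ulp count -- the "compression" of large differences mentioned above.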
New changeset e3dbe8b7279a by Mark Dickinson in branch 'default':
iss26040_v4.patch applied. I'm watching the buildbots; if everything looks good there, I'll close the issue.
Buildbots seem happy (or at least, no more unhappy than before). Closing.
Not quite true: test_math and test_cmath are failing for the x86 Tiger 3.x buildbot:

======================================================================
Traceback (most recent call last):
  File "/Users/db3l/buildarea/3.x.bolen-tiger/build/Lib/test/test_math.py", line 1190, in test_testfile
    '\n '.join(failures))
AssertionError: Failures in test_testfile:
  tan0064: tan(1.5707963267948961): expected 1978937966095219.0, got 1978945885716843.0 (error = 7.92e+09 (31678486496 ulps); permitted error = 0 or 5 ulps)

======================================================================
Traceback (most recent call last):
  File "/Users/db3l/buildarea/3.x.bolen-tiger/build/Lib/test/test_cmath.py", line 398, in test_specific_values
    msg=error_message)
  File "/Users/db3l/buildarea/3.x.bolen-tiger/build/Lib/test/test_cmath.py", line 147, in rAssertAlmostEqual
    '{!r} and {!r} are not sufficiently close'.format(a, b))
AssertionError: tan0064: tan(complex(1.5707963267948961, 0.0))
Expected: complex(1978937966095219.0, 0.0)
Received: complex(1978945885716843.0, 0.0)
Received value insufficiently close to expected value.

It's probably not worth trying to fix this for such an ancient version of OS X. Re-opening; I'll aim to skip the test on the platform.
Opened bpo-27953 to track the issue with tan on OS X Tiger. Re-closing here.