
BUG: Overflow in tan and tanh for large complex values #3010

Closed
wants to merge 35 commits

6 participants

@ewmoore

This is a fix for gh-2321. I've imported the implementation of ctanh from FreeBSD's math library, which handles large arguments correctly. This fixes this bug and the equivalent bug in np.tan.
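
For illustration, the large-argument handling in the FreeBSD implementation boils down to something like the following (a minimal Python sketch of the same idea; the real code in this PR is the C version in npy_math_complex.c.src):

import math

def ctanh_big_x(x, y):
    # For |x| >= ~22, tanh(x) is 1 to double precision, so
    # tanh(x + iy) ~= copysign(1, x) + i*4*sin(y)*cos(y)*exp(-2|x|).
    # Squaring exp(-|x|) avoids the overflow that a naive cosh/sinh
    # formula hits for large x.
    exp_mx = math.exp(-abs(x))
    return complex(math.copysign(1.0, x),
                   4.0 * math.sin(y) * math.cos(y) * exp_mx * exp_mx)

print(ctanh_big_x(1000.0, 0.0))  # (1+0j), where the buggy code gave nan+nan*j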

The problem here is that importing the function into npy_math_complex.c and adding a config check causes us to use the implementation from glibc on Linux, which also has this bug, although it seems to have been fixed in glibc last April (http://sourceware.org/bugzilla/show_bug.cgi?id=11521).

I see several things that could be done here; the easiest is probably to use our version of ctanh unconditionally. There are a multitude of ways to go about that, though: for instance, should I remove the implementation from npy_math_complex.c and just place it directly in umath/funcs.inc, where the buggy version was? Or should I just remove the config check and add a note about it somewhere? Or some other choice altogether.

Some kind of test will also need to be added, but I thought I'd get some input on how to handle this before doing anything else.
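
For concreteness, a spot check of the kind eventually added in this PR might look like this (a hedged sketch in the style of the numpy test suite):

import numpy as np
from numpy.testing import assert_allclose

# Spot checks for gh-2321: large arguments must not overflow.
assert_allclose(np.tanh(1000 + 0j), 1.0 + 0.0j)  # was nan+nan*j
assert_allclose(np.tan(0 + 1000j), 0.0 + 1.0j)   # the same bug, transposed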

@ewmoore

Actually, I don't think that tanh has any tests at all for real or complex values. This also appears to be true for some of the other ufuncs.

@charris
Owner

Looks good at a first glance. Some of the indentation looks off; the numpy standard is no hard tabs and 4-space indentation. There is also a long line or two that could maybe be wrapped.

As to your questions, I don't know the answers ;) I'm not sure there are any real drawbacks to having our own version always used except maybe speed. Working around compiler/library bugs is always a bit unsettling. There used to be the WTF-MathExtras from Apple dealing with similar problems. We could maybe run some accuracy checks during configuration, but I suspect that would be the start of a long, twisty road.

@ewmoore

I've been looking at this some more, and while I like the idea of just trying to call the system ctanh, if it exists, to check whether it has this bug, I suspect requiring that will break cross-compiling. Aren't the windows builds cross compiled? Is it worth detecting that we are cross-compiling then? This seems, as charris suggested, to be a little fraught. I'll note too that numpy's distutils includes a warning against running code on the target machine as part of the configure checks.

The implementation from FreeBSD that I've pulled in seems to have issues when the result will be a denormal, as was corrected in glibc, although I think I understand where that comes from and might be able to fix it. FWIW, the version in python's cmath module also suffers from this problem. For instance:

In [1]: import numpy as np

In [2]: import cmath

In [3]: np.tanh(372+1j)
Out[3]: (1+1.4821969375237396e-323j)

In [4]: cmath.tanh(372+1j)
Out[4]: (1+1.5e-323j)

The correct answer, according to Mathematica or Wolfram Alpha at least, is 1+1.3952e-323j (everything smaller than ~1e-308 is denormal).

The other thought that I had is: do we really want to continue to use our own version of the complex functions when they are available in libm? There are a number of them in umath/funcs.inc.src for which we don't even check for a platform version. I'd propose moving all of them to npymath with checks, and probably importing the FreeBSD code for the ones we currently have custom code for. I suppose the drawback to that is possibly changing the behavior of these functions in corner cases, although I suspect that all is not perfect there already.

Thoughts?

@charris
Owner

That sounds like a good idea to me. The npymath library hasn't been worked on much since it was created several years ago and could use a little TLC.

@pv
Owner

I don't think you can cross-compile Numpy, at least with Python's distutils, so that doesn't need to be a worry. The windows binaries are built either natively or on Wine, I believe.

+1 from me for using platform complex functions, if available and determined to be reasonable. As a data point, our complex functions used to have accuracy issues (and some of them possibly still do).

@ewmoore

This is going to fail the travis build again. I think that the glibc versions currently common on Linux don't have complex functions as good as ours, even if the latest release has equally good implementations. So I'm going to disable using the functions from libc. The machinery to use them is in place and worth keeping, though. There are also a number of improvements I'll import to our versions from those used in python's cmath. There is no reason we shouldn't be as good as python.

@njsmith
Owner

Those improvements sound excellent. But I'm still worried about this plan of just disabling the platform functions, because even if we leapfrog past the platform versions now, they may pass us again in a few years, after we've all forgotten about this...

@ewmoore

I agree with you in principle about a few years from now. Practically, I'm not entirely sure what the answer is since we (or perhaps I should say, I) don't really want to copy the test suite to C just to check for these things. I can write some spot checks for the tests that fail for me and include those, which I guess is my plan for now.

For reference, the functions that cause test failures against eglibc 2.13-38 (debian wheezy) are: cpow, cacos, casin, catan, cacosh, casinh, catanh.

@ewmoore

As a start for checking the system-provided functions, I've written some tests for clog and ctanh which include everything from Annex G of the C99 standard and whatever tests are in our test suite. My current work is available here.

Attaching this to the numpy build doesn't seem so straightforward, though. The tests need to run after we check whether the functions exist, so a fine-looking place to me is around line 200 of core/setup.py. However, if I use config.try_run there is no way to get either the output of the run or its return code. So I'm a little stuck plugging this into the numpy build. Hopefully someone who knows more about distutils can provide some guidance.

@charris
Owner

@cournape Any suggestions?

@cournape
Owner

There is no obvious good solution. We generally want to avoid running configuration tests; OTOH the risk of forgetting about our implementation is real. Maybe we could check explicitly for the glibc version?

@ewmoore

Sorry I disappeared on this for so long. If I can get some pointers on how to find the glibc version at compile time, I'm happy to try and insert a check.

Also, I rebased this PR onto current master.

@juliantaylor

what's the reason for trying to avoid running configuration checks? It's pretty well suited for libm tests, as libm is always available in the system paths.
Checking functions individually is preferable to a version check in this case.

@ewmoore

For future reference commit 56e3b7d95a3906ec932d7ded39ccc1a1820e1a64 in this pull request fixes #2354.

@ewmoore

Okay. I've made a stab at detecting the glibc version, and then adjusting.

I'm doing this using ctypes to call a function unique to glibc. One would hope that we could just use python's platform.libc_ver() to find this information; however, it doesn't work, and returns 2.7 instead of the correct 2.17 on my system. Obviously, for this to be a complete solution, information for some more versions needs to be included, and possibly equivalent functions on win and mac.

Note that libc.so.6 is the Linux Standard Base name for libc most places (x86, x86_64, PPC32, PPC64, S390, S390X), but it's libc.so.6.1 on IA64. So I guess that issue would need to be handled too.

The Travis build should pass since it uses glibc 2.15, and for anything but 2.17 I'm forcing nearly all of our own functions right now.
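
For reference, the ctypes probe described above can be sketched as follows (illustrative only; I'm assuming gnu_get_libc_version() as the glibc-only function, and it of course exists only on glibc systems):

import ctypes

# gnu_get_libc_version() is unique to glibc, so this both detects glibc
# and reports its version; platform.libc_ver() gets this wrong (see above).
libc = ctypes.CDLL("libc.so.6")  # libc.so.6.1 on IA64
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print(libc.gnu_get_libc_version())  # e.g. b'2.17'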

@ewmoore

I put in a check on the glibc version based on @cournape's comment above.

I can call libm.tanh and friends using ctypes, but if we are going to add such checks, why not just compile something and do the checks there? That would eliminate trying to figure out the library name or which library to use, and push that to the compiler, which already knows how to do it. This does bring me back to where I was earlier, though (see my earlier comment): I don't see a way to use distutils to compile, link, and run something that also allows me to retrieve either the output or a return code.

I don't have access to the MKL to know, although unless it comes with a drop-in replacement for libm, I don't think that we do. Might be a nice thing to do, though. Using MKL for trig functions looks like #671.

@ewmoore

We need to make a decision on what we want to do here. As I see it, we have a number of options:

  1. Punt on using any additional system C99 complex functions beyond those we were using previously. This has been done before (#1484), since there isn't an easy answer here. I can pull the non-build-related changes out and submit a different pull request if this is our choice.

  2. Add build time checks in C. I wrote some checks for clog and ctanh (see linked gist above) but there isn't an obvious way to insert this into the build system.

  3. Add build time checks in python using ctypes.

  4. Add a check for the glibc version (and similar on other platforms) and use whitelists. (This is currently partially implemented in this pull request.)

I don't see any benefit to option 3. It is just as much work as option 2, with drawbacks such as needing a list of possible libm names.

Personally, I vote for option 4. It is the least invasive and improves on the current situation.

Also, I pushed a commit that (finally) makes travis green on this pull request.

@juliantaylor

-1 on option 4; maintaining a whitelist is a pain.
For me option 2 is the best. I'll check how to do it in the build system; off the top of my head I don't see an issue.

@juliantaylor

config.try_run(body, libraries=["m"]) in check_math_capabilities of numpy/core/setup.py seems to work fine.
You can't get the exit code directly, but you can tell whether it exited with 0 or not, so you have to run it once for every function.
e.g.

body = obody.replace("PYTESTFUNCTION", "test_clog")
if config.try_run(body, libraries=["m"]):
    moredefs.append("HAVE_CMATH_GOOD_CLOG")

with
return PYTESTFUNCTION() != 0 ? 0 : 1;
in main() of your gist

btw your gist reports that clog has bad branch cuts in glibc 2.17; has this been filed as a bug against glibc?

@ewmoore

It has not been filed as a bug against glibc, but I wouldn't yet trust that the code in the gist is correct; it has various issues as it stands. Give me some time and I'll see what I can do about option 4.

@juliantaylor

why?

edit: mixed up options 2 and 4; I agree option 4 is not good

@ewmoore

@njsmith, that was my understanding from earlier. But what I don't know is what will be accepted. An earlier comment from @cournape suggested checking the glibc version. You've suggested adding build-time checks in python using ctypes. Hence my "We need to make a decision..." post above.

@juliantaylor

I wouldn't use ctypes; during configure you can build and run your test file easily. Then you define preprocessor macros and use those to choose during the numpy build.

Doing it at runtime would be more flexible, but I doubt many people swap out the math library during the lifetime of a system.

@ewmoore

Sorry for the confusion. I mixed up my options. I'll see what I can do about checking at build time in C.

@ewmoore

A quick status update:

I've updated the gist with my current tests. Everything from C99's Annex G is in there, as well as tests for loss of precision for small values in casin, catan, casinh, catanh. There are still a bunch of things from the numpy test suite that I need to add. I haven't integrated it into the build either.

Running what I have against eglibc 2.17 in double precision, I find that only cacosh, ccos, csin, ccosh, csinh, clog and csqrt pass. cacos, ctan, ctanh, and cexp fail on particular Annex G tests. casin, catan, casinh, catanh all fail various parts of our loss-of-precision tests.

I'm of two minds about the Annex G special value tests. On one hand, they aren't required by C99, and how strongly do we care about some of them? On the other hand, passing all of them is a good sign that the functions were probably carefully implemented. I also seriously doubt that all of our functions in npy_math can pass the full set.

(Note that I've ignored checking the sign of NaNs in my tests.)
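
To make the flavor of these checks concrete, here are a few of the Annex G cases for ctanh, probed through np.tanh (an illustrative sketch; the build-time tests themselves are written in C):

import numpy as np

for z in [complex(np.inf, 1.0),   # ctanh(+Inf + iy) -> 1 + i*0*sin(2y)
          complex(1.0, np.inf),   # ctanh(x + iInf)  -> NaN + iNaN
          complex(np.nan, 0.0)]:  # ctanh(NaN + i0)  -> NaN + i0
    print(z, '->', np.tanh(z))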

@ewmoore

I have pushed a complete set of build-time checks, including everything in the numpy test suite and Annex G of C99. I'm not checking the sign of NaNs, and I'm not checking for disallowed floating-point exceptions in the Annex G tests.

I'm sure there are some issues with portability and with long double. At the least, it needs some L's added in a bunch of places. I haven't updated the bento build. But it's good enough to start getting some feedback.

@juliantaylor
Owner

have you also done some benchmarks of your correct functions against libc's broken ones?
I think you should definitely bring the issues you find to upstreams like glibc/eglibc, especially if your functions are not significantly slower; upstreams should be willing to accept patches or fix them.
That way many more people than just numpy users profit from your work.

@ewmoore

Rebased, to see if that will convince travis to run.

@ewmoore

Okay this is all green now.

I've also run my tests against the functions defined in npymath: clog, cexp, csqrt, ctan, and ctanh pass (at least for doubles); the others fail. But since the rest are about as simple implementations as one could imagine, this isn't surprising. I'm sure there are other existing implementations we can import.

I'm certainly willing to file bugs against glibc and even to submit patches, but, no I have yet to do any of that.

@ewmoore

Alright, this should be ready to review and merge now. It's gotten much larger; if requested, I can try to break it out into cleaner, smaller chunks to submit separately.

All of the non-trivial C99 complex functions are checked for, and if they exist we run a set of compile-time checks to evaluate whether or not to use them. To complement this change, most of the complex functions in npymath have been updated or replaced based on the versions in FreeBSD HEAD or cpython. This ensures that if the platform functions can't pass our tests, we can use a version that we know will.

Really unrelated, but here anyway, are commits that make the frexp and ldexp ufuncs generated like the others rather than created by hand. These functions are also documented properly now.

Possible issues that I see are the portability of test_c99complex.c, that the long double versions of the functions I've changed have only been tested against Intel 80-bit long doubles, and that we probably don't want test_c99complex.c where I've placed it.

@ewmoore

Pushed two commits that allow these changes to compile and pass np.test() on windows using MSVC.

numpy/core/src/npymath/npy_math_complex.c.src
@@ -1,10 +1,13 @@
 /*
+ * vim: syntax=c
+ *
  * Implement some C99-compatible complex math functions
  *
  * Most of the code is taken from the msun library in FreeBSD (HEAD @ 30th June
  * 2009), under the following license:
@njsmith Owner
njsmith added a note

This comment should perhaps be updated.

@njsmith
Owner

I didn't read through everything carefully but it seems generally reasonable.

I'll leave the question of where test_c99complex.c should go to someone with more of an opinion :-).

Are there any platforms in particular you think this would be good to test on? We could probably find volunteers...

Am I understanding right that there are two sets of tests for C99 numerical behaviour here: one set in test_c99complex.c that we run at compile time, and a second set that we run as part of the test suite? I'm a little worried about these getting out of sync, though of course it is good to have end-to-end tests on the ufuncs. Would it be possible to also run the test_c99complex.c tests against the selected functions, just to make sure that we're holding ourselves to the same standards that we're holding the system libraries to?

@ewmoore

Are there any platforms in particular you think this would be good to test on? We could probably find volunteers...

Well, there are two issues: the long double constants are for 80-bit, and if anybody actually has anything else they'll be wrong. Some of these can be replaced by constant expressions like sqrt(DBL_MAX)/4, etc.
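
Sketched in Python for illustration (the real constants would be C expressions evaluated per precision):

import numpy as np

# A threshold like sqrt(MAX)/4 adapts to the precision in use, unlike a
# hard-coded 80-bit long double constant:
for t in (np.float32, np.float64, np.longdouble):
    print(t.__name__, np.sqrt(np.finfo(t).max) / 4)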

_real_part_reciprocal(f, ,l) is the only function that comes to mind where I'm using the underlying representation explicitly. I've tried to rewrite things in a generic manner as much as possible. This function will be broken if HAVE_LDOUBLE_DOUBLE_DOUBLE_BE is defined, which I guess is a POWER or PowerPC thing, since 1) the underlying representation is entirely different and 2) the macros for manipulating it are in ieee754.c.src. To be fair, unless HAVE_COPYSIGNL is defined we just use the double version for long doubles, which could also cause precision issues here.

What platforms those would be, I dunno. Let's say Mac and maybe SPARC? The latter is certain to be different.

Am I understanding right that there are two sets of tests for c99 numerical behaviour here: one set in test_complex99.c that we run at compile time, and a second set that we run as part of the test suite? I'm a little worried about these getting out of sync, though of course it is good to have end-to-end tests on the ufuncs. Would it be possible to also run the test_complex99.c tests against the selected functions, just to make sure that we're holding ourselves to the same standards that we're holding the system libraries too?

Yes, that's right. Currently the compile-time tests are more stringent and more complete than those in the test suite, although both could probably be improved.

As part of improving the npy_c* functions I did run test_c99complex.c on our functions. As of today, on AMD64 Linux, they pass. To do this I've patched numpy to ensure that our functions are used, and then explicitly run the tests. The necessary patch and a script I've been using are here: https://gist.github.com/ewmoore/6939178

Apply the patch, then run ./check PREC, where PREC is FLOAT, DOUBLE, or LONGDOUBLE, to run the tests. If no errors are emitted, they passed.

I'm out until Tuesday, now, but I'll address your other comments then.

@charris
Owner

MSVC long double is double, and SPARC long double is quad (128-bit) precision.

@ewmoore

Okay, I've added these tests to the test suite.

Next I'll need to move numpy's macros that normalize the floating-point exception checks somewhere else so that I can include them. (They're in ufuncobject.h right now.) Until then, this is broken everywhere but linux, more or less.

@ewmoore

Ignore that last commit. It's wrong. Something else is rotten on windows...

@ewmoore

This is now the correct fix for npy_hypot.

@ewmoore

@njsmith, I believe I've addressed all of your comments now.

@ewmoore

One final concern: numpy/core/src/umath/funcs.inc.src says:

/*
 * Don't pass structures between functions (only pointers) because how
 * structures are passed is compiler dependent and could cause segfaults if
 * umath_ufunc_object.inc is compiled with a different compiler than an
 * extension that makes use of the UFUNC API
 */

Does this then imply that we don't want to export these complex functions in npy_math.h unless they also pass only pointers? I'm guessing no, since some have been there since late 2009.

@juliantaylor

this conflicts with #3974, which cleans up the tangling of fenv and ufuncs; sorry, I wasn't aware you changed this too.

@ewmoore

I've rebased this so that it will apply cleanly again. The hypot fix isn't here anymore, since it was taken in #3992, and I've dropped my changes to fenv et al.

#3974 added some functions to check the floating-point status to npymath. While this is certainly where they belong, I'd propose moving them out of ieee754.c and into their own file. For numpy this doesn't make any difference, but the compile-time tests added here need access to these functions too, and the easiest way to do that is to move them to a different file which I can then include. Nicely, they use bare C types (i.e. int instead of npy_int) right now, which I would also propose keeping. Any thoughts?

@ewmoore

I went ahead and moved those functions to a different file. I've updated everything, so this should be clean and ready to go from my end again.

@juliantaylor
Owner

have you forwarded the issues to some free libc maintainers?

@charris
Owner

@ewmoore Needs another rebase.

@ewmoore

I'll get to this sometime on Saturday.

ewmoore added some commits
@ewmoore ewmoore BUG: Overflow in tan and tanh for large complex values
gh-2321 (trac 1726).

np.tanh(1000+0j) gave nan+nan*j, should be 1.0+0j.  The
same bug was present in np.tan(0+1000j).

Bug fixed by replacing our complex tan and tanh implementation
with one from FreeBSD.
9e7be8c
@ewmoore ewmoore STY: npymath: tabs -> spaces, long lines f442ead
@ewmoore ewmoore ENH: Check for, and use all C99 complex functions if available
Previously, a number of functions operating on complex numbers from C99
were checked for during the configuration stage.  If they existed, they
were used.  This commit extends this to all of the C99 complex
functions.

All of our local implementations are now in npymath, instead of being
spread between npymath and umath/funcs.inc.src. Functions that numpy has
but C99 does not are still in funcs.inc.src.
d62c590
@ewmoore ewmoore ENH: npymath: Use a better complex division algorithm.
Using the same technique used for dividing complex arrays, copied from
umath/loops.c.src.
af5ff55
@ewmoore ewmoore MAINT: umath: generate the ldexp and frexp ufuncs like all the others. e6b99c9
@ewmoore ewmoore MAINT: umath/npymath: Add ldexp and frexp to npymath
npy_frexp and npy_ldexp are used unconditionally in umath. (i.e.
HAVE_LDEXPF, etc. no longer appears in umath.)
0fdaa54
@ewmoore ewmoore ENH: npymath: handle clog edge cases more carefully. 2d33f7c
@ewmoore ewmoore ENH: evaluate c99 complex funcs at build time
For all of the C99 complex functions except creal, cimag, cabs and
carg (which are either trivial or defined by a C89 function) we now run
a set of build-time tests to evaluate whether we want to use the system
version of these functions or our own.  These tests involve all of the
special values for these functions defined in Annex G of the C99
standard, as well as any tests that already exist in the numpy test
suite. These tests do not consider the sign of NAN.
50d6720
@ewmoore ewmoore BUG: NPY_LOGE2 is log(2) d504ea1
@ewmoore ewmoore BUG: Test against +/-TANH_HUGE
This fixes a bug introduced when adapting the FreeBSD implementation:
they play some games extracting parts of the double and comparing them
as unsigned ints, and we don't, since that doesn't generalize to
float and long double the way we handle them in npy_math_complex.c.src.
5362b35
@ewmoore ewmoore TST: check for tanh(1000+0j) == 1 + 0j etc.
This is a test for the fix for gh-2321.
e61e74f
@ewmoore ewmoore BUG: Fix up some corner cases in npy_cexp
The failing C99 Annex G values for cexp were:
cexp(Inf, -0) == (NAN, -0)
cexp(Inf, Inf) == (+-Inf, NAN) + raises FE_INVALID

The first returned (NAN, 0) and the second was correct but failed to
raise the invalid floating-point exception.
852cd10
@ewmoore ewmoore MAINT: remove debug print statement 0f57b42
@ewmoore ewmoore MAINT: fix two typos in test_c99complex.c eb06557
@ewmoore ewmoore ENH: Be more careful with large real parts in npy_cexp
This change is adapted from k_exp.c in FreeBSD.

The idea is to calculate a scaled exponential, which is then multiplied by
cos(imag) and sin(imag) and rescaled to the appropriate value.  This means
that for float64, things like np.exp(complex(729, 1e-16)) which used to give
inf+infj now gives (inf+3.987285262042597e+300j)
77705ab
@ewmoore ewmoore ENH: Import the ccosh/ccos implementation from FreeBSD
The code from FreeBSD was lightly adapted to fit with the numpy style
and to correct an Annex G failure. (ccosh(Inf + 0j) should be (Inf + 0),
and symmetric).

With this commit, npy_ccosh(f) and npy_ccos(f) pass all of the tests in
test_c99complex.c.
e574e29
@ewmoore ewmoore ENH: Import the csinh/csin implementation from FreeBSD
The code from FreeBSD was lightly adapted to fit with the numpy style.

With this commit, npy_csinh(f) and npy_csin(f) pass all of the tests in
test_c99complex.c.
3710481
@ewmoore ewmoore ENH: Import the catanh/catan implementation from FreeBSD
The code from FreeBSD was lightly adapted to fit with the numpy style.

An incorrect test for the branch cuts of both arctanh and arctan was
corrected in both test_umath.py and test_c99complex.c.

With this commit, npy_catanh(f) and npy_catan(f) pass all of the tests
in test_c99complex.c.
a837853
@ewmoore ewmoore BUG: printf requires a literal for the first argument.
Not sure why this shows up as a warning on py2.x but as an error on
py3.x.
36e0d8b
@ewmoore ewmoore BUG: Py3 '-2j' is (NZERO - 2j) causes test failures
On python2.x evaluating '-2j' produces complex(0,-2), however, on
python3, it instead produces complex(NZERO, -2).  This ends up causing
a test for the complex arctan to fail.

This happens I suppose for the same reason it does in C.  The expression
-2j ends up being evaluated as (-2 + 0j) * (0 + 1j) == (-0 + -2j).
330b54a
@ewmoore ewmoore ENH: Import the cacos,casin,cacosh,casinh implementation from FreeBSD
The code from FreeBSD was lightly adapted to fit with the numpy style.

Several incorrect branch cut tests were fixed. (All of these for all of
the arc* functions that were changed have to do with getting the correct
branch cut for (x,0) or (0,x).)

With this commit npy_cacos(f), npy_casin(f), npy_cacosh(f), and
npy_casinh(f) pass all of the tests in test_c99complex.c.
def6327
@ewmoore ewmoore TST: Enable signed zero branch cut tests
Now we have our own implementations that can pass these tests and we
only link against the system versions if they can also pass.
Additionally, run the branch cut tests for complex64.
c8f13ee
@ewmoore ewmoore BUG: fix cpow tests to match what npy_cpow does.
Annex G of the C99 standard does not define any special values for cpow,
allowing the implementation to be as simple as cpow(z,w) =
cexp(w*clog(z)).  We have a large number of tests for our cpow function,
both in the test suite and in test_c99complex.c. (There are actually
more in test_c99complex.c, since I'm doing all combinations from
TestCpow::test_scalar in test_umath_complex.py.)

As of right now, these tests probably mean that we will never use a
system cpow implementation. This is fine, since we handle a large number
of edge cases in a sane way and at least glibc does not.

With this commit all 48 of our complex functions pass test_c99complex.c.
eab5228
@ewmoore ewmoore STY: long lines, whitespace 2b5f1ad
@ewmoore ewmoore MAINT: remove gccisms and unbreak MSVC build. 1fb1b74
@ewmoore ewmoore MAINT: fix ldexp/frexp changes to compile with MSVC.
Regenerated loops.h.
be42cae
@ewmoore ewmoore MAINT: npy_a(exp,log,sqrt,fabs) don't really exist. ac075b2
@ewmoore ewmoore ENH: Run the test_c99complex.c tests with the test suite too.
Some of the other tests in test_umath_complex.py are duplicates of these
tests.
79afefd
@ewmoore ewmoore BUG: Fix tests to compile under MSVC cfe9fcc
@ewmoore ewmoore MAINT: pick sunmath.h from npy_math.h cbc97fa
@ewmoore ewmoore MAINT: Update note about FreeBSD.
The names and copyright dates were updated earlier.
863d97d
@ewmoore ewmoore MAINT: make sure npy_math builds when long double is double double 2348023
@ewmoore ewmoore MAINT: move npy_(get,clear)_floatstatus() to their own file. 5771dae
@ewmoore ewmoore BUG: Use the floating-point status functions in build-time tests 566c15e
@ewmoore ewmoore MAINT: move test_c99complex.c to npymath 503047b
@ewmoore

Rebased.

@ewmoore

So this failed the Travis tests since they now try to do a build with bento.

@charris
Owner

Current state on Fedora 21.

In [2]: np.tanh(372+1j)
Out[2]: (nan+nan*j)

Looks like the library function is not used. gcc 4.9.2 gives 1.000000000000000000+0.000000000000000000i, where denormals look to be flushed to zero, probably a side effect of how the function compiles. The code for that is:

#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(int argc, char **args)
{
    double complex z = 372.0 + 1.0 * I;
    double complex res = ctanh(z);

    printf("%20.18f%+20.18fi\n", creal(res), cimag(res));
    return 0;
}

Do we have a policy on denormals? Do we need one?

I haven't followed this PR, but we should bring it to conclusion.

@charris
Owner

@ewmoore IIRC, the changes to the code generators are already merged and the documentation for frexp and ldexp moved.

I managed to get these patches applied without too much work, but no guarantee that the result was correct. There was one test failure:

FAIL: test_ldexp_overflow (test_umath.TestLdexp)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/charris/Workspace/numpy.git/build/testenv/lib64/python2.7/site-packages/numpy/core/tests/test_umath.py", line 548, in test_ldexp_overflow
    assert_equal(ncu.ldexp(2., imax), np.inf)
  File "/home/charris/Workspace/numpy.git/build/testenv/lib64/python2.7/site-packages/numpy/testing/utils.py", line 322, in assert_equal
    raise AssertionError(msg)
AssertionError: 
Items are not equal:
 ACTUAL: 1.0
 DESIRED: inf
@charris
Owner

Rebased and squashed in #5510.

@charris charris closed this
@charris
Owner

I'd be happy to just use it when there are no system-supplied functions. GCC seems OK now. I think the thing to do is run the improved tests now and then and submit bug reports to the relevant projects. If they can't fix things, then we might consider using these functions instead.

@charris
Owner

One thing that does bother me is that the system complex functions do not seem to be detected, at least on Fedora 21.

@ewmoore

I guess this discussion should move elsewhere, but do you mean that the system functions are not detected here or with master? Do the system functions pass the tests?

@charris
Owner

With master. I've got that figured out now, which I assume you already did ;) I'm working through things and will probably take your work and modify it a bit.
