The intended output was obviously 0.5 (where numpy.linalg.norm gives 0.0), and that is what is produced on most, but not all, systems. The reason is that snrm2 appears to use double precision internally in most BLAS implementations, but not in all of them. Note that test_overflow() for this implementation of norm() passes on all systems, so even where test_stable() fails, the behavior of norm() is still better than that of numpy.linalg.norm().
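The underflow problem a stable norm avoids can be sketched roughly as follows; the input vector below is illustrative only (not the actual data from test_stable()), and scaled_norm() is a hand-rolled stand-in for the scaling that reference *nrm2 routines perform:

    import numpy as np

    def scaled_norm(x):
        # Overflow/underflow-safe Euclidean norm: factor out the largest
        # magnitude before squaring (the idea behind the reference *nrm2).
        x = np.asarray(x)
        m = np.abs(x).max()
        if m == 0:
            return x.dtype.type(0)
        return m * np.sqrt(np.sum((x / m) ** 2))

    # Single-precision vector whose squared entries underflow to zero.
    x = np.full(4, 1e-25, dtype=np.float32)

    naive = np.sqrt(np.sum(x * x))   # sum of squares underflows -> 0.0
    stable = scaled_norm(x)          # ~2e-25, the correct value
    print(naive, stable)

With a naive sum of squares in single precision the result collapses to 0.0; a scaled (or internally double-precision) computation returns the correct small value, which is why the test result depends on what the BLAS does internally.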
This test has been reported to fail on Windows for at least a couple of releases, and now also on Debian (s390x architecture).
Reported by Yaroslav Halchenko to fail for 0.11.0rc1 on Debian, s390x architecture, as well as by Derek Homeier for 0.8.0rc3 on PPC OS X.
#include <vecLib/vecLib.h> fails with Xcode 4.4/OS X 10.8
Some test failures were reported against 0.11.0rc1 because the tests demanded too high a precision.
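A rough illustration of the kind of problem involved (the computation and decimal counts below are placeholders, not the actual failing tests): an assertion that demands more matching decimals than a single-precision routine can deliver will fail on some platforms, and relaxing the tolerance fixes it.

    import numpy as np
    from numpy.testing import assert_almost_equal

    # Placeholder computation: a single-precision result only matches the
    # exact value to about 7 decimals, so demanding 12 would fail.
    x = np.float32(1) / np.float32(3)
    # assert_almost_equal(x, 1.0 / 3.0, decimal=12)   # too strict: fails
    assert_almost_equal(x, 1.0 / 3.0, decimal=7)      # relaxed: passes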
… 3.x Submitted as PR-268, but not reviewed. Committing now to get it in for 0.11.0.
….10.x. See #1559. It looked like PR-229 had fixed this issue by regenerating SWIG wrappers with a newer version of SWIG (something we didn't understand), but a reported failure from 0.11.0rc1 shows that this is not the case. Therefore committing this to master. First submitted as PR-237. (backported from 3ba682b)
…st so check the output against the same filter coefficients from MATLAB
Used Cython 0.16 plus sed replacement from ticket 1673.
Closes ticket 1701.
… knownfail. See ticket 1684. (backported from a792eb7)
Use a real initial guess in tests to trigger the problem. See this discussion: http://mail.scipy.org/pipermail/scipy-user/2012-June/032404.html
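The point about the initial guess can be sketched as follows; the Rosenbrock problem and BFGS below are purely illustrative stand-ins (the actual solver and test case are the ones discussed in the linked thread):

    import numpy as np
    from scipy import optimize

    # Illustrative only: start the solver from a nontrivial ("real") initial
    # guess instead of something like np.zeros(n), which can coincide with a
    # special point of the problem and mask the bug the test is meant to hit.
    x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
    res = optimize.minimize(optimize.rosen, x0, method="BFGS")
    print(res.x)   # should approach the known minimum at all ones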
See ticket 1667.