TST: fix linalg.norm test failure. Thanks to Rudiger Kessel.
The intended output is 0.5 (where numpy.linalg.norm gives 0.0); this seems to be the case on most but not all systems. The reason is that snrm2 appears to accumulate in double precision in most BLAS implementations, but not all. Note that test_overflow() for this implementation of norm() passes on all systems, so even where test_stable() fails, norm() still behaves better than numpy.linalg.norm().
@@ -582,7 +582,14 @@ def test_overflow(self):
         # more stable than numpy's norm
         a = array([1e4] + [1]*10000, dtype=float32)
-        assert_almost_equal(norm(a) - 1e4, 0.5)
+        try:
+            # snrm2 in double precision; we obtain the same as for float64
+            assert_almost_equal(norm(a) - 1e4, 0.5)
+        except AssertionError:
+            # snrm2 implemented in single precision, == np.linalg.norm result
+            msg = ": Result should equal either 0.0 or 0.5 (depending on " \
+                  "implementation of snrm2)."
+            assert_almost_equal(norm(a) - 1e4, 0.0, err_msg=msg)
         assert_equal(norm([1,0,3], 0), 2)
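The two possible outcomes can be reproduced without calling BLAS at all. A minimal sketch (plain NumPy, not part of the patch) contrasting single- vs double-precision accumulation of the sum of squares for the test array:

```python
import numpy as np

# Same array as in the test: one large element, many small ones.
a = np.array([1e4] + [1] * 10000, dtype=np.float32)

# Double-precision accumulation (what most snrm2 builds effectively do):
# 1e8 + 10000 = 100010000, and sqrt(100010000) - 1e4 is roughly 0.5.
double_acc = np.sqrt(np.sum(a.astype(np.float64) ** 2))

# Naive single-precision accumulation: the ulp of float32 at 1e8 is 8,
# so adding 1.0 to the running sum is a no-op and the sum stays at 1e8.
single_acc = np.float32(0.0)
for x in a:
    single_acc += x * x
single_acc = np.sqrt(single_acc)

print(double_acc - 1e4)  # ~0.5
print(single_acc - 1e4)  # 0.0
```

This is why the patched test accepts either value: the result depends on which accumulation strategy the linked snrm2 uses, not on a bug in norm() itself.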