Revert a line from #4585 so that the mask and data shapes match in .flat
BUG: Fix lack of NULL check in array_richcompare.
DOC: Fix bad signature in docstring for uniform()
The lack of this check led to a segfault. Closes #4613.
BUG: Ensure MaskedArray.flat can access single items
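A minimal illustration of the fixed behaviour (the example data is arbitrary): single items, including masked ones, are reachable through `.flat`:

```python
import numpy as np

# Arbitrary example: one element masked.
m = np.ma.array([[1, 2], [3, 4]], mask=[[0, 1], [0, 0]])

print(m.flat[0])  # unmasked value: 1
print(m.flat[1])  # the masked singleton, printed as "--"
```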
ENH: adding ppc64le architecture support
ENH: add a 'return_counts=' keyword argument to `np.unique`
BLD: remove cython c source from git
BUG: ifort has issues with optimization flag /O2
Fixes SciPy test failures.
This PR adds a new keyword argument to `np.unique` that returns the number of times each unique item appears in the array. This allows replacing a typical NumPy construct:

    unq, _ = np.unique(a, return_inverse=True)
    unq_counts = np.bincount(_)

with a single line of code:

    unq, unq_counts = np.unique(a, return_counts=True)

As a plus, it runs faster, because it does not need the extra operations required to produce `unique_inverse`.
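A runnable example of the two constructs side by side (the input array is arbitrary):

```python
import numpy as np

a = np.array([1, 2, 2, 3, 3, 3])

# Old construct: recover counts via the inverse indices.
unq, inv = np.unique(a, return_inverse=True)
unq_counts_old = np.bincount(inv)

# New single call.
unq, unq_counts = np.unique(a, return_counts=True)

print(unq)         # [1 2 3]
print(unq_counts)  # [1 2 3]
```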
BUG: Prevent division by zero. Closes #650.
ENH: Better error w/ line num for bad column count in np.loadtxt()
Resolves #2591. Adds more explicit error handling in the line-parsing loop.
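A small demonstration of the kind of failure the improved message covers; the exact error wording varies between NumPy versions, but a ValueError is raised for a ragged row:

```python
import io
import numpy as np

# Second line has only two columns instead of three.
bad = io.StringIO("1 2 3\n4 5\n")

try:
    np.loadtxt(bad)
except ValueError as e:
    # The error message points at the offending row.
    print(e)
```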
Instead, generate them at build time. The generated sources are still part of the sdist. tools/cythonize.py is copied from SciPy with small changes to the configuration.
BUG: Explicitly reject nan values for p in binomial(n, p).
Adds a check using np.isnan(p) and raises a ValueError if the check is true.
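For example, a NaN probability is now rejected up front instead of producing a bogus sample:

```python
import numpy as np

try:
    np.random.binomial(10, np.nan)
except ValueError as e:
    print("rejected:", e)
```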
ENH: Ensure that repr and str work for MaskedArray non-ndarray bases
For repr, use the name of the base class in the output as "masked_<name>" (with name=array for ndarray, to match the previous implementation). For str, insert masked_print_option into an ndarray view of the object array that is created for string output, to avoid calling __setitem__ on the base class. Add tests to ensure this works.
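A sketch of the intended behaviour, using a hypothetical ndarray subclass `MyArray` (the class name and data are made up for illustration):

```python
import numpy as np

class MyArray(np.ndarray):
    """Hypothetical ndarray subclass, used only for illustration."""
    pass

base = np.arange(4).view(MyArray)
m = np.ma.masked_array(base, mask=[0, 1, 0, 0])

# repr uses the base class name, e.g. "masked_MyArray(...)".
print(repr(m))
# str replaces masked entries with masked_print_option (default "--").
print(str(m))
```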
BUG: Masked arrays and apply_over_axes
__numpy_ufunc__ check improvement
BUG: fix memory leaks and missing NULL checks
Found by the cpychecker GCC plugin.
The masked-array version of apply_over_axes did not apply the function correctly to arrays with non-trivial masks. Fixes #4461.
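Usage sketch with an arbitrarily masked array; after the fix, the masked variant agrees with a plain masked reduction:

```python
import numpy as np

a = np.ma.array([[1, 2, 3], [4, 5, 6]], mask=[[0, 0, 1], [0, 0, 0]])

# Sum over axis 0 (apply_over_axes always keeps the reduced dimension).
res = np.ma.apply_over_axes(np.ma.sum, a, 0)
print(res)  # [[5 7 6]] -- the masked entry is skipped in the sum
```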
Checking for the attribute is a very large bottleneck for reductions. dtype, out, and keepdims will often be basic Python types, so the check can be skipped. Also adds a couple of missing types to the helper function _is_basic_python_type and moves it into a header so it can be used in umath.
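The actual change is in C, but the idea can be sketched in Python; the names below are illustrative stand-ins, not the real NumPy internals:

```python
# Types that can never define __numpy_ufunc__, so the (expensive)
# attribute lookup can be skipped entirely for them.
BASIC_PYTHON_TYPES = (int, float, complex, bool, bytes, str, type(None))

def has_override(obj):
    """Illustrative stand-in for the C-level override check."""
    if type(obj) in BASIC_PYTHON_TYPES:
        return False  # fast path: no attribute lookup at all
    return hasattr(type(obj), "__numpy_ufunc__")

print(has_override(3))         # False, via the fast path
print(has_override(object()))  # False, via the slow hasattr path
```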
This was always intended but was not done due to a mistake in the Python 3 fix. It speeds up dictionary lookups a bit, as string comparisons can be skipped on hash collisions.
ENH: Replace exponentiation with cumulative product in np.vander
Speeds the calculation up by ~3x for 100x100 matrices, and by ~45x for 1000x1000 ones.
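A Python-level sketch of the optimization; `vander_cumprod` is a name made up here for illustration, not NumPy's actual implementation:

```python
import numpy as np

def vander_cumprod(x, N=None):
    """Illustrative sketch: build the Vandermonde matrix with one
    cumulative product instead of exponentiating column by column."""
    x = np.asarray(x)
    if N is None:
        N = len(x)
    v = np.ones((len(x), N), dtype=np.promote_types(x.dtype, np.int64))
    if N > 1:
        # Columns 1..N-1 hold x**1 .. x**(N-1), built by a single cumprod.
        v[:, 1:] = np.cumprod(np.repeat(x[:, None], N - 1, axis=1), axis=1)
    return v[:, ::-1]  # np.vander defaults to decreasing powers

print(np.array_equal(vander_cumprod([1, 2, 3]), np.vander([1, 2, 3])))  # True
```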
Structured arrays with different byteorders do not compare
Fixes two places where dtypes with fields are compared for *exact* equality when they should be compared for *equivalence*.
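Illustration of the intended behaviour (field names and values are arbitrary): dtypes that differ only in byte order are equivalent, so the comparison should proceed element-wise:

```python
import numpy as np

a = np.array([(1, 2.0), (3, 4.0)], dtype=[('x', '<i4'), ('y', '<f8')])
b = a.astype([('x', '>i4'), ('y', '>f8')])  # same data, swapped byte order

# Expected to compare element-wise despite the different byte orders.
print(a == b)
```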