Update numpy to 1.17.2 #87

pyup-bot commented on Sep 7, 2019

This PR updates numpy from 1.15.4 to 1.17.2.

Changelog

1.17.1
==========================

This release contains a number of fixes for bugs reported against NumPy 1.17.0
along with a few documentation and build improvements.  The Python versions
supported are 3.5-3.7, note that Python 2.7 has been dropped.  Python 3.8b3
should work with the released source packages, but there are no future
guarantees.

Downstream developers should use Cython >= 0.29.13 for Python 3.8 support and
OpenBLAS >= v0.3.7 to avoid problems on the Skylake architecture. The NumPy wheels
on PyPI are built from the OpenBLAS development branch in order to avoid those
problems.


Contributors
============

A total of 17 people contributed to this release.  People with a "+" by their
names contributed a patch for the first time.

* Alexander Jung +
* Allan Haldane
* Charles Harris
* Eric Wieser
* Giuseppe Cuccu +
* Hiroyuki V. Yamazaki
* Jérémie du Boisberranger
* Kmol Yuan +
* Matti Picus
* Max Bolingbroke +
* Maxwell Aladago +
* Oleksandr Pavlyk
* Peter Andreas Entschev
* Sergei Lebedev
* Seth Troisi +
* Vladimir Pershin +
* Warren Weckesser


Pull requests merged
====================

A total of 24 pull requests were merged for this release.

* `14156 <https://github.com/numpy/numpy/pull/14156>`__: TST: Allow fuss in testing strided/non-strided exp/log loops
* `14157 <https://github.com/numpy/numpy/pull/14157>`__: BUG: avx2_scalef_ps must be static
* `14158 <https://github.com/numpy/numpy/pull/14158>`__: BUG: Remove stray print that causes a SystemError on python 3.7.
* `14159 <https://github.com/numpy/numpy/pull/14159>`__: BUG: Fix DeprecationWarning in python 3.8.
* `14160 <https://github.com/numpy/numpy/pull/14160>`__: BLD: Add missing gcd/lcm definitions to npy_math.h
* `14161 <https://github.com/numpy/numpy/pull/14161>`__: DOC, BUILD: cleanups and fix (again) 'build dist'
* `14166 <https://github.com/numpy/numpy/pull/14166>`__: TST: Add 3.8-dev to travisCI testing.
* `14194 <https://github.com/numpy/numpy/pull/14194>`__: BUG: Remove the broken clip wrapper (Backport)
* `14198 <https://github.com/numpy/numpy/pull/14198>`__: DOC: Fix hermitian argument docs in svd.
* `14199 <https://github.com/numpy/numpy/pull/14199>`__: MAINT: Workaround for Intel compiler bug leading to failing test
* `14200 <https://github.com/numpy/numpy/pull/14200>`__: TST: Clean up of test_pocketfft.py
* `14201 <https://github.com/numpy/numpy/pull/14201>`__: BUG: Make advanced indexing result on read-only subclass writeable...
* `14236 <https://github.com/numpy/numpy/pull/14236>`__: BUG: Fixed default BitGenerator name
* `14237 <https://github.com/numpy/numpy/pull/14237>`__: ENH: add c-imported modules for freeze analysis in np.random
* `14296 <https://github.com/numpy/numpy/pull/14296>`__: TST: Pin pytest version to 5.0.1
* `14301 <https://github.com/numpy/numpy/pull/14301>`__: BUG: Fix leak in the f2py-generated module init and `PyMem_Del`...
* `14302 <https://github.com/numpy/numpy/pull/14302>`__: BUG: Fix formatting error in exception message
* `14307 <https://github.com/numpy/numpy/pull/14307>`__: MAINT: random: Match type of SeedSequence.pool_size to DEFAULT_POOL_SIZE.
* `14308 <https://github.com/numpy/numpy/pull/14308>`__: BUG: Fix numpy.random bug in platform detection
* `14309 <https://github.com/numpy/numpy/pull/14309>`__: ENH: Enable huge pages in all Linux builds
* `14330 <https://github.com/numpy/numpy/pull/14330>`__: BUG: Fix segfault in `random.permutation(x)` when x is a string.
* `14338 <https://github.com/numpy/numpy/pull/14338>`__: BUG: don't fail when lexsorting some empty arrays (14228)
* `14339 <https://github.com/numpy/numpy/pull/14339>`__: BUG: Fix misuse of .names and .fields in various places (backport...
* `14345 <https://github.com/numpy/numpy/pull/14345>`__: BUG: fix behavior of structured_to_unstructured on non-trivial...
* `14350 <https://github.com/numpy/numpy/pull/14350>`__: REL: Prepare 1.17.1 release


1.17.0
==========================

This NumPy release contains a number of new features that should substantially
improve its performance and usefulness, see Highlights below for a summary. The
Python versions supported are 3.5-3.7, note that Python 2.7 has been dropped.
Python 3.8b2 should work with the released source packages, but there are no
future guarantees.

Downstream developers should use Cython >= 0.29.11 for Python 3.8 support and
OpenBLAS >= v0.3.7 (not yet released) to avoid problems on the Skylake
architecture. The NumPy wheels on PyPI are built from the OpenBLAS development
branch in order to avoid those problems.


Highlights
==========

* A new extensible `random` module along with four selectable `random number
generators <random.BitGenerators>` and improved seeding designed for use in parallel
processes has been added. The currently available bit generators are `MT19937
<random.mt19937.MT19937>`, `PCG64 <random.pcg64.PCG64>`, `Philox
<random.philox.Philox>`, and `SFC64 <random.sfc64.SFC64>`. See below under
New Features.

* NumPy's `FFT <fft>` implementation was changed from fftpack to pocketfft,
resulting in faster, more accurate transforms and better handling of datasets
of prime length. See below under Improvements.

* New radix sort and timsort sorting methods. It is currently not possible to
choose which will be used. They are hardwired to the datatype and used
when either ``stable`` or ``mergesort`` is passed as the method. See below
under Improvements.

* Overriding numpy functions is now possible by default,
see ``__array_function__`` below.


New functions
=============

* `numpy.errstate` is now also a function decorator


Deprecations
============

`numpy.polynomial` functions warn when passed ``float`` in place of ``int``
---------------------------------------------------------------------------
Previously functions in this module would accept ``float`` values provided they
were integral (``1.0``, ``2.0``, etc). For consistency with the rest of numpy,
doing so is now deprecated, and in future will raise a ``TypeError``.

Similarly, passing a float like ``0.5`` in place of an integer will now raise a
``TypeError`` instead of the previous ``ValueError``.

Deprecate `numpy.distutils.exec_command` and ``temp_file_name``
---------------------------------------------------------------
The internal use of these functions has been refactored and there are better
alternatives. Replace ``exec_command`` with `subprocess.Popen` and
`temp_file_name <numpy.distutils.exec_command>` with `tempfile.mkstemp`.

Writeable flag of C-API wrapped arrays
--------------------------------------
When an array is created from the C-API to wrap a pointer to data, the only
indication we have of the read-write nature of the data is the ``writeable``
flag set during creation. It is dangerous to force the flag to writeable.
In the future it will not be possible to switch the writeable flag to ``True``
from python.
This deprecation should not affect many users since arrays created in such
a manner are very rare in practice and only available through the NumPy C-API.

`numpy.nonzero` should no longer be called on 0d arrays
-------------------------------------------------------
The behavior of `numpy.nonzero` on 0d arrays was surprising, making uses of it
almost always incorrect. If the old behavior was intended, it can be preserved
without a warning by using ``nonzero(atleast_1d(arr))`` instead of
``nonzero(arr)``.  In a future release, it is most likely this will raise a
``ValueError``.
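
An illustrative sketch of the suggested workaround (the array value here is
arbitrary)::

 import numpy as np

 arr = np.array(1)                  # 0d array; calling nonzero on it is deprecated
 np.nonzero(np.atleast_1d(arr))     # (array([0]),) -- explicit 1d equivalent, no warning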

Writing to the result of `numpy.broadcast_arrays` will warn
-----------------------------------------------------------

Commonly `numpy.broadcast_arrays` returns a writeable array with internal
overlap, making it unsafe to write to. A future version will set the
``writeable`` flag to ``False``, and require users to manually set it to
``True`` if they are sure that is what they want to do. Now writing to it will
emit a deprecation warning with instructions to set the ``writeable`` flag
``True``.  Note that if one were to inspect the flag before setting it, one
would find it would already be ``True``.  Explicitly setting it, though, as one
will need to do in future versions, clears an internal flag that is used to
produce the deprecation warning. To help alleviate confusion, an additional
`FutureWarning` will be emitted when accessing the ``writeable`` flag state to
clarify the contradiction.

Note that for the C-side buffer protocol such an array will return a
readonly buffer immediately unless a writable buffer is requested. If
a writeable buffer is requested a warning will be given. When using
cython, the ``const`` qualifier should be used with such arrays to avoid
the warning (e.g. ``cdef const double[::1] view``).


Future Changes
==============

Shape-1 fields in dtypes won't be collapsed to scalars in a future version
--------------------------------------------------------------------------

Currently, a field specified as ``[(name, dtype, 1)]`` or ``"1type"`` is
interpreted as a scalar field (i.e., the same as ``[(name, dtype)]`` or
``[(name, dtype, ())]``). This now raises a FutureWarning; in a future version,
it will be interpreted as a shape-(1,) field, i.e. the same as ``[(name,
dtype, (1,))]`` or ``"(1,)type"`` (consistently with ``[(name, dtype, n)]``
/ ``"ntype"`` with ``n>1``, which is already equivalent to ``[(name, dtype,
(n,))]`` / ``"(n,)type"``).


Compatibility notes
===================

``float16`` subnormal rounding
------------------------------
Casting from a different floating point precision to ``float16`` used incorrect
rounding in some edge cases. This means in rare cases, subnormal results will
now be rounded up instead of down, changing the last bit (ULP) of the result.

Signed zero when using divmod
-----------------------------
Starting in version `1.12.0`, numpy incorrectly returned a negatively signed zero
when using the ``divmod`` and ``floor_divide`` functions when the result was
zero. For example::

>>> np.zeros(10)//1
array([-0., -0., -0., -0., -0., -0., -0., -0., -0., -0.])

With this release, the result is correctly returned as a positively signed
zero::

>>> np.zeros(10)//1
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

``MaskedArray.mask`` now returns a view of the mask, not the mask itself
------------------------------------------------------------------------
Returning the mask itself was unsafe, as it could be reshaped in place which
would violate expectations of the masked array code. The behavior of `mask
<ma.MaskedArray.mask>` is now consistent with `data <ma.MaskedArray.data>`,
which also returns a view.

The underlying mask can still be accessed with ``._mask`` if it is needed.
Tests that contain ``assert x.mask is not y.mask`` or similar will need to be
updated.

Do not lookup ``__buffer__`` attribute in `numpy.frombuffer`
------------------------------------------------------------
Looking up ``__buffer__`` attribute in `numpy.frombuffer` was undocumented and
non-functional. This code was removed. If needed, use
``frombuffer(memoryview(obj), ...)`` instead.

``out`` is buffered for memory overlaps in `take`, `choose`, `put`
------------------------------------------------------------------
If the out argument to these functions is provided and has memory overlap with
the other arguments, it is now buffered to avoid order-dependent behavior.

Unpickling while loading requires explicit opt-in
-------------------------------------------------
The functions `load` and ``lib.format.read_array`` take an
``allow_pickle`` keyword which now defaults to ``False`` in response to
`CVE-2019-6446 <https://nvd.nist.gov/vuln/detail/CVE-2019-6446>`_.
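
A minimal sketch of the new opt-in (the file name is hypothetical)::

 import numpy as np

 np.save('objs.npy', np.array([{'a': 1}], dtype=object))   # object arrays require pickling
 # np.load('objs.npy')                                     # now raises ValueError by default
 data = np.load('objs.npy', allow_pickle=True)             # explicit opt-in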


.. currentmodule:: numpy.random.mtrand

Potential changes to the random stream in old random module
-----------------------------------------------------------
Due to bugs in the application of ``log`` to random floating point numbers,
the stream may change when sampling from `~RandomState.beta`, `~RandomState.binomial`,
`~RandomState.laplace`, `~RandomState.logistic`, `~RandomState.logseries` or
`~RandomState.multinomial` if a ``0`` is generated in the underlying `MT19937
<~numpy.random.mt19937.MT19937>` random stream.  There is a ``1`` in
:math:`10^{53}` chance of this occurring, so the probability that the stream
changes for any given seed is extremely small. If a ``0`` is encountered in the
underlying generator, then the incorrect value produced (either `numpy.inf` or
`numpy.nan`) is now dropped.

.. currentmodule:: numpy

`i0` now always returns a result with the same shape as the input
-----------------------------------------------------------------
Previously, the output was squeezed, such that, e.g., input with just a single
element would lead to an array scalar being returned, and inputs with shapes
such as ``(10, 1)`` would yield results that would not broadcast against the
input.

Note that we generally recommend the SciPy implementation over the numpy one:
it is a proper ufunc written in C, and more than an order of magnitude faster.

`can_cast` no longer assumes all unsafe casting is allowed
----------------------------------------------------------
Previously, `can_cast` returned `True` for almost all inputs for
``casting='unsafe'``, even for cases where casting was not possible, such as
from a structured dtype to a regular one.  This has been fixed, making it
more consistent with actual casting using, e.g., the `.astype <ndarray.astype>`
method.

``ndarray.flags.writeable`` can be switched to true slightly more often
-----------------------------------------------------------------------

In rare cases, it was not possible to switch an array from not writeable
to writeable, although a base array is writeable. This can happen if an
intermediate `ndarray.base` object is writeable. Previously, only the deepest
base object was considered for this decision. However, in rare cases this
object does not have the necessary information. In that case switching to
writeable was never allowed. This has now been fixed.


C API changes
=============

dimension or stride input arguments are now passed by ``npy_intp const*``
-------------------------------------------------------------------------
Previously these function arguments were declared as the more strict
``npy_intp*``, which prevented the caller passing constant data.
This change is backwards compatible, but now allows code like::

 npy_intp const fixed_dims[] = {1, 2, 3};
 // no longer complains that the const-qualifier is discarded
 npy_intp size = PyArray_MultiplyList(fixed_dims, 3);


New Features
============

.. currentmodule:: numpy.random

New extensible `numpy.random` module with selectable random number generators
-----------------------------------------------------------------------------
A new extensible `numpy.random` module along with four selectable random number
generators and improved seeding designed for use in parallel processes has been
added. The currently available :ref:`Bit Generators <bit_generator>` are
`~mt19937.MT19937`, `~pcg64.PCG64`, `~philox.Philox`, and `~sfc64.SFC64`.
``PCG64`` is the new default while ``MT19937`` is retained for backwards
compatibility. Note that the legacy random module is unchanged and is now
frozen, your current results will not change. More information is available in
the :ref:`API change description <new-or-different>` and in the `top-level view
<numpy.random>` documentation.
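
An illustrative sketch (the seed is arbitrary)::

 import numpy as np
 from numpy.random import Generator, PCG64

 rng = Generator(PCG64(12345))      # explicitly seeded PCG64 bit generator
 sample = rng.standard_normal(3)    # Generator methods replace legacy RandomState calls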

.. currentmodule:: numpy

libFLAME
--------
Support for building NumPy with the libFLAME linear algebra package as the
LAPACK implementation has been added; see
`libFLAME <https://www.cs.utexas.edu/~flame/web/libFLAME.html>`_ for details.

User-defined BLAS detection order
---------------------------------
``numpy.distutils`` now uses the ``NPY_BLAS_ORDER`` environment variable, a
comma-separated and case-insensitive list, to determine the detection order
for BLAS libraries. The default is
``NPY_BLAS_ORDER=mkl,blis,openblas,atlas,accelerate,blas``. To force the use
of OpenBLAS::

 NPY_BLAS_ORDER=openblas python setup.py build

This may be helpful for users who have an MKL installation but wish to try
out different implementations.

User-defined LAPACK detection order
-----------------------------------
``numpy.distutils`` now uses the ``NPY_LAPACK_ORDER`` environment variable, a
comma-separated and case-insensitive list, to determine the detection order
for LAPACK libraries. The default is
``NPY_LAPACK_ORDER=mkl,openblas,flame,atlas,accelerate,lapack``. To force the
use of OpenBLAS::

 NPY_LAPACK_ORDER=openblas python setup.py build

This may be helpful for users who have an MKL installation but wish to try
out different implementations.

`ufunc.reduce` and related functions now accept a ``where`` mask
----------------------------------------------------------------
`ufunc.reduce`, `sum`, `prod`, `min`, `max` all
now accept a ``where`` keyword argument, which can be used to tell which
elements to include in the reduction.  For reductions that do not have an
identity, it is necessary to also pass in an initial value (e.g.,
``initial=np.inf`` for `min`).  For instance, the equivalent of
`nansum` would be ``np.sum(a, where=~np.isnan(a))``.
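
An illustrative sketch (array values are arbitrary)::

 import numpy as np

 a = np.array([1.0, np.nan, 3.0])
 np.sum(a, where=~np.isnan(a))                   # 4.0, the equivalent of np.nansum(a)
 np.min(a, where=~np.isnan(a), initial=np.inf)   # 1.0; initial is required since min has no identity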

Timsort and radix sort have replaced mergesort for stable sorting
-----------------------------------------------------------------
Both radix sort and timsort have been implemented and are now used in place of
mergesort. Due to the need to maintain backward compatibility, the sorting
``kind`` options ``"stable"`` and ``"mergesort"`` have been made aliases of
each other with the actual sort implementation depending on the array type.
Radix sort is used for small integer types of 16 bits or less and timsort for
the remaining types.  Timsort features improved performance on already or
nearly sorted data, performs like mergesort on random data, and requires
:math:`O(n/2)` working space.  Details of the timsort algorithm can be
found at `CPython listsort.txt
<https://github.com/python/cpython/blob/3.7/Objects/listsort.txt>`_.

`packbits` and `unpackbits` accept an ``order`` keyword
-------------------------------------------------------
The ``order`` keyword defaults to ``'big'``, and orders the **bits**
accordingly. For ``order='big'`` the value 3 becomes ``[0, 0, 0, 0, 0, 0, 1, 1]``,
and ``[1, 1, 0, 0, 0, 0, 0, 0]`` for ``order='little'``.

`unpackbits` now accepts a ``count`` parameter
----------------------------------------------
``count`` allows subsetting the number of bits that will be unpacked up-front,
rather than reshaping and subsetting later, making the `packbits` operation
invertible, and the unpacking less wasteful. Counts larger than the number of
available bits add zero padding. Negative counts trim bits off the end instead
of counting from the beginning. None counts implement the existing behavior of
unpacking everything.

`linalg.svd` and `linalg.pinv` can be faster on hermitian inputs
----------------------------------------------------------------
These functions now accept a ``hermitian`` argument, matching the one added
to `linalg.matrix_rank` in 1.14.0.
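
A minimal sketch (the matrix is an arbitrary real symmetric example)::

 import numpy as np

 a = np.array([[2.0, 1.0], [1.0, 2.0]])         # hermitian (real symmetric) input
 u, s, vh = np.linalg.svd(a, hermitian=True)    # may take a faster eigenvalue-based path
 a_pinv = np.linalg.pinv(a, hermitian=True)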

divmod operation is now supported for two ``timedelta64`` operands
------------------------------------------------------------------
The divmod operator now handles two ``timedelta64`` operands, with
type signature ``mm->qm``.
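
For example (the operand values are arbitrary)::

 import numpy as np

 q, r = divmod(np.timedelta64(7, 'h'), np.timedelta64(3, 'h'))
 # q == 2 (integer quotient), r == np.timedelta64(1, 'h') (timedelta remainder)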

`fromfile` now takes an ``offset`` argument
-------------------------------------------
This function now takes an ``offset`` keyword argument for binary files,
which specifies the offset (in bytes) from the file's current position.
Defaults to ``0``.

New mode "empty" for `pad`
--------------------------
This mode pads an array to a desired shape without initializing the new
entries.
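
An illustrative sketch (values are arbitrary)::

 import numpy as np

 a = np.arange(4)
 padded = np.pad(a, (2, 3), mode='empty')   # shape (9,); the five new entries are uninitialized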

`empty_like` and related functions now accept a ``shape`` argument
------------------------------------------------------------------
`empty_like`, `full_like`, `ones_like` and `zeros_like` now accept a ``shape``
keyword argument, which can be used to create a new array
as the prototype, overriding its shape as well. This is particularly useful
when combined with the ``__array_function__`` protocol, allowing the creation
of new arbitrary-shape arrays from NumPy-like libraries when such an array
is used as the prototype.
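
A minimal sketch (dtype and shapes are arbitrary)::

 import numpy as np

 proto = np.ones((2, 2), dtype=np.float32)
 a = np.empty_like(proto, shape=(4, 3))    # float32 like the prototype, but shape (4, 3)
 z = np.zeros_like(proto, shape=(5,))      # works for the related functions as well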

Floating point scalars implement ``as_integer_ratio`` to match the builtin float
--------------------------------------------------------------------------------
This returns a (numerator, denominator) pair, which can be used to construct a
`fractions.Fraction`.
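
For example::

 import numpy as np
 from fractions import Fraction

 num, den = np.float64(0.25).as_integer_ratio()   # (1, 4)
 Fraction(num, den)                               # Fraction(1, 4)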

Structured ``dtype`` objects can be indexed with multiple field names
----------------------------------------------------------------------
``arr.dtype[['a', 'b']]`` now returns a dtype that is equivalent to
``arr[['a', 'b']].dtype``, for consistency with
``arr.dtype['a'] == arr['a'].dtype``.

Like the dtype of structured arrays indexed with a list of fields, this dtype
has the same ``itemsize`` as the original, but only keeps a subset of the fields.

This means that ``arr[['a', 'b']]`` and ``arr.view(arr.dtype[['a', 'b']])`` are
equivalent.
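
An illustrative sketch (field names and dtypes are arbitrary)::

 import numpy as np

 dt = np.dtype([('a', 'f8'), ('b', 'i4'), ('c', 'u1')])
 sub = dt[['a', 'c']]            # keeps only the 'a' and 'c' fields
 sub.itemsize == dt.itemsize     # True: the itemsize and field offsets are preserved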

``.npy`` files support unicode field names
------------------------------------------
A new format version, 3.0, has been introduced. It enables structured types
with non-latin1 field names and is used automatically when needed.


Improvements
============

Array comparison assertions include maximum differences
-------------------------------------------------------
Error messages from array comparison tests such as
`testing.assert_allclose` now include "max absolute difference" and
"max relative difference," in addition to the previous "mismatch" percentage.
This information makes it easier to update absolute and relative error
tolerances.

Replacement of the fftpack based `fft` module by the pocketfft library
----------------------------------------------------------------------
Both implementations have the same ancestor (Fortran77 FFTPACK by Paul N.
Swarztrauber), but pocketfft contains additional modifications which improve
both accuracy and performance in some circumstances. For FFT lengths containing
large prime factors, pocketfft uses Bluestein's algorithm, which maintains
:math:`O(N \log N)` run time complexity instead of deteriorating towards
:math:`O(N^2)` for prime lengths. Also, accuracy for real valued FFTs with near
prime lengths has improved and is on par with complex valued FFTs.

Further improvements to ``ctypes`` support in `numpy.ctypeslib`
---------------------------------------------------------------
A new `numpy.ctypeslib.as_ctypes_type` function has been added, which can be
used to convert a `dtype` into a best-guess `ctypes` type. Thanks to this
new function, `numpy.ctypeslib.as_ctypes` now supports a much wider range of
array types, including structures, booleans, and integers of non-native
endianness.

`numpy.errstate` is now also a function decorator
-------------------------------------------------
Currently, if you have a function like::

 def foo():
     pass

and you want to wrap the whole thing in `errstate`, you have to rewrite it
like so::

 def foo():
     with np.errstate(...):
         pass

but with this change, you can do::

 @np.errstate(...)
 def foo():
     pass

thereby saving a level of indentation.

`numpy.exp` and `numpy.log` speed up for float32 implementation
---------------------------------------------------------------
The float32 implementations of `exp` and `log` now benefit from the AVX2/AVX512
instruction sets, which are detected at runtime. `exp` has a max ulp error of
2.52 and `log` has a max ulp error of 3.83.

Improve performance of `numpy.pad`
----------------------------------
The performance of the function has been improved for most cases by filling in
a preallocated array with the desired padded shape instead of using
concatenation.

`numpy.interp` handles infinities more robustly
-----------------------------------------------
In some cases where `interp` would previously return `nan`, it now
returns an appropriate infinity.

Pathlib support for `fromfile`, `tofile` and `ndarray.dump`
-----------------------------------------------------------
`fromfile`, `ndarray.tofile` and `ndarray.dump` now support
the `pathlib.Path` type for the ``file``/``fid`` parameter.

Specialized `isnan`, `isinf`, and `isfinite` ufuncs for bool and int types
--------------------------------------------------------------------------
The boolean and integer types are incapable of storing `nan` and `inf` values,
which allows us to provide specialized ufuncs that are up to 250x faster than
the previous approach.

`isfinite` supports ``datetime64`` and ``timedelta64`` types
-----------------------------------------------------------------
Previously, `isfinite` used to raise a `TypeError` on being used on these
two types.

New keywords added to `nan_to_num`
----------------------------------
`nan_to_num` now accepts keywords ``nan``, ``posinf`` and ``neginf``
allowing the user to define the value to replace the ``nan``, positive and
negative ``np.inf`` values respectively.
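
An illustrative sketch (the replacement values are arbitrary)::

 import numpy as np

 x = np.array([np.nan, np.inf, -np.inf, 1.0])
 np.nan_to_num(x, nan=0.0, posinf=1e6, neginf=-1e6)   # -> [0.0, 1e6, -1e6, 1.0]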

MemoryErrors caused by allocated overly large arrays are more descriptive
-------------------------------------------------------------------------
Often the cause of a MemoryError is incorrect broadcasting, which results in a
very large and incorrect shape. The message of the error now includes this
shape to help diagnose the cause of failure.

`floor`, `ceil`, and `trunc` now respect builtin magic methods
--------------------------------------------------------------
These ufuncs now call the ``__floor__``, ``__ceil__``, and ``__trunc__``
methods when called on object arrays, making them compatible with
`decimal.Decimal` and `fractions.Fraction` objects.
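
A minimal sketch (the element values are arbitrary)::

 import numpy as np
 from decimal import Decimal
 from fractions import Fraction

 vals = np.array([Decimal('2.7'), Fraction(7, 2)], dtype=object)
 np.floor(vals)    # dispatches to each element's __floor__ instead of raising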

`quantile` now works on `fractions.Fraction` and `decimal.Decimal` objects
--------------------------------------------------------------------------
In general, this handles object arrays more gracefully, and avoids floating-
point operations if exact arithmetic types are used.

Support of object arrays in `matmul`
------------------------------------
It is now possible to use `matmul` (or the ``@`` operator) with object arrays.
For instance, it is now possible to do::

 from fractions import Fraction
 a = np.array([[Fraction(1, 2), Fraction(1, 3)], [Fraction(1, 3), Fraction(1, 2)]])
 b = a @ a


Changes
=======

`median` and `percentile` family of functions no longer warn about ``nan``
--------------------------------------------------------------------------
`numpy.median`, `numpy.percentile`, and `numpy.quantile` used to emit a
``RuntimeWarning`` when encountering an `nan`. Since they return the
``nan`` value, the warning is redundant and has been removed.

``timedelta64 % 0`` behavior adjusted to return ``NaT``
-------------------------------------------------------
The modulus operation with two ``np.timedelta64`` operands now returns
``NaT`` in the case of division by zero, rather than returning zero.

NumPy functions now always support overrides with ``__array_function__``
------------------------------------------------------------------------
NumPy now always checks the ``__array_function__`` method to implement overrides
of NumPy functions on non-NumPy arrays, as described in `NEP 18`_. The feature
was available for testing with NumPy 1.16 if appropriate environment variables
were set, but is now always enabled.

.. _`NEP 18` : http://www.numpy.org/neps/nep-0018-array-function-protocol.html
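
A minimal sketch of the protocol (``DiagonalArray`` is a hypothetical duck
array used only for illustration)::

 import numpy as np

 class DiagonalArray:
     """Toy duck array: an n x n diagonal matrix with a constant diagonal."""
     def __init__(self, n, value):
         self._n, self._value = n, value

     def __array_function__(self, func, types, args, kwargs):
         if func is np.sum:
             return self._n * self._value   # override only np.sum here
         return NotImplemented              # defer everything else to NumPy

 d = DiagonalArray(5, 2.0)
 np.sum(d)                                  # 10.0, dispatched via __array_function__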

``lib.recfunctions.structured_to_unstructured`` does not squeeze single-field views
-----------------------------------------------------------------------------------
Previously ``structured_to_unstructured(arr[['a']])`` would produce a squeezed
result inconsistent with ``structured_to_unstructured(arr[['a', 'b']])``. This
was accidental. The old behavior can be retained with
``structured_to_unstructured(arr[['a']]).squeeze(axis=-1)`` or far more simply,
``arr['a']``.

`clip` now uses a ufunc under the hood
--------------------------------------
This means that registering clip functions for custom dtypes in C via
``descr->f->fastclip`` is deprecated - they should use the ufunc registration
mechanism instead, attaching to the ``np.core.umath.clip`` ufunc.

It also means that ``clip`` accepts ``where`` and ``casting`` arguments,
and can be overridden with ``__array_ufunc__``.

A consequence of this change is that some behaviors of the old ``clip`` have
been deprecated:

* Passing ``nan`` to mean "do not clip" as one or both bounds. This didn't work
in all cases anyway, and can be better handled by passing infinities of the
appropriate sign.
* Using "unsafe" casting by default when an ``out`` argument is passed. Using
``casting="unsafe"`` explicitly will silence this warning.

Additionally, there are some corner cases with behavior changes:

* Passing ``max < min`` has changed to be more consistent across dtypes, but
should not be relied upon.
* Scalar ``min`` and ``max`` take part in promotion rules like they do in all
other ufuncs.

``__array_interface__`` offset now works as documented
------------------------------------------------------
The interface may use an ``offset`` value that was previously mistakenly ignored.

Pickle protocol in `savez` set to 3 for ``force zip64`` flag
-----------------------------------------------------------------
`savez` was not using the ``force_zip64`` flag, which limited the size of
the archive to 2GB. Using the flag requires pickle protocol 3 to write
``object`` arrays, so the protocol was bumped to 3, meaning such archives
will not be readable by Python 2.

Structured arrays indexed with non-existent fields raise ``KeyError`` not ``ValueError``
----------------------------------------------------------------------------------------
``arr['bad_field']`` on a structured type raises ``KeyError``, for consistency
with ``dict['bad_field']``.



1.16.4
==========================

The NumPy 1.16.4 release fixes bugs reported against the 1.16.3 release, and
also backports several enhancements from master that seem appropriate for a
release series that is the last to support Python 2.7. The wheels on PyPI are
linked with OpenBLAS v0.3.7-dev, which should fix issues on Skylake series
cpus.

Downstream developers building this release should use Cython >= 0.29.2 and,
if using OpenBLAS, OpenBLAS > v0.3.7. The supported Python versions are 2.7 and
3.5-3.7.


New deprecations
================
Writeable flag of C-API wrapped arrays
--------------------------------------
When an array is created from the C-API to wrap a pointer to data, the only
indication we have of the read-write nature of the data is the ``writeable``
flag set during creation. It is dangerous to force the flag to writeable.  In
the future it will not be possible to switch the writeable flag to ``True``
from python.  This deprecation should not affect many users since arrays
created in such a manner are very rare in practice and only available through
the NumPy C-API.


Compatibility notes
===================

Potential changes to the random stream
--------------------------------------
Due to bugs in the application of log to random floating point numbers,
the stream may change when sampling from ``np.random.beta``, ``np.random.binomial``,
``np.random.laplace``, ``np.random.logistic``, ``np.random.logseries`` or
``np.random.multinomial`` if a 0 is generated in the underlying MT19937 random stream.
There is a 1 in :math:`10^{53}` chance of this occurring, and so the probability that
the stream changes for any given seed is extremely small. If a 0 is encountered in the
underlying generator, then the incorrect value produced (either ``np.inf``
or ``np.nan``) is now dropped.


Changes
=======

`numpy.lib.recfunctions.structured_to_unstructured` does not squeeze single-field views
---------------------------------------------------------------------------------------
Previously ``structured_to_unstructured(arr[['a']])`` would produce a squeezed
result inconsistent with ``structured_to_unstructured(arr[['a', 'b']])``. This
was accidental. The old behavior can be retained with
``structured_to_unstructured(arr[['a']]).squeeze(axis=-1)`` or far more simply,
``arr['a']``.


Contributors
============

A total of 10 people contributed to this release.  People with a "+" by their
names contributed a patch for the first time.

* Charles Harris
* Eric Wieser
* Dennis Zollo +
* Hunter Damron +
* Jingbei Li +
* Kevin Sheppard
* Matti Picus
* Nicola Soranzo +
* Sebastian Berg
* Tyler Reddy


Pull requests merged
====================

A total of 16 pull requests were merged for this release.

* `13392 <https://github.com/numpy/numpy/pull/13392>`__: BUG: Some PyPy versions lack PyStructSequence_InitType2.
* `13394 <https://github.com/numpy/numpy/pull/13394>`__: MAINT, DEP: Fix deprecated ``assertEquals()``
* `13396 <https://github.com/numpy/numpy/pull/13396>`__: BUG: Fix structured_to_unstructured on single-field types (backport)
* `13549 <https://github.com/numpy/numpy/pull/13549>`__: BLD: Make CI pass again with pytest 4.5
* `13552 <https://github.com/numpy/numpy/pull/13552>`__: TST: Register markers in conftest.py.
* `13559 <https://github.com/numpy/numpy/pull/13559>`__: BUG: Removes ValueError for empty kwargs in arraymultiter_new
* `13560 <https://github.com/numpy/numpy/pull/13560>`__: BUG: Add TypeError to accepted exceptions in crackfortran.
* `13561 <https://github.com/numpy/numpy/pull/13561>`__: BUG: Handle subarrays in descr_to_dtype
* `13562 <https://github.com/numpy/numpy/pull/13562>`__: BUG: Protect generators from log(0.0)
* `13563 <https://github.com/numpy/numpy/pull/13563>`__: BUG: Always return views from structured_to_unstructured when...
* `13564 <https://github.com/numpy/numpy/pull/13564>`__: BUG: Catch stderr when checking compiler version
* `13565 <https://github.com/numpy/numpy/pull/13565>`__: BUG: longdouble(int) does not work
* `13587 <https://github.com/numpy/numpy/pull/13587>`__: BUG: distutils/system_info.py fix missing subprocess import (13523)
* `13620 <https://github.com/numpy/numpy/pull/13620>`__: BUG,DEP: Fix writeable flag setting for arrays without base
* `13641 <https://github.com/numpy/numpy/pull/13641>`__: MAINT: Prepare for the 1.16.4 release.
* `13644 <https://github.com/numpy/numpy/pull/13644>`__: BUG: special case object arrays when printing rel-, abs-error

1.16.3
==========================

The NumPy 1.16.3 release fixes bugs reported against the 1.16.2 release, and
also backports several enhancements from master that seem appropriate for a
release series that is the last to support Python 2.7. The wheels on PyPI are
linked with OpenBLAS v0.3.4+,  which should fix the known threading issues
found in previous OpenBLAS versions.

Downstream developers building this release should use Cython >= 0.29.2 and,
if using OpenBLAS, OpenBLAS > v0.3.4.

The most noticeable change in this release is that unpickling object arrays
when loading ``*.npy`` or ``*.npz`` files now requires an explicit opt-in.
This backwards incompatible change was made in response to
`CVE-2019-6446 <https://nvd.nist.gov/vuln/detail/CVE-2019-6446>`_.


Compatibility notes
===================

Unpickling while loading requires explicit opt-in
-------------------------------------------------
The functions ``np.load`` and ``np.lib.format.read_array`` take an
``allow_pickle`` keyword which now defaults to ``False`` in response to
`CVE-2019-6446 <https://nvd.nist.gov/vuln/detail/CVE-2019-6446>`_.


Improvements
============

Covariance in `random.mvnormal` cast to double
----------------------------------------------
This should make the tolerance used when checking the singular values of the
covariance matrix more meaningful.


Changes
=======

``__array_interface__`` offset now works as documented
------------------------------------------------------
The interface may use an ``offset`` value that was previously mistakenly
ignored.



1.16.2
==========================

NumPy 1.16.2 is a quick release fixing several problems encountered on Windows.
The Python versions supported are 2.7 and 3.5-3.7. The Windows problems
addressed are:

- DLL load problems for NumPy wheels on Windows,
- distutils command line parsing on Windows.

There is also a regression fix correcting signed zeros produced by divmod, see
below for details.

Downstream developers building this release should use Cython >= 0.29.2 and, if
using OpenBLAS, OpenBLAS > v0.3.4.

If you are installing using pip, you may encounter a problem with older
installed versions of NumPy that pip did not delete becoming mixed with the
current version, resulting in an ``ImportError``. That problem is particularly
common on Debian derived distributions due to a modified pip.  The fix is to
make sure all previous NumPy versions installed by pip have been removed. See
`12736 <https://github.com/numpy/numpy/issues/12736>`__ for discussion of the
issue.


Compatibility notes
===================

Signed zero when using divmod
-----------------------------
Starting in version 1.12.0, numpy incorrectly returned a negatively signed zero
when using the ``divmod`` and ``floor_divide`` functions when the result was
zero. For example::

>>> np.zeros(10)//1
array([-0., -0., -0., -0., -0., -0., -0., -0., -0., -0.])

With this release, the result is correctly returned as a positively signed
zero::

>>> np.zeros(10)//1
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])


Contributors
============

A total of 5 people contributed to this release.  People with a "+" by their
names contributed a patch for the first time.

* Charles Harris
* Eric Wieser
* Matti Picus
* Tyler Reddy
* Tony LaTorre +


Pull requests merged
====================

A total of 7 pull requests were merged for this release.

* `12909 <https://github.com/numpy/numpy/pull/12909>`__: TST: fix vmImage dispatch in Azure
* `12923 <https://github.com/numpy/numpy/pull/12923>`__: MAINT: remove complicated test of multiarray import failure mode
* `13020 <https://github.com/numpy/numpy/pull/13020>`__: BUG: fix signed zero behavior in npy_divmod
* `13026 <https://github.com/numpy/numpy/pull/13026>`__: MAINT: Add functions to parse shell-strings in the platform-native...
* `13028 <https://github.com/numpy/numpy/pull/13028>`__: BUG: Fix regression in parsing of F90 and F77 environment variables
* `13038 <https://github.com/numpy/numpy/pull/13038>`__: BUG: parse shell escaping in extra_compile_args and extra_link_args
* `13041 <https://github.com/numpy/numpy/pull/13041>`__: BLD: Windows absolute path DLL loading


1.16.1
==========================

The NumPy 1.16.1 release fixes bugs reported against the 1.16.0 release, and
also backports several enhancements from master that seem appropriate for a
release series that is the last to support Python 2.7. The wheels on PyPI are
linked with OpenBLAS v0.3.4+,  which should fix the known threading issues
found in previous OpenBLAS versions.

Downstream developers building this release should use Cython >= 0.29.2 and, if
using OpenBLAS, OpenBLAS > v0.3.4.

If you are installing using pip, you may encounter a problem with older
installed versions of NumPy that pip did not delete becoming mixed with the
current version, resulting in an ``ImportError``. That problem is particularly
common on Debian derived distributions due to a modified pip.  The fix is to
make sure all previous NumPy versions installed by pip have been removed. See
`12736 <https://github.com/numpy/numpy/issues/12736>`__ for discussion of the
issue. Note that previously this problem resulted in an ``AttributeError``.


Contributors
============

A total of 16 people contributed to this release.  People with a "+" by their
names contributed a patch for the first time.

* Antoine Pitrou
* Arcesio Castaneda Medina +
* Charles Harris
* Chris Markiewicz +
* Christoph Gohlke
* Christopher J. Markiewicz +
* Daniel Hrisca +
* EelcoPeacs +
* Eric Wieser
* Kevin Sheppard
* Matti Picus
* OBATA Akio +
* Ralf Gommers
* Sebastian Berg
* Stephan Hoyer
* Tyler Reddy


Enhancements
============

* `12767 <https://github.com/numpy/numpy/pull/12767>`__: ENH: add mm->q floordiv
* `12768 <https://github.com/numpy/numpy/pull/12768>`__: ENH: port np.core.overrides to C for speed
* `12769 <https://github.com/numpy/numpy/pull/12769>`__: ENH: Add np.ctypeslib.as_ctypes_type(dtype), improve `np.ctypeslib.as_ctypes`
* `12773 <https://github.com/numpy/numpy/pull/12773>`__: ENH: add "max difference" messages to np.testing.assert_array_equal...
* `12820 <https://github.com/numpy/numpy/pull/12820>`__: ENH: Add mm->qm divmod
* `12890 <https://github.com/numpy/numpy/pull/12890>`__: ENH: add _dtype_ctype to namespace for freeze analysis


Compatibility notes
===================

* The changed error message emitted by array comparison testing functions may
affect doctests. See below for details.

* Casting from double and single denormals to float16 has been corrected.  In
some rare cases, this may result in values being rounded up instead of down,
changing the last bit (ULP) of the result.


New Features
============

divmod operation is now supported for two ``timedelta64`` operands
------------------------------------------------------------------
The divmod operator now handles two ``np.timedelta64`` operands, with
type signature ``mm->qm``.


Improvements
============

Further improvements to ``ctypes`` support in ``np.ctypeslib``
--------------------------------------------------------------
A new `numpy.ctypeslib.as_ctypes_type` function has been added, which can be
used to convert a `dtype` into a best-guess `ctypes` type. Thanks to this
new function, `numpy.ctypeslib.as_ctypes` now supports a much wider range of
array types, including structures, booleans, and integers of non-native
endianness.

Array comparison assertions include maximum differences
-------------------------------------------------------
Error messages from array comparison tests such as
`np.testing.assert_allclose` now include "max absolute difference" and
"max relative difference," in addition to the previous "mismatch" percentage.
This information makes it easier to update absolute and relative error
tolerances.


Changes
=======

``timedelta64 % 0`` behavior adjusted to return ``NaT``
-------------------------------------------------------
The modulus operation with two ``np.timedelta64`` operands now returns
``NaT`` in the case of division by zero, rather than returning zero.





1.16.0
==========================

This NumPy release is the last one to support Python 2.7 and will be maintained
as a long term release with bug fixes until 2020.  Support for Python 3.4 has
been dropped; the supported Python versions are 2.7 and 3.5-3.7. The wheels on PyPI
are linked with OpenBLAS v0.3.4+,  which should fix the known threading issues
found in previous OpenBLAS versions.

Downstream developers building this release should use Cython >= 0.29 and, if
using OpenBLAS, OpenBLAS > v0.3.4.

This release has seen a lot of refactoring and features many bug fixes, improved
code organization, and better cross platform compatibility. Not all of these
improvements will be visible to users, but they should help make maintenance
easier going forward.


Highlights
==========

* Experimental (opt-in only) support for overriding numpy functions,
see ``__array_function__`` below.

* The ``matmul`` function is now a ufunc. This provides better
performance and allows overriding with ``__array_ufunc__``.

* Improved support for the ARM and POWER architectures.

* Improved support for AIX and PyPy.

* Improved interop with ctypes.

* Improved support for PEP 3118.



New functions
=============

* New functions added to the `numpy.lib.recfunctions` module to ease the
structured assignment changes:

 * ``assign_fields_by_name``
 * ``structured_to_unstructured``
 * ``unstructured_to_structured``
 * ``apply_along_fields``
 * ``require_fields``

See the user guide at <https://docs.scipy.org/doc/numpy/user/basics.rec.html>
for more info.


New deprecations
================

* The type dictionaries `numpy.core.typeNA` and `numpy.core.sctypeNA` are
deprecated. They were buggy and not documented and will be removed in the
1.18 release. Use `numpy.sctypeDict` instead.

* The `numpy.asscalar` function is deprecated. It is an alias to the more
powerful `numpy.ndarray.item`, not tested, and fails for scalars.

* The `numpy.set_array_ops` and `numpy.get_array_ops` functions are deprecated.
As part of `NEP 15`, they have been deprecated along with the C-API functions
:c:func:`PyArray_SetNumericOps` and :c:func:`PyArray_GetNumericOps`. Users
who wish to override the inner loop functions in built-in ufuncs should use
:c:func:`PyUFunc_ReplaceLoopBySignature`.

* The `numpy.unravel_index` keyword argument ``dims`` is deprecated, use
``shape`` instead.

* The `numpy.histogram` ``normed`` argument is deprecated.  It was deprecated
previously, but no warning was issued.

* The ``positive`` operator (``+``) applied to non-numerical arrays is
deprecated. See below for details.

* Passing an iterator to the stack functions is deprecated


Expired deprecations
====================

* NaT comparisons now return ``False`` without a warning, finishing a
deprecation cycle begun in NumPy 1.11.

* ``np.lib.function_base.unique`` was removed, finishing a deprecation cycle
begun in NumPy 1.4. Use `numpy.unique` instead.

* multi-field indexing now returns views instead of copies, finishing a
deprecation cycle begun in NumPy 1.7. The change was previously attempted in
NumPy 1.14 but reverted until now.

* ``np.PackageLoader`` and ``np.pkgload`` have been removed. These were
deprecated in 1.10, had no tests, and seem to no longer work in 1.15.


Future changes
==============

* NumPy 1.17 will drop support for Python 2.7.


Compatibility notes
===================

f2py script on Windows
----------------------
On Windows, the installed script for running f2py is now an ``.exe`` file
rather than a ``*.py`` file and should be run from the command line as ``f2py``
whenever the ``Scripts`` directory is in the path. Running ``f2py`` as a module
``python -m numpy.f2py [...]`` will work without path modification in any
version of NumPy.

NaT comparisons
---------------
Consistent with the behavior of NaN, all comparisons other than inequality
checks with datetime64 or timedelta64 NaT ("not-a-time") values now always
return ``False``, and inequality checks with NaT now always return ``True``.
This includes comparisons between NaT values. For compatibility with the
old behavior, use ``np.isnat`` to explicitly check for NaT or convert
datetime64/timedelta64 arrays with ``.astype(np.int64)`` before making
comparisons.

complex64/128 alignment has changed
-----------------------------------
The memory alignment of complex types is now the same as a C-struct composed of
two floating point values, while before it was equal to the size of the type.
For many users (for instance on x64/unix/gcc) this means that complex64 is now
4-byte aligned instead of 8-byte aligned. An important consequence is that
aligned structured dtypes may now have a different size. For instance,
``np.dtype('c8,u1', align=True)`` used to have an itemsize of 16 (on x64/gcc)
but now it is 12.

More in detail, the complex64 type now has the same alignment as a C-struct
``struct {float r, i;}``, according to the compiler used to compile numpy, and
similarly for the complex128 and complex256 types.

nd_grid __len__ removal
-----------------------
``len(np.mgrid)`` and ``len(np.ogrid)`` are now considered nonsensical
and raise a ``TypeError``.

``np.unravel_index`` now accepts ``shape`` keyword argument
-----------------------------------------------------------
Previously, only the ``dims`` keyword argument was accepted
for specification of the shape of the array to be used
for unraveling. ``dims`` remains supported, but is now deprecated.
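
For example::

 import numpy as np

 np.unravel_index([22, 41], shape=(7, 6))    # preferred spelling
 # np.unravel_index([22, 41], dims=(7, 6))   # still works, but emits a DeprecationWarning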

multi-field views return a view instead of a copy
-------------------------------------------------
Indexing a structured array with multiple fields, e.g., ``arr[['f1', 'f3']]``,
returns a view into the original array instead of a copy. The returned view
will often have extra padding bytes corresponding to intervening fields in the
original array, unlike before, which will affect code such as
``arr[['f1', 'f3']].view('float64')``. This change has been planned since numpy
1.7. Operations hitting this path have emitted ``FutureWarnings`` since then.
Additional ``FutureWarnings`` about this change were added in 1.12.

To help users update their code to account for these changes, a number of
functions have been added to the ``numpy.lib.recfunctions`` module which
safely allow such operations. For instance, the code above can be replaced
with ``structured_to_unstructured(arr[['f1', 'f3']], dtype='float64')``.
See the "accessing multiple fields" section of the
`user guide <https://docs.scipy.org/doc/numpy/user/basics.rec.html#accessing-multiple-fields>`__.
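
An illustrative sketch (field names and dtypes are arbitrary)::

 import numpy as np
 from numpy.lib.recfunctions import structured_to_unstructured

 arr = np.zeros(3, dtype=[('f1', 'f8'), ('f2', 'i4'), ('f3', 'f8')])
 v = arr[['f1', 'f3']]                                     # now a padded view, not a packed copy
 packed = structured_to_unstructured(v, dtype='float64')   # safe replacement for .view('float64')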


C API changes
=============

The :c:data:`NPY_API_VERSION` was incremented to 0x0000D, due to the addition
of:

* :c:member:`PyUFuncObject.core_dim_flags`
* :c:member:`PyUFuncObject.core_dim_sizes`
* :c:member:`PyUFuncObject.identity_value`
* :c:func:`PyUFunc_FromFuncAndDataAndSignatureAndIdentity`


New Features
============

Integrated squared error (ISE) estimator added to ``histogram``
---------------------------------------------------------------
This method (``bins='stone'``) for optimizing the bin number is a
generalization of Scott's rule. Scott's rule assumes the distribution
is approximately normal, while the ISE_ is a non-parametric method based on
cross-validation.

.. _ISE: https://en.wikipedia.org/wiki/Histogram#Minimizing_cross-validation_estimated_squared_error

``max_rows`` keyword added for ``np.loadtxt``
---------------------------------------------
New keyword ``max_rows`` in `numpy.loadtxt` sets the maximum rows of the
content to be read after ``skiprows``, as in `numpy.genfromtxt`.

modulus operator support added for ``np.timedelta64`` operands
--------------------------------------------------------------
The modulus (remainder) operator is now supported for two operands
of type ``np.timedelta64``. The operands may have different units
and the return value will match the type of the operands.


Improvements
============

no-copy pickling of numpy arrays
--------------------------------
Up to protocol 4, numpy array pickling created 2 spurious copies of the data
being serialized.  With pickle protocol 5, and the ``PickleBuffer`` API, a
large variety of numpy arrays can now be serialized without any copy using
out-of-band buffers, and with one less copy using in-band buffers. This
results, for large arrays, in an up to 66% drop in peak memory usage.

build shell independence
------------------------
NumPy builds should no longer interact with the host machine
shell directly. ``exec_command`` has been replaced with
``subprocess.check_output`` where appropriate.

`np.polynomial.Polynomial` classes render in LaTeX in Jupyter notebooks
-----------------------------------------------------------------------
When used in a front-end that supports it, `Polynomial` instances are now
rendered through LaTeX. The current format is experimental, and is subject to
change.

``randint`` and ``choice`` now work on empty distributions
----------------------------------------------------------
Even when no elements needed to be drawn, ``np.random.randint`` and
``np.random.choice`` raised an error when the arguments described an empty
distribution. This has been fixed so that e.g.
``np.random.choice([], 0) == np.array([], dtype=float64)``.

``linalg.lstsq``, ``linalg.qr``, and ``linalg.svd`` now work with empty arrays
------------------------------------------------------------------------------
Previously, a ``LinAlgError`` would be raised when empty matrices (with zero
rows and/or columns) were passed in. Now outputs of appropriate shapes are
returned.

Chain exceptions to give better error messages for invalid PEP3118 format strings
---------------------------------------------------------------------------------
This should help track down problems.

Einsum optimization path updates and efficiency improvements
------------------------------------------------------------
Einsum was synchronized with the current upstream work.

`numpy.angle` and `numpy.expand_dims` now work on ``ndarray`` subclasses
------------------------------------------------------------------------
In particular, they now work for masked arrays.

``NPY_NO_DEPRECATED_API`` compiler warning suppression
------------------------------------------------------
Setting ``NPY_NO_DEPRECATED_API`` to a value of 0 will suppress the current compiler
warnings when the deprecated numpy API is used.

``np.diff`` Added kwargs prepend and append
-------------------------------------------
New kwargs ``prepend`` and ``append``, allow for values to be inserted on
either end of the differences.  Similar to options for `ediff1d`. Now the
inverse of `cumsum` can be obtained easily via ``prepend=0``.
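
For example::

 import numpy as np

 a = np.array([3, 1, 4, 1, 5])
 c = np.cumsum(a)
 np.diff(c, prepend=0)    # array([3, 1, 4, 1, 5]) -- recovers a, inverting cumsum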

ARM support updated
-------------------
Support for ARM CPUs has been updated to accommodate 32 and 64 bit targets,
and also big and little endian byte ordering. AARCH32 memory alignment issues
have been addressed. CI testing has been expanded to include AARCH64 targets
via the services of shippable.com.

Appending to build flags
------------------------
`numpy.distutils` has always overridden rather than appended to `LDFLAGS` and
other similar such environment variables for compiling Fortran extensions.
Now, if the `NPY_DISTUTILS_APPEND_FLAGS` environment variable is set to 1, the
behavior will be appending.  This applies to: `LDFLAGS`, `F77FLAGS`,
`F90FLAGS`, `FREEFLAGS`, `FOPT`, `FDEBUG`, and `FFLAGS`.  See gh-11525 for more
details.

Generalized ufunc signatures now allow fixed-size dimensions
------------------------------------------------------------
By using a numerical value in the signature of a generalized ufunc, one can
indicate that the given function requires input or output to have dimensions
with the given size. E.g., the signature of a function that converts a polar
angle to a two-dimensional cartesian unit vector would be ``()->(2)``; that
for one that converts two spherical angles to a three-dimensional unit vector
would be ``(),()->(3)``; and that for the cross product of two
three-dimensional vectors would be ``(3),(3)->(3)``.

Note that to the elementary function these dimensions are not treated any
differently from variable ones indicated with a name starting with a letter;
the loop still is passed the corresponding size, but it can now count on that
size being equal to the fixed one given in the signature.

Generalized ufunc signatures now allow flexible dimensions
----------------------------------------------------------
Some functions, in particular numpy's implementation of ``@`` as ``matmul``,
are very similar to generalized ufuncs in that they operate over core
dimensions, but one could not present them as such because they were able to
deal with inputs in which a dimension is missing. To support this, it is now
allowed to postfix a dimension name with a question mark to indicate that the
dimension does not necessarily have to be present.

With this addition, the signature for ``matmul`` can be expressed as
``(m?,n),(n,p?)->(m?,p?)``.  This indicates that if, e.g., the second operand
has only one dimension, for the purposes of the elementary function it will be
treated as if that input has core shape ``(n, 1)``, and the output has the
corresponding core shape of ``(m, 1)``. The actual output array, however, has
the flexible dimension removed, i.e., it will have shape ``(..., m)``.
Similarly, if both arguments have only a single dimension, the inputs will be
presented as having shapes ``(1, n)`` and ``(n, 1)`` to the elementary
function, and the output as ``(1, 1)``, while the actual output array returned
will have shape ``()``. In this way, the signature allows one to use a
single elementary function for four related but different signatures,
``(m,n),(n,p)->(m,p)``, ``(n),(n,p)->(p)``, ``(m,n),(n)->(m)`` and
``(n),(n)->()``.
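
A quick illustration of these four cases with ``matmul`` (shapes chosen
arbitrarily)::

    import numpy as np

    A = np.ones((4, 3))
    B = np.ones((3, 5))
    v = np.ones(3)

    np.matmul(A, B).shape   # (4, 5)  -- (m,n),(n,p)->(m,p)
    np.matmul(v, B).shape   # (5,)    -- (n),(n,p)->(p)
    np.matmul(A, v).shape   # (4,)    -- (m,n),(n)->(m)
    np.matmul(v, v).shape   # ()      -- (n),(n)->()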

``np.clip`` and the ``clip`` method check for memory overlap
------------------------------------------------------------
The ``out`` argument to these functions is now always tested for memory overlap
to avoid corrupted results when memory overlap occurs.

New value ``unscaled`` for option ``cov`` in ``np.polyfit``
-----------------------------------------------------------
A further possible value has been added to the ``cov`` parameter of the
``np.polyfit`` function. With ``cov='unscaled'`` the scaling of the covariance
matrix is disabled completely (similar to setting ``absolute_sigma=True`` in
``scipy.optimize.curve_fit``). This is useful, for example, when the weights
are given by 1/sigma, with sigma being the (known) standard errors of
(Gaussian distributed) data points, in which case the unscaled matrix is
already a correct estimate for the covariance matrix.
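
A short sketch of the new option (the data and ``sigma`` values below are made
up for illustration)::

    import numpy as np

    x = np.linspace(0, 1, 10)
    y = 2.0 * x + 1.0 + np.random.normal(scale=0.1, size=x.size)
    sigma = np.full(x.size, 0.1)       # known standard errors

    coeffs, cov = np.polyfit(x, y, deg=1, w=1/sigma, cov='unscaled')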

Detailed docstrings for scalar numeric types
--------------------------------------------
The ``help`` function, when applied to numeric types such as `numpy.intc`,
`numpy.int_`, and `numpy.longlong`, now lists all of the aliased names for that
type, distinguishing between platform-dependent and platform-independent
aliases.

``__module__`` attribute now points to public modules
-----------------------------------------------------
The ``__module__`` attribute on most NumPy functions has been updated to refer
to the preferred public module from which to access a function, rather than
the module in which the function happens to be defined. This produces more
informative displays for functions in tools such as IPython, e.g., instead of
``<function 'numpy.core.fromnumeric.sum'>`` you now see
``<function 'numpy.sum'>``.
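
For example (illustrative)::

    >>> import numpy as np
    >>> np.sum.__module__
    'numpy'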

Large allocations marked as suitable for transparent hugepages
--------------------------------------------------------------
On systems that support transparent hugepages via the ``madvise`` system call,
numpy now marks large memory allocations as eligible to be backed by hugepages.
This reduces page fault overhead and can significantly improve performance in
some fault-heavy cases. On Linux, the setting for huge pages to be used,
`/sys/kernel/mm/transparent_hugepage/enabled`, must be at least `madvise`.
Systems which already have it set to `always` will not see much difference, as
the kernel will automatically use huge pages where appropriate.

Users of very old Linux kernels (~3.x and older) should make sure that
`/sys/kernel/mm/transparent_hugepage/defrag` is not set to `always`, to avoid
performance problems due to concurrency issues in memory defragmentation.

Alpine Linux (and other musl c library distros) support
-------------------------------------------------------
We now default to using `fenv.h` for floating point status error reporting.
Previously we had a broken default that sometimes would not report underflow,
overflow, and invalid floating point operations. Now we can support non-glibc
distributions like Alpine Linux as long as they ship `fenv.h`.

Speedup ``np.block`` for large arrays
-------------------------------------
Large arrays (greater than ``512 * 512``) now use a blocking algorithm based on
copying the data directly into the appropriate slice of the resulting array.
This results in significant speedups for these large arrays, particularly for
arrays being blocked along more than 2 dimensions.

``arr.ctypes.data_as(...)`` holds a reference to arr
------------------------------------------------------
Previously the caller was responsible for keeping the array alive for the
lifetime of the pointer.

Speedup ``np.take`` for read-only arrays
----------------------------------------
The implementation of ``np.take`` no longer makes an unnecessary copy of the
source array when its ``writeable`` flag is set to ``False``.

Support path-like objects for more functions
--------------------------------------------
The ``np.core.records.fromfile`` function now supports ``pathlib.Path``
and other path-like objects in addition to a file object. Furthermore, the
``np.load`` function now also supports path-like objects when using memory
mapping (``mmap_mode`` keyword argument).
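
For instance (the filename below is hypothetical)::

    import pathlib
    import numpy as np

    path = pathlib.Path("data.npy")
    np.save(path, np.arange(10))
    arr = np.load(path, mmap_mode="r")   # path-like objects now work with mmap_mode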

Better behaviour of ufunc identities during reductions
------------------------------------------------------
Universal functions have an ``.identity`` which is used when ``.reduce`` is
called on an empty axis.

As of this release, the logical binary ufuncs `logical_and`, `logical_or`,
and `logical_xor` now have identities of type `bool`, where previously they
were of type `int`. This restores the 1.14 behavior of getting ``bool`` values
when reducing empty object arrays with these ufuncs, while also keeping the
1.15 behavior of getting ``int`` values when reducing empty object arrays with
arithmetic ufuncs like ``add`` and ``multiply``.

Additionally, `logaddexp` now has an identity of ``-inf``, allowing it to be
called on empty sequences, where previously it could not be.
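
A small illustration of the new identities (expected results shown as
comments)::

    import numpy as np

    np.logical_and.reduce(np.array([], dtype=object))   # True  (a bool, as in 1.14)
    np.add.reduce(np.array([], dtype=object))           # 0     (an int, as in 1.15)
    np.logaddexp.reduce([])                             # -inf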

This is possible thanks to the new
:c:func:`PyUFunc_FromFuncAndDataAndSignatureAndIdentity`, which now allows
arbitrary values to be used as identities.

Improved conversion from ctypes objects
---------------------------------------
Numpy has always supported taking a value or type from ``ctypes`` and
converting it into an array or dtype, but only behaved correctly for simpler
types. As of this release, this caveat is lifted - now:

* The ``_pack_`` attribute of ``ctypes.Structure``, used to emulate C's
  ``__attribute__((packed))``, is respected.
* Endianness of all ctypes objects is preserved.
* ``ctypes.Union`` is supported.
* Non-representable constructs raise exceptions, rather than producing
  dangerously incorrect results:

  * Bitfields are no longer interpreted as sub-arrays.
  * Pointers are no longer replaced with the type that they point to.
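
A small sketch of the improved conversion (the structure below is
hypothetical)::

    import ctypes
    import numpy as np

    class Point(ctypes.Structure):
        _pack_ = 1                       # packed layout is now respected
        _fields_ = [("x", ctypes.c_uint8),
                    ("y", ctypes.c_uint32)]

    dt = np.dtype(Point)
    dt.itemsize                          # 5 with _pack_ = 1, rather than a padded 8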

A new ``ndpointer.contents`` member
-----------------------------------
This matches the ``.contents`` member of normal ctypes arrays, and can be used
to construct an ``np.array`` around the pointer's contents.  This replaces
``np.array(some_nd_pointer)``, which stopped working in 1.15.  As a side effect
of this change, ``ndpointer`` now supports dtypes with overlapping fields and
padding.

``matmul`` is now a ``ufunc``
-----------------------------
`numpy.matmul` is now a ufunc which means that both the function and the
``__matmul__`` operator can now be overridden by ``__array_ufunc__``. Its
implementation has also changed. It uses the same BLAS routines as
`numpy.dot`, ensuring its performance is similar for large matrices.

Start and stop arrays for ``linspace``, ``logspace`` and ``geomspace``
----------------------------------------------------------------------
These functions used to be limited to scalar stop and start values, but can
now take arrays, which will be properly broadcast and result in an output
which has one axis prepended.  This can be used, e.g., to obtain linearly
interpolated points between sets of points.
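
A brief illustration with array-valued start and stop (numbers chosen
arbitrarily)::

    import numpy as np

    pts = np.linspace([0, 100], [1, 200], num=5)
    pts.shape        # (5, 2) -- one axis prepended
    pts[:, 0]        # equivalent to np.linspace(0, 1, 5)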

CI extended with additional services
------------------------------------
We now use additional free CI services, thanks to the companies that provide:

* Codecoverage testing via codecov.io
* Arm testing via shippable.com
* Additional test runs on azure pipelines

These are in addition to our continued use of travis, appveyor (for wheels) and
LGTM.


Changes
=======

Comparison ufuncs will now error rather than return NotImplemented
------------------------------------------------------------------
Previously, comparison ufuncs such as ``np.equal`` would return
`NotImplemented` if their arguments had structured dtypes; like all other
ufuncs, they now raise an error in that case instead.

@pyup-bot pyup-bot mentioned this pull request Sep 7, 2019
@coveralls

Coverage Status

Coverage remained the same at 90.291% when pulling 66c9f2b on pyup-update-numpy-1.15.4-to-1.17.2 into bc783a4 on develop.

@pyup-bot
Collaborator Author

Closing this in favor of #92

@pyup-bot pyup-bot closed this Oct 17, 2019
@boazmohar boazmohar deleted the pyup-update-numpy-1.15.4-to-1.17.2 branch October 17, 2019 17:32