Update dependency numpy to v1.20.1 - autoclosed #14
This PR contains the following updates:
numpy: `==1.19.5` -> `==1.20.1`
Release Notes
numpy/numpy
v1.20.1
Compare Source
NumPy 1.20.1 Release Notes
NumPy 1.20.1 is a rapid bugfix release fixing several bugs and
regressions reported after the 1.20.0 release.
Highlights
- The distutils bug that caused problems with downstream projects is fixed.
- The `random.shuffle` regression is fixed.
Contributors
A total of 8 people contributed to this release. People with a "+" by
their names contributed a patch for the first time.
Pull requests merged
A total of 15 pull requests were merged for this release.
v1.20.0
Compare Source
NumPy 1.20.0 Release Notes
This NumPy release is the largest made to date, with some 684 PRs
contributed by 184 people. See the list of highlights below for more
details. The Python versions supported for this release are 3.7-3.9;
support for Python 3.6 has been dropped. Highlights are:
- Annotations for NumPy functions. This work is ongoing and
  improvements can be expected pending feedback from users.
- Wider use of SIMD to increase execution speed of ufuncs. Much work
  has been done in introducing universal functions that will ease use
  of modern features across different hardware platforms. This work is
  ongoing.
- Preliminary work in changing the dtype and casting implementations
  in order to provide an easier path to extending dtypes. This work is
  ongoing but enough has been done to allow experimentation and
  feedback.
- Extensive documentation improvements. This work is ongoing and part
  of the larger project to improve NumPy's online presence and
  usefulness to new users.
- Further cleanups related to removing Python 2.7. This improves code
  readability and removes technical debt.
New functions
The random.Generator class has a new `permuted` function
The new function differs from `shuffle` and `permutation` in that the
subarrays indexed by an axis are permuted rather than the axis being
treated as a separate 1-D array for every combination of the other
indexes. For example, it is now possible to permute the rows or columns
of a 2-D array.
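A minimal sketch of the difference (assuming NumPy >= 1.20, where `permuted` was introduced):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
x = np.arange(12).reshape(3, 4)

# Shuffle the values within each row independently; every row keeps
# exactly its own values, just in a new order.
y = rng.permuted(x, axis=1)
```

Since each row of `x` is sorted, `np.sort(y, axis=1)` recovers `x` exactly, which would not hold if values had moved between rows.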
(gh-15121)
`sliding_window_view` provides a sliding window view for numpy arrays
`numpy.lib.stride_tricks.sliding_window_view` constructs views on numpy
arrays that offer a sliding or moving window access to the array. This
allows for the simple implementation of certain algorithms, such as
running means.
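A running mean is a small sketch of what this enables (assuming NumPy >= 1.20):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.arange(6.0)                    # [0, 1, 2, 3, 4, 5]

# A view of shape (4, 3): each row is a window of 3 consecutive values.
windows = sliding_window_view(a, window_shape=3)

# Averaging over the last axis gives the running mean: [1, 2, 3, 4]
running_mean = windows.mean(axis=-1)
```

Because `windows` is a view, no data is copied until the reduction runs.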
(gh-17394)
`numpy.broadcast_shapes` is a new user-facing function
`numpy.broadcast_shapes` gets the resulting shape from broadcasting the
given shape tuples against each other.
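For example (assuming NumPy >= 1.20):

```python
import numpy as np

# The shape that results from broadcasting (1, 3) against (2, 1):
shape = np.broadcast_shapes((1, 3), (2, 1))        # (2, 3)

# More than two shapes can be combined at once.
combined = np.broadcast_shapes((6, 7), (5, 6, 1), (7,))
```

This works on plain shape tuples, so no arrays need to be allocated just to compute a broadcast result shape.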
(gh-17535)
Deprecations
Using the aliases of builtin types like `np.int` is deprecated
For a long time, `np.int` has been an alias of the builtin `int`. This
is repeatedly a cause of confusion for newcomers, and existed mainly
for historic reasons.
These aliases have been deprecated. The table below shows the full list
of deprecated aliases, along with their exact meaning. Replacing uses
of items in the first column with the contents of the second column
will work identically and silence the deprecation warning.
The third column lists alternative NumPy names which may occasionally
be preferential. See also basics.types for additional details.
| Deprecated name | Identical to | NumPy scalar type names |
| --- | --- | --- |
| `numpy.bool` | `bool` | `numpy.bool_` |
| `numpy.int` | `int` | `numpy.int_` (default), `numpy.int64`, or `numpy.int32` |
| `numpy.float` | `float` | `numpy.float64`, `numpy.float_`, `numpy.double` (equivalent) |
| `numpy.complex` | `complex` | `numpy.complex128`, `numpy.complex_`, `numpy.cdouble` (equivalent) |
| `numpy.object` | `object` | `numpy.object_` |
| `numpy.str` | `str` | `numpy.str_` |
| `numpy.long` | `int` | `numpy.int_` (C `long`), `numpy.longlong` (largest integer type) |
| `numpy.unicode` | `str` | `numpy.unicode_` |
To give a clear guideline for the vast majority of cases, for the types
`bool`, `object`, `str` (and `unicode`) using the plain version is
shorter and clear, and generally a good replacement. For `float` and
`complex` you can use `float64` and `complex128` if you wish to be more
explicit about the precision.
For `np.int` a direct replacement with `np.int_` or `int` is also good
and will not change behavior, but the precision will continue to depend
on the computer and operating system. If you want to be more explicit
and review the current use, you have the following alternatives:
- `np.int64` or `np.int32` to specify the precision exactly. This
  ensures that results cannot depend on the computer or operating
  system.
- `np.int_` or `int` (the default), but be aware that it depends on
  the computer and operating system.
- C-like: `np.cint` (int), `np.int_` (long), `np.longlong`.
- `np.intp`, which is 32bit on 32bit machines and 64bit on 64bit
  machines. This can be the best type to use for indexing.
When used with `np.dtype(...)` or `dtype=...`, changing it to the NumPy
name as mentioned above will have no effect on the output. If used as a
scalar, e.g. `np.float(123)`, changing it can subtly change the result.
In this case, the Python version `float(123)` or `int(12.)` is normally
preferable, although the NumPy version may be useful for consistency
with NumPy arrays (for example, NumPy behaves differently for things
like division by zero).
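A minimal sketch of the replacements (the deprecated aliases themselves are avoided here, since they were later removed entirely):

```python
import numpy as np

# The builtin types and the explicit NumPy scalar types resolve to the
# same dtype, so either replacement silences the deprecation warning.
assert np.dtype(float) == np.dtype(np.float64)
assert np.dtype(complex) == np.dtype(np.complex128)

# For integers the default precision is platform dependent;
# np.int64 / np.int32 pin it down explicitly.
a = np.zeros(3, dtype=np.int64)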
(gh-14882)
Passing `shape=None` to functions with a non-optional shape argument is deprecated
Previously, this was an alias for passing `shape=()`. This deprecation
is emitted by `PyArray_IntpConverter` in the C API. If your API is
intended to support passing `None`, then you should check for `None`
prior to invoking the converter, so as to be able to distinguish `None`
and `()`.
(gh-15886)
Indexing errors will be reported even when index result is empty
In the future, NumPy will raise an IndexError when an integer array
index contains out-of-bound values even if a non-indexed dimension is
of length 0. This will now emit a DeprecationWarning. This can happen
when the array is previously empty, or an empty slice is involved.
Previously the non-empty index `[20]` was not checked for correctness.
It will now be checked, causing a deprecation warning which will be
turned into an error. This also applies to assignments.
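A sketch of the situation (on NumPy 1.20 this emits the DeprecationWarning; on later versions the deprecation has expired and it raises `IndexError` outright):

```python
import warnings
import numpy as np

arr = np.zeros((0, 5))          # the first dimension is empty
flagged = False
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        arr[:, [20]]            # index 20 is out of bounds for axis 1 (size 5)
    except (DeprecationWarning, IndexError):
        flagged = True          # warned (1.20) or raised (later versions)
```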
(gh-15900)
Inexact matches for `mode` and `searchside` are deprecated
Inexact and case-insensitive matches for `mode` and `searchside` were
valid inputs earlier and will give a DeprecationWarning now. For
example, the following usages are now deprecated and will give a
DeprecationWarning:
- `mode`: inexact match
- `searchside`: inexact match
(gh-16056)
Deprecation of `numpy.dual`
The module `numpy.dual` is deprecated. Instead of importing functions
from `numpy.dual`, the functions should be imported directly from NumPy
or SciPy.
(gh-16156)
`outer` and `ufunc.outer` deprecated for matrix
Using `np.matrix` with `outer` or generic ufunc outer calls such as
`numpy.add.outer` is deprecated. Previously, matrix was converted to an
array here. This will not be done in the future, requiring a manual
conversion to arrays.
(gh-16232)
Further numeric-style types deprecated
The remaining numeric-style type codes `Bytes0`, `Str0`, `Uint32`,
`Uint64`, and `Datetime64` have been deprecated. The lower-case
variants should be used instead. For bytes and strings, `"S"` and `"U"`
are further alternatives.
(gh-16554)
The `ndincr` method of `ndindex` is deprecated
The documentation has warned against using this function since NumPy
1.8. Use `next(it)` instead of `it.ndincr()`.
(gh-17233)
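The replacement uses the standard iterator protocol, for example:

```python
import numpy as np

# Iterate over all index tuples of a 2x2 shape via next(it) / iteration,
# instead of the deprecated it.ndincr().
it = np.ndindex(2, 2)
first = next(it)        # (0, 0)
rest = list(it)         # [(0, 1), (1, 0), (1, 1)]
```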
ArrayLike objects which do not define `__len__` and `__getitem__`
Objects which define one of the protocols `__array__`,
`__array_interface__`, or `__array_struct__` but are not sequences
(usually defined by having a `__len__` and `__getitem__`) will behave
differently during array-coercion in the future.
When nested inside sequences, such as `np.array([array_like])`, these
were handled as a single Python object rather than an array. In the
future they will behave identically to
`np.array([np.array(array_like)])`.
This change should only have an effect if `np.array(array_like)` is not
0-D. The solution to this warning may depend on the object: it can
choose to expose the sequence protocol to opt-in to the new behaviour.
For example, `shapely` will allow conversion to an array-like using
`line.coords` rather than `np.asarray(line)`. Users may work around the
warning, or use the new convention when it becomes available.
Unfortunately, using the new behaviour can only be achieved by calling
`np.array(array_like)`.
If you wish to ensure that the old behaviour remains unchanged, please
create an object array and then fill it explicitly, for example by
creating `arr = np.empty(1, dtype=object)` and assigning
`arr[0] = array_like`.
This will ensure NumPy knows not to enter the array-like and to use it
as an object instead.
(gh-17973)
Future Changes
Arrays cannot be created using subarray dtypes
Array creation and casting using `np.array(arr, dtype)` and
`arr.astype(dtype)` will use different logic when `dtype` is a subarray
dtype such as `np.dtype("(2)i,")`. Currently, the result is filled by
broadcasting `arr` against the full result shape, which uses incorrect
broadcasting (and often leads to an error). In the future, each element
will instead be cast individually; casting or assigning element by
element can already be used to opt-in to the new behaviour.
This change does not affect `np.array(list, dtype="(2)i,")` unless the
`list` itself includes at least one array. In particular, the behaviour
is unchanged for a list of tuples.
(gh-17596)
Expired deprecations
- The deprecation of numeric-style type-codes `np.dtype("Complex64")`
  (with upper case spelling) is expired. `"Complex64"` corresponded to
  `"complex128"` and `"Complex32"` corresponded to `"complex64"`.
- The deprecation of `np.sctypeNA` and `np.typeNA` is expired. Both
  have been removed from the public API. Use `np.typeDict` instead.
  (gh-16554)
- The 14-year deprecation of `np.ctypeslib.ctypes_load_library` is
  expired. Use `numpy.ctypeslib.load_library` instead, which is
  identical.
  (gh-17116)
Financial functions removed
In accordance with NEP 32, the financial functions are removed from
NumPy 1.20. The functions that have been removed are `fv`, `ipmt`,
`irr`, `mirr`, `nper`, `npv`, `pmt`, `ppmt`, `pv`, and `rate`. These
functions are available in the numpy_financial library.
(gh-17067)
Compatibility notes
`isinstance(dtype, np.dtype)` and not `type(dtype) is np.dtype`
NumPy dtypes are not direct instances of `np.dtype` anymore. Code that
may have used `type(dtype) is np.dtype` will always return `False` and
must be updated to use the correct version,
`isinstance(dtype, np.dtype)`.
This change also affects the C-side macro `PyArray_DescrCheck` if
compiled against a NumPy older than 1.16.6. If code uses this macro and
wishes to compile against an older version of NumPy, it must replace
the macro (see also the C API changes section).
Same kind casting in concatenate with `axis=None`
When `numpy.concatenate` is called with `axis=None`, the flattened
arrays were cast with `unsafe`. Any other axis choice uses "same kind".
That different default has been deprecated and "same kind" casting will
be used instead. The new `casting` keyword argument can be used to
retain the old behaviour.
(gh-16134)
NumPy scalars are cast when assigned to arrays
When creating or assigning to arrays, in all relevant cases NumPy
scalars will now be cast identically to NumPy arrays. In particular,
this changes the behaviour in some cases which previously raised an
error: such an assignment will succeed and return an undefined result
(usually the smallest possible integer). At this time, NumPy retains
the old behaviour for a few special cases.
The above changes do not affect Python scalars: assigning `np.nan`
remains unaffected (`np.nan` is a Python `float`, not a NumPy one).
Unlike signed integers, unsigned integers do not retain this special
case, since they always behaved more like casting; assigning an
out-of-range NumPy scalar to an unsigned integer array stops raising an
error.
To avoid backward compatibility issues, at this time assignment from a
`datetime64` scalar to strings of too short length remains supported.
This means that `np.asarray(np.datetime64("2020-10-10"), dtype="S5")`
succeeds now, when it failed before. In the long term this may be
deprecated or the unsafe cast may be allowed generally to make
assignment of arrays and scalars behave consistently.
Array coercion changes when strings and other types are mixed
When strings and other types are mixed in one array, the results will
change, which may lead to string dtypes with longer strings in some
cases. In particular, if `dtype="S"` is not provided, any numerical
value will lead to a string result long enough to hold all possible
numerical values (e.g. "S32" for floats). Note that you should always
provide `dtype="S"` when converting non-strings to strings.
If `dtype="S"` is provided, the results will be largely identical to
before, but NumPy scalars (not a Python float like `1.0`) will still
enforce a uniform string length. Previously, NumPy scalars and Python
scalars gave the same result here.
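A sketch of the uniform-length behaviour (assuming NumPy >= 1.20): a NumPy float scalar reserves room for any possible float value, while a plain bytes element keeps its own length:

```python
import numpy as np

# A NumPy scalar enforces a string length long enough for any float.
a = np.array([np.float64(3.0)], dtype="S")

# A plain bytes element only needs its own length.
b = np.array([b"3.0"], dtype="S")
```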
Array coercion restructure
Array coercion has been restructured. In general, this should not
affect users. In extremely rare corner cases where array-likes are
nested, things will now be more consistent. This can subtly change
output for some badly defined array-likes. One example of this is
array-like objects which are not also sequences of matching shape. In
NumPy 1.20, a warning will be given when an array-like is not also a
sequence (but behaviour remains identical, see deprecations). If an
array-like is also a sequence (defines `__getitem__` and `__len__`),
NumPy will now only use the result given by `__array__`,
`__array_interface__`, or `__array_struct__`. This will result in
differences when the (nested) sequence describes a different shape.
(gh-16200)
Writing to the result of `numpy.broadcast_arrays` will export readonly buffers
In NumPy 1.17, `numpy.broadcast_arrays` started warning when the
resulting array was written to. This warning was skipped when the array
was used through the buffer interface (e.g. `memoryview(arr)`). The
same thing will now occur for the two protocols `__array_interface__`
and `__array_struct__`, returning read-only buffers instead of giving a
warning.
(gh-16350)
Numeric-style type names have been removed from type dictionaries
To stay in sync with the deprecation for `np.dtype("Complex64")` and
other numeric-style (capital case) types, these were removed from
`np.sctypeDict` and `np.typeDict`. You should use the lower-case
versions instead. Note that `"Complex64"` corresponds to `"complex128"`
and `"Complex32"` corresponds to `"complex64"`. The NumPy-style (new)
versions denote the full size and not the size of the real/imaginary
part.
(gh-16554)
The `operator.concat` function now raises TypeError for array arguments
The previous behavior was to fall back to addition and add the two
arrays, which was thought to be unexpected behavior for a concatenation
function.
(gh-16570)
`nickname` attribute removed from ABCPolyBase
An abstract property `nickname` has been removed from `ABCPolyBase` as
it was no longer used in the derived convenience classes. This may
affect users who have derived classes from `ABCPolyBase` and overridden
the methods for representation and display, e.g. `__str__`, `__repr__`,
`_repr_latex_`, etc.
(gh-16589)
`float->timedelta` and `uint64->timedelta` promotion will raise a TypeError
Float and timedelta promotion consistently raises a TypeError.
`np.promote_types("float32", "m8")` aligns with
`np.promote_types("m8", "float32")` now and both raise a TypeError.
Previously, `np.promote_types("float32", "m8")` returned `"m8"`, which
was considered a bug.
Uint64 and timedelta promotion consistently raises a TypeError.
`np.promote_types("uint64", "m8")` aligns with
`np.promote_types("m8", "uint64")` now and both raise a TypeError.
Previously, `np.promote_types("uint64", "m8")` returned `"m8"`, which
was considered a bug.
(gh-16592)
`numpy.genfromtxt` now correctly unpacks structured arrays
Previously, `numpy.genfromtxt` failed to unpack if it was called with
`unpack=True` and a structured datatype was passed to the `dtype`
argument (or `dtype=None` was passed and a structured datatype was
inferred). Structured arrays will now correctly unpack into a list of
arrays, one for each column.
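A minimal sketch (file contents inlined via `StringIO`; `dtype=None` infers a structured dtype because the columns have mixed types):

```python
from io import StringIO
import numpy as np

data = StringIO("1 alpha\n2 beta")

# With a structured (inferred) dtype, unpack=True now yields one array
# per column instead of failing.
ids, names = np.genfromtxt(data, dtype=None, unpack=True, encoding="utf-8")
```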
(gh-16650)
`mgrid`, `r_`, etc. consistently return correct outputs for non-default precision input
Previously,
`np.mgrid[np.float32(0.1):np.float32(0.35):np.float32(0.1),]` and
`np.r_[0:10:np.complex64(3j)]` failed to return meaningful output. This
bug potentially affects `numpy.mgrid`, `numpy.ogrid`, `numpy.r_`, and
`numpy.c_` when an input with a dtype other than the default `float64`
and `complex128` (and equivalent Python types) was used. The methods
have been fixed to handle varying precision correctly.
(gh-16815)
Boolean array indices with mismatching shapes now properly give IndexError
Previously, if a boolean array index matched the size of the indexed
array but not the shape, it was incorrectly allowed in some cases. In
other cases, it gave an error, but the error was incorrectly a
ValueError with a message about broadcasting instead of the correct
IndexError.
For example, some cases used to incorrectly give
`ValueError: operands could not be broadcast together with shapes (2,2) (1,4)`,
while others used to incorrectly return `array([], dtype=float64)`.
Both now correctly give
`IndexError: boolean index did not match indexed array along dimension 0; dimension is 2 but corresponding boolean dimension is 1`.
(gh-17010)
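A sketch of such a mismatch (assuming NumPy >= 1.20): the mask has the right number of elements but the wrong shape:

```python
import numpy as np

a = np.arange(4).reshape(2, 2)
# The mask has 4 elements (matching a.size) but shape (1, 4), not (2, 2).
mask = np.array([[True, False, False, False]])

raised = False
try:
    a[mask]
except IndexError:
    raised = True   # consistently an IndexError now, never a ValueError
```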
Casting errors interrupt iteration
When iterating while casting values, an error may stop the iteration
earlier than before. In any case, a failed casting operation always
returned undefined, partial results; those may now be even more
undefined and partial. For users of the `NpyIter` C-API, such cast
errors will now cause the `iternext()` function to return 0 and thus
abort iteration. Currently, there is no API to detect such an error
directly. It is necessary to check `PyErr_Occurred()`, which may be
problematic in combination with `NpyIter_Reset`. These issues always
existed, but new API could be added if required by users.
(gh-17029)
f2py generated code may return unicode instead of byte strings
Some byte strings previously returned by f2py generated code may now be
unicode strings. This results from the ongoing Python2 -> Python3
cleanup.
(gh-17068)
The first element of the `__array_interface__["data"]` tuple must be an integer
This has been the documented interface for many years, but there was
still code that would accept a byte string representation of the
pointer address. That code has been removed; passing the address as a
byte string will now raise an error.
(gh-17241)
poly1d respects the dtype of all-zero argument
Previously, constructing an instance of `poly1d` with all-zero
coefficients would cast the coefficients to `np.float64`. This affected
the output dtype of methods which construct `poly1d` instances
internally, such as `np.polymul`.
(gh-17577)
The numpy.i file for swig is Python 3 only.
Uses of Python 2.7 C-API functions have been updated to Python 3 only.
Users who need the old version should take it from an older version of
NumPy.
(gh-17580)
Void dtype discovery in `np.array`
In calls using `np.array(..., dtype="V")`, `arr.astype("V")`, and
similar, a TypeError will now be correctly raised unless all elements
have the identical void length. An example for this is
`np.array([b"1", b"12"], dtype="V")`, which previously returned an
array with dtype `"V2"` that cannot represent `b"1"` faithfully.
(gh-17706)
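A sketch of the new behaviour (assuming NumPy >= 1.20):

```python
import numpy as np

# Elements with differing void lengths now raise instead of silently
# truncating to a common "V2" dtype.
raised = False
try:
    np.array([b"1", b"12"], dtype="V")
except TypeError:
    raised = True
```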
C API changes
The `PyArray_DescrCheck` macro is modified
The `PyArray_DescrCheck` macro has been updated since NumPy 1.16.6 to
check against `PyArrayDescr_Type` directly.
Starting with NumPy 1.20, code that is compiled against an earlier
version will be API-incompatible with NumPy 1.20. The fix is to either
compile against 1.16.6 (if the NumPy 1.16 release is the oldest release
you wish to support), or to manually inline the macro by replacing it
with its new definition, which is compatible with all NumPy versions.
Size of `np.ndarray` and `np.void_` changed
The sizes of the `PyArrayObject` and `PyVoidScalarObject` structures
have changed. The corresponding size-constant header definition has
been removed, since the size must not be considered a compile-time
constant: it will change for different runtime versions of NumPy.
The most likely relevant use is potential subclasses written in C,
which will have to be recompiled and should be updated. Please see the
documentation for `PyArrayObject` for more details, and contact the
NumPy developers if you are affected by this change.
NumPy will attempt to give a graceful error, but a program expecting a
fixed structure size may have undefined behaviour and will likely
crash.
(gh-16938)
New Features
`where` keyword argument for `numpy.all` and `numpy.any` functions
The keyword argument `where` is added and allows one to only consider
specified elements or subaxes from an array in the Boolean evaluation
of `all` and `any`. This new keyword is available to the functions
`all` and `any` both via `numpy` directly and in the methods of
`numpy.ndarray`.
Any broadcastable Boolean array or a scalar can be set as `where`. It
defaults to `True`, evaluating the functions for all elements in an
array if `where` is not set by the user. Examples are given in the
documentation of the functions.
`where` keyword argument for `numpy` functions `mean`, `std`, `var`
The keyword argument `where` is added and allows one to limit the scope
of the calculation of `mean`, `std` and `var` to only a subset of
elements. It is available both via `numpy` directly and in the methods
of `numpy.ndarray`.
Any broadcastable Boolean array or a scalar can be set as `where`. It
defaults to `True`, evaluating the functions for all elements in an
array if `where` is not set by the user. Examples are given in the
documentation of the functions.
(gh-15852)
`norm=backward`, `forward` keyword options for `numpy.fft` functions
The keyword argument option `norm=backward` is added as an alias for
`None` and acts as the default option; using it leaves the direct
transforms unscaled and the inverse transforms scaled by `1/n`.
The new keyword argument option `norm=forward` has the direct
transforms scaled by `1/n` and the inverse transforms unscaled (i.e.
exactly opposite to the default option `norm=backward`).
(gh-16476)
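A sketch of the `forward` convention (assuming NumPy >= 1.20):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# norm="forward" scales the direct transform by 1/n, so the
# zero-frequency term is simply the mean of the input (2.5 here).
f = np.fft.fft(x, norm="forward")

# Using the same norm on the inverse round-trips exactly.
back = np.fft.ifft(f, norm="forward")
```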
NumPy is now typed
Type annotations have been added for large parts of NumPy. There is
also a new `numpy.typing` module that contains useful types for
end-users. The currently available types are:
- `ArrayLike`: for objects that can be coerced to an array
- `DtypeLike`: for objects that can be coerced to a dtype
(gh-16515)
`numpy.typing` is accessible at runtime
The types in `numpy.typing` can now be imported at runtime, so code
such as `from numpy.typing import ArrayLike` will now work.
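For instance, annotating a small helper (`as_array` is a hypothetical function used only for illustration):

```python
import numpy as np
from numpy.typing import ArrayLike


def as_array(data: ArrayLike) -> np.ndarray:
    """Coerce any array-like input to an ndarray."""
    return np.asarray(data)


result = as_array([1, 2, 3])
```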
(gh-16558)
New `__f2py_numpy_version__` attribute for f2py generated modules
Because f2py is released together with NumPy, `__f2py_numpy_version__`
provides a way to track the version of f2py used to generate the
module.
(gh-16594)
`mypy` tests can be run via runtests.py
Currently, running mypy with the NumPy stubs configured requires either
pointing mypy at the stubs manually or editing `mypy.ini`. Both options
are somewhat inconvenient, so a `--mypy` option has been added to
runtests that handles setting things up for you. This will also be
useful in the future for any typing codegen since it will ensure the
project is built before type checking.
(gh-17123)
Negation of user-defined BLAS/LAPACK detection order
`numpy.distutils` allows negation of libraries when determining
BLAS/LAPACK libraries. This may be used to remove an item from the
library resolution phase, e.g. to disallow NetLIB libraries, in which
case any of the accelerated libraries will be used instead.
(gh-17219)
Allow passing optimizations arguments to asv build
It is now possible to pass `-j`, `--cpu-baseline`, `--cpu-dispatch` and
`--disable-optimization` flags to the ASV build when the
`--bench-compare` argument is used.
(gh-17284)
The NVIDIA HPC SDK nvfortran compiler is now supported
Support for the nvfortran compiler, a version of pgfortran, has been
added.
(gh-17344)
`dtype` option for `cov` and `corrcoef`
The `dtype` option is now available for `numpy.cov` and
`numpy.corrcoef`. It specifies which data-type the returned result
should have. By default the functions still return a `numpy.float64`
result.
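For example (assuming NumPy >= 1.20):

```python
import numpy as np

x = np.array([[0.0, 1.0, 2.0],
              [2.0, 1.0, 0.0]])

# Request a float32 result instead of the default float64.
c = np.cov(x, dtype=np.float32)
```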
(gh-17456)
Improvements
Improved string representation for polynomials (`__str__`)
The string representation (`__str__`) of all six polynomial types in
`numpy.polynomial` has been updated to give the polynomial as a
mathematical expression instead of an array of coefficients. Two
package-wide formats for the polynomial expressions are available: one
using Unicode characters for superscripts and subscripts, and another
using only ASCII characters.
(gh-15666)
Remove the Accelerate library as a candidate LAPACK library
Apple no longer supports Accelerate. Remove it.
(gh-15759)
Object arrays containing multi-line objects have a more readable `repr`
If elements of an object array have a `repr` containing new lines, then
the wrapped lines will be aligned by column. Notably, this improves the
`repr` of nested arrays.
(gh-15997)
Concatenate supports providing an output dtype
Support was added to `numpy.concatenate` to provide an output `dtype`
and `casting` using keyword arguments. The `dtype` argument cannot be
provided in conjunction with the `out` one.
(gh-16134)
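For example (assuming NumPy >= 1.20):

```python
import numpy as np

a = np.ones(2)
b = np.zeros(2)

# Request the output dtype directly instead of casting afterwards;
# float64 -> float32 is a "same kind" cast, the default for `casting`.
out = np.concatenate([a, b], dtype=np.float32)
```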
Thread safe f2py callback functions
Callback functions in f2py are now thread safe.
(gh-16519)
`numpy.core.records.fromfile` now supports file-like objects
`numpy.rec.fromfile` can now use file-like objects, for instance
`io.BytesIO`.
(gh-16675)
RPATH support on AIX added to distutils
This allows SciPy to be built on AIX.
(gh-16710)
Use f90 compiler specified by the command line args
The compiler command selection for the Fortran Portland Group Compiler
is changed in `numpy.distutils.fcompiler`. This only affects the
linking command. It forces the use of the executable provided by the
command line option (if provided) instead of the pgfortran executable.
If no executable is provided to the command line option, it defaults to
the pgf90 executable, which is an alias for pgfortran according to the
PGI documentation.
(gh-16730)
Add NumPy declarations for Cython 3.0 and later
The pxd declarations for Cython 3.0 were improved to avoid using
deprecated NumPy C-API features. Extension modules built with Cython
3.0+ that use NumPy can now set the C macro
`NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION` to avoid C compiler
warnings about deprecated API usage.
(gh-16986)
Make the window functions exactly symmetric
Make sure the window functions provided by NumPy are symmetric. There
were previously small deviations from symmetry due to numerical
precision that are now avoided by better arrangement of the computation.
(gh-17195)
Performance improvements and changes
Enable multi-platform SIMD compiler optimizations
A series of improvements for NumPy infrastructure to pave the way to
NEP-38, that can be summarized as follows:
- New build arguments
  - `--cpu-baseline` to specify the minimal set of required
    optimizations. The default value is `min`, which provides the
    minimum CPU features that can safely run on a wide range of user
    platforms.
  - `--cpu-dispatch` to specify the dispatched set of additional
    optimizations. The default value is `max -xop -fma4`, which
    enables all CPU features except for AMD legacy features.
  - `--disable-optimization` to explicitly disable the whole set of
    new improvements. It also adds a new C #definition called
    `NPY_DISABLE_OPTIMIZATION`, which can be used as a guard for any
    SIMD code.
- Advanced CPU dispatcher
  A flexible cross-architecture CPU dispatcher built on top of
  Python/NumPy distutils; it supports all common compilers with a wide
  range of CPU features.
  The new dispatcher requires a special file extension `*.dispatch.c`
  to mark the dispatch-able C sources. These sources have the ability
  to be compiled multiple times, so that each compilation process
  represents certain CPU features and provides different #definitions
  and flags that affect the code paths.
- New auto-generated C header `core/src/common/_cpu_dispatch.h`
  This header is generated by the distutils module `ccompiler_opt` and
  contains all the #definitions and headers of instruction sets that
  have been configured through the command arguments `--cpu-baseline`
  and `--cpu-dispatch`.
- New C header `core/src/common/npy_cpu_dispatch.h`
  This header contains all the utilities required for the whole CPU
  dispatching process; it can also be considered a bridge linking the
  new infrastructure work with NumPy CPU runtime detection.
- New attributes in the NumPy umath module (Python level)
  - `__cpu_baseline__`: a list containing the minimal set of required
    optimizations that are supported by the compiler and platform,
    according to the specified value of the command argument
    `--cpu-baseline`.
  - `__cpu_dispatch__`: a list containing the dispatched set of
    additional optimizations that are supported by the compiler and
    platform, according to the specified value of the command argument
    `--cpu-dispatch`.
- The supported CPU features are printed during the run of
  PytestTester.
(gh-13516)
Changes
Changed behavior of `divmod(1., 0.)` and related functions
The changes also assure that different compiler versions have the same
behavior for nan or inf usages in these operations. This was previously
compiler-dependent; we now force the invalid and divide-by-zero flags,
making the results the same across compilers. For example, gcc-5,
gcc-8, and gcc-9 now result in the same behavior.
(gh-16161)
`np.linspace` on integers now uses floor
When using an `int` dtype in `numpy.linspace`, previously float values
would be rounded towards zero. Now `numpy.floor` is used instead, which
rounds toward `-inf`. This changes the results for negative values. The
former result can still be obtained with `np.trunc`.
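A sketch of the changed rounding (assuming NumPy >= 1.20):

```python
import numpy as np

# With floor, negative intermediate values round toward -inf:
new = np.linspace(-3, 1, 8, dtype=int)
# -> [-3, -3, -2, -2, -1, -1, 0, 1]

# The former truncation-toward-zero result can be recovered with:
old = np.trunc(np.linspace(-3, 1, 8)).astype(int)
# -> [-3, -2, -1, -1, 0, 0, 0, 1]
```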
(gh-16841)
Renovate configuration
📅 Schedule: At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻️ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR has been generated by WhiteSource Renovate. View repository job log here.