
MAINT, DOC: discard repeated words
DimitriPapadopoulos committed Jan 13, 2022
1 parent 813a0c1 commit 58dbe26
Showing 35 changed files with 43 additions and 44 deletions.
2 changes: 1 addition & 1 deletion doc/source/dev/gitwash/development_setup.rst
@@ -112,7 +112,7 @@ Look it over
- the ``main`` branch you just cloned on your own machine
- the ``main`` branch from your fork on GitHub, which git named
``origin`` by default
- - the ``main`` branch on the the main NumPy repo, which you named
+ - the ``main`` branch on the main NumPy repo, which you named
``upstream``.

::
2 changes: 1 addition & 1 deletion doc/source/reference/c-api/iterator.rst
@@ -653,7 +653,7 @@ Construction and Destruction
may not be repeated. The following example is how normal broadcasting
applies to a 3-D array, a 2-D array, a 1-D array and a scalar.
- **Note**: Before NumPy 1.8 ``oa_ndim == 0` was used for signalling that
+ **Note**: Before NumPy 1.8 ``oa_ndim == 0` was used for signalling
that ``op_axes`` and ``itershape`` are unused. This is deprecated and
should be replaced with -1. Better backward compatibility may be
achieved by using :c:func:`NpyIter_MultiNew` for this case.
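
A minimal Python-level sketch of the broadcasting this passage describes,
using ``np.nditer`` (the Python mirror of the C iterator; shapes arbitrary)::

    import numpy as np

    a = np.arange(24).reshape(2, 3, 4)   # 3-D array
    b = np.arange(12).reshape(3, 4)      # 2-D array
    c = np.arange(4)                     # 1-D array
    it = np.nditer([a, b, c, 1.5])       # the scalar broadcasts too
    print(it.shape)                      # (2, 3, 4) -- the broadcast shape
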
2 changes: 1 addition & 1 deletion doc/source/reference/c-api/ufunc.rst
@@ -171,7 +171,7 @@ Functions
`numpy.dtype.num` (built-in only) that the corresponding
function in the ``func`` array accepts. For instance, for a comparison
ufunc with three ``ntypes``, two ``nin`` and one ``nout``, where the
- first function accepts `numpy.int32` and the the second
+ first function accepts `numpy.int32` and the second
`numpy.int64`, with both returning `numpy.bool_`, ``types`` would
be ``(char[]) {5, 5, 0, 7, 7, 0}`` since ``NPY_INT32`` is 5,
``NPY_INT64`` is 7, and ``NPY_BOOL`` is 0.
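
The registered type table can also be inspected from Python; a quick check
against a real comparison ufunc (the exact signature list varies by NumPy
version)::

    import numpy as np

    print(np.equal.nin, np.equal.nout)   # 2 1
    # Each signature maps input dtype characters to the output character;
    # '?' is NPY_BOOL, so the integer loops look like 'll->?'.
    print(np.equal.types[:5])
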
4 changes: 2 additions & 2 deletions doc/source/reference/random/parallel.rst
@@ -28,8 +28,8 @@ streams.

`~SeedSequence` avoids these problems by using successions of integer hashes
with good `avalanche properties`_ to ensure that flipping any bit in the input
- input has about a 50% chance of flipping any bit in the output. Two input seeds
- that are very close to each other will produce initial states that are very far
+ has about a 50% chance of flipping any bit in the output. Two input seeds that
+ are very close to each other will produce initial states that are very far
from each other (with very high probability). It is also constructed in such
a way that you can provide arbitrary-sized integers or lists of integers.
`~SeedSequence` will take all of the bits that you provide and mix them
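
A small demonstration of the avalanche property described above, assuming
NumPy >= 1.17 for ``SeedSequence``::

    from numpy.random import SeedSequence, default_rng

    # Two adjacent integer seeds still yield unrelated initial states.
    print(SeedSequence(42).generate_state(4))
    print(SeedSequence(43).generate_state(4))   # very different uint32 words

    # Spawning gives independent children for parallel streams.
    rngs = [default_rng(child) for child in SeedSequence(42).spawn(3)]
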
2 changes: 1 addition & 1 deletion doc/source/reference/swig.interface-file.rst
@@ -904,7 +904,7 @@ Routines

* ``PyArrayObject* ary``, a NumPy array.

- Require the given ``PyArrayObject`` to to be Fortran ordered. If
+ Require the given ``PyArrayObject`` to be Fortran ordered. If
the ``PyArrayObject`` is already Fortran ordered, do nothing.
Else, set the Fortran ordering flag and recompute the strides.

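
What "Fortran ordered" means here, seen from Python (a convenience check, not
part of the SWIG machinery)::

    import numpy as np

    a = np.arange(6).reshape(2, 3)   # C ordered by default
    f = np.asfortranarray(a)         # column-major equivalent
    print(a.flags['F_CONTIGUOUS'], f.flags['F_CONTIGUOUS'])  # False True
    print(a.strides, f.strides)      # e.g. (24, 8) vs (8, 16) with 8-byte ints
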
2 changes: 1 addition & 1 deletion doc/source/user/basics.subclassing.rst
@@ -523,7 +523,7 @@ which inputs and outputs it converted. Hence, e.g.,
>>> a.info
{'inputs': [0, 1], 'outputs': [0]}

- Note that another approach would be to to use ``getattr(ufunc,
+ Note that another approach would be to use ``getattr(ufunc,
methods)(*inputs, **kwargs)`` instead of the ``super`` call. For this example,
the result would be identical, but there is a difference if another operand
also defines ``__array_ufunc__``. E.g., lets assume that we evalulate
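
A minimal sketch of the ``getattr`` variant discussed here (the subclass name
is hypothetical)::

    import numpy as np

    class MyArray(np.ndarray):
        def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
            # Demote MyArray operands to plain ndarrays, then dispatch via
            # getattr instead of super(); the difference only shows up when
            # another operand defines __array_ufunc__ as well.
            inputs = tuple(np.asarray(x) if isinstance(x, MyArray) else x
                           for x in inputs)
            return getattr(ufunc, method)(*inputs, **kwargs)

    a = np.arange(3.0).view(MyArray)
    print(np.add(a, 1))   # [1. 2. 3.]
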
2 changes: 1 addition & 1 deletion doc/source/user/building.rst
@@ -341,7 +341,7 @@ intended host and not the build system, set::

where ``${ARCH_TRIPLET}`` is an architecture-dependent suffix appropriate for
the host architecture. (This should be the name of a ``_sysconfigdata`` file,
- without the ``.py`` extension, found in in the host Python library directory.)
+ without the ``.py`` extension, found in the host Python library directory.)

When using external linear algebra libraries, include and library directories
should be provided for the desired libraries in ``site.cfg`` as described
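
One way to read the ``_sysconfigdata`` suffix off the host interpreter itself;
note that ``_get_sysconfigdata_name`` is a private CPython helper, so this is
only a convenience check::

    import sysconfig

    # e.g. '_sysconfigdata__linux_x86_64-linux-gnu' on a glibc Linux host
    print(sysconfig._get_sysconfigdata_name())
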
2 changes: 1 addition & 1 deletion doc/source/user/c-info.how-to-extend.rst
@@ -111,7 +111,7 @@ Defining functions
==================
The second argument passed in to the Py_InitModule function is a
- structure that makes it easy to to define functions in the module. In
+ structure that makes it easy to define functions in the module. In
the example given above, the mymethods structure would have been
defined earlier in the file (usually right before the init{name}
subroutine) to:
2 changes: 1 addition & 1 deletion numpy/core/_add_newdocs.py
@@ -5253,7 +5253,7 @@
dtype : data-type code, optional
The data-type used to represent the intermediate results. Defaults
to the data-type of the output array if such is provided, or the
- the data-type of the input array if no output array is provided.
+ data-type of the input array if no output array is provided.
out : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If not provided or None,
a freshly-allocated array is returned. For consistency with
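
The default described in this docstring is easy to observe with a reduction;
a small sketch::

    import numpy as np

    a = np.arange(5)
    out = np.empty((), dtype=np.float32)
    np.add.reduce(a, out=out)                # intermediates follow out's dtype
    r = np.add.reduce(a, dtype=np.float64)   # or request a dtype explicitly
    print(out, r.dtype)                      # 10.0 float64
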
2 changes: 1 addition & 1 deletion numpy/core/numeric.py
@@ -136,7 +136,7 @@ def zeros_like(a, dtype=None, order='K', subok=True, shape=None):
"""
res = empty_like(a, dtype=dtype, order=order, subok=subok, shape=shape)
- # needed instead of a 0 to get same result as zeros for for string dtypes
+ # needed instead of a 0 to get same result as zeros for string dtypes
z = zeros(1, dtype=res.dtype)
multiarray.copyto(res, z, casting='unsafe')
return res
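
The corner case behind that comment: for string dtypes the zero element is the
empty string, not ``'0'``::

    import numpy as np

    a = np.array(['ab', 'cde'])
    print(np.zeros_like(a))             # ['' ''], matching np.zeros
    print(np.zeros(2, dtype=a.dtype))   # ['' ''], not ['0' '0']
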
4 changes: 2 additions & 2 deletions numpy/core/src/multiarray/array_coercion.c
@@ -67,8 +67,8 @@
*
* The code here avoid multiple conversion of array-like objects (including
* sequences). These objects are cached after conversion, which will require
- * additional memory, but can drastically speed up coercion from from array
- * like objects.
+ * additional memory, but can drastically speed up coercion from array like
+ * objects.
*/


4 changes: 2 additions & 2 deletions numpy/core/src/multiarray/common.c
@@ -108,8 +108,8 @@ PyArray_DTypeFromObjectStringDiscovery(

/*
* This function is now identical to the new PyArray_DiscoverDTypeAndShape
- * but only returns the the dtype. It should in most cases be slowly phased
- * out. (Which may need some refactoring to PyArray_FromAny to make it simpler)
+ * but only returns the dtype. It should in most cases be slowly phased out.
+ * (Which may need some refactoring to PyArray_FromAny to make it simpler)
*/
NPY_NO_EXPORT int
PyArray_DTypeFromObject(PyObject *obj, int maxdims, PyArray_Descr **out_dtype)
2 changes: 1 addition & 1 deletion numpy/core/src/multiarray/convert_datatype.c
@@ -3656,7 +3656,7 @@ PyArray_GetObjectToGenericCastingImpl(void)



- /* Any object object is simple (could even use the default) */
+ /* Any object is simple (could even use the default) */
static NPY_CASTING
any_to_object_resolve_descriptors(
PyArrayMethodObject *NPY_UNUSED(self),
4 changes: 2 additions & 2 deletions numpy/core/src/multiarray/ctors.c
@@ -1637,8 +1637,8 @@ PyArray_FromAny(PyObject *op, PyArray_Descr *newtype, int min_depth,
* Thus, we check if there is an array included, in that case we
* give a FutureWarning.
* When the warning is removed, PyArray_Pack will have to ensure
- * that that it does not append the dimensions when creating the
- * subarrays to assign `arr[0] = obj[0]`.
+ * that it does not append the dimensions when creating the subarrays
+ * to assign `arr[0] = obj[0]`.
*/
int includes_array = 0;
if (cache != NULL) {
4 changes: 2 additions & 2 deletions numpy/core/src/multiarray/dtype_transfer.c
@@ -3393,8 +3393,8 @@ wrap_aligned_transferfunction(
* For casts between two dtypes with the same type (within DType casts)
* it also wraps the `copyswapn` function.
*
- * This function is called called from `ArrayMethod.get_loop()` when a
- * specialized cast function is missing.
+ * This function is called from `ArrayMethod.get_loop()` when a specialized
+ * cast function is missing.
*
* In general, the legacy cast functions do not support unaligned access,
* so an ArrayMethod using this must signal that. In a few places we do
2 changes: 1 addition & 1 deletion numpy/core/src/multiarray/nditer_constr.c
@@ -992,7 +992,7 @@ npyiter_check_per_op_flags(npy_uint32 op_flags, npyiter_opitflags *op_itflags)
}

/*
- * Prepares a a constructor operand. Assumes a reference to 'op'
+ * Prepares a constructor operand. Assumes a reference to 'op'
* is owned, and that 'op' may be replaced. Fills in 'op_dataptr',
* 'op_dtype', and may modify 'op_itflags'.
*
2 changes: 1 addition & 1 deletion numpy/core/src/npymath/npy_math_complex.c.src
@@ -1696,7 +1696,7 @@ npy_catanh@c@(@ctype@ z)
if (ax < SQRT_3_EPSILON / 2 && ay < SQRT_3_EPSILON / 2) {
/*
* z = 0 was filtered out above. All other cases must raise
- * inexact, but this is the only only that needs to do it
+ * inexact, but this is the only one that needs to do it
* explicitly.
*/
raise_inexact();
2 changes: 1 addition & 1 deletion numpy/core/src/umath/dispatching.c
@@ -78,7 +78,7 @@ NPY_NO_EXPORT int
PyUFunc_AddLoop(PyUFuncObject *ufunc, PyObject *info, int ignore_duplicate)
{
/*
- * Validate the info object, this should likely move to to a different
+ * Validate the info object, this should likely move to a different
* entry-point in the future (and is mostly unnecessary currently).
*/
if (!PyTuple_CheckExact(info) || PyTuple_GET_SIZE(info) != 2) {
2 changes: 1 addition & 1 deletion numpy/core/src/umath/ufunc_type_resolution.c
@@ -416,7 +416,7 @@ PyUFunc_SimpleBinaryComparisonTypeResolver(PyUFuncObject *ufunc,
}
}
else {
- /* Usually a failure, but let the the default version handle it */
+ /* Usually a failure, but let the default version handle it */
return PyUFunc_DefaultTypeResolver(ufunc, casting,
operands, type_tup, out_dtypes);
}
2 changes: 1 addition & 1 deletion numpy/core/tests/test_indexing.py
@@ -1332,7 +1332,7 @@ def test_boolean_indexing_fast_path(self):


class TestArrayToIndexDeprecation:
"""Creating an an index from array not 0-D is an error.
"""Creating an index from array not 0-D is an error.
"""
def test_array_to_index_error(self):
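
A short illustration of the rule in that docstring, assuming a NumPy version
where the deprecation has already become an error::

    import operator
    import numpy as np

    print(operator.index(np.array(5)))   # 5 -- a 0-D integer array is fine
    try:
        operator.index(np.array([5]))    # not 0-D
    except TypeError as exc:
        print(exc)
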
2 changes: 1 addition & 1 deletion numpy/doc/ufuncs.py
@@ -75,7 +75,7 @@
>>> np.add.reduce(np.arange(10).reshape(2,5),axis=1)
array([10, 35])
- **.accumulate(arr)** applies the binary operator and generates an an
+ **.accumulate(arr)** applies the binary operator and generates an
equivalently shaped array that includes the accumulated amount for each
element of the array. A couple examples: ::
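
For instance, the accumulated sums and products follow directly from the
definition::

    import numpy as np

    print(np.add.accumulate(np.arange(5)))          # [ 0  1  3  6 10]
    print(np.multiply.accumulate(np.arange(1, 5)))  # [ 1  2  6 24]
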
4 changes: 2 additions & 2 deletions numpy/lib/function_base.py
@@ -3551,8 +3551,8 @@ def sinc(x):
Parameters
----------
x : ndarray
- Array (possibly multi-dimensional) of values for which to to
- calculate ``sinc(x)``.
+ Array (possibly multi-dimensional) of values for which to calculate
+ ``sinc(x)``.
Returns
-------
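
A quick check of the normalized sinc, ``sin(pi*x)/(pi*x)`` with
``sinc(0) == 1``::

    import numpy as np

    x = np.array([0.0, 0.5, 1.0, 2.0])
    print(np.sinc(x))   # 1.0, 2/pi ~ 0.637, then ~0 at the nonzero integers
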
4 changes: 2 additions & 2 deletions numpy/lib/histograms.py
@@ -506,8 +506,8 @@ def histogram_bin_edges(a, bins=10, range=None, weights=None):
with non-normal datasets.
'scott'
- Less robust estimator that that takes into account data
- variability and data size.
+ Less robust estimator that takes into account data variability
+ and data size.
'stone'
Estimator based on leave-one-out cross-validation estimate of
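
How an estimator is selected in practice; a minimal sketch::

    import numpy as np

    data = np.random.default_rng(0).normal(size=1000)
    for estimator in ('scott', 'stone'):
        edges = np.histogram_bin_edges(data, bins=estimator)
        print(estimator, len(edges) - 1)   # number of bins chosen
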
5 changes: 2 additions & 3 deletions numpy/lib/nanfunctions.py
@@ -188,9 +188,8 @@ def _divide_by_count(a, b, out=None):
"""
Compute a/b ignoring invalid results. If `a` is an array the division
is done in place. If `a` is a scalar, then its type is preserved in the
- output. If out is None, then then a is used instead so that the
- division is in place. Note that this is only called with `a` an inexact
- type.
+ output. If out is None, then a is used instead so that the division
+ is in place. Note that this is only called with `a` an inexact type.
Parameters
----------
2 changes: 1 addition & 1 deletion numpy/polynomial/chebyshev.py
@@ -1119,7 +1119,7 @@ def chebval(x, c, tensor=True):
If `x` is a list or tuple, it is converted to an ndarray, otherwise
it is left unchanged and treated as a scalar. In either case, `x`
or its elements must support addition and multiplication with
- with themselves and with the elements of `c`.
+ themselves and with the elements of `c`.
c : array_like
Array of coefficients ordered so that the coefficients for terms of
degree n are contained in c[n]. If `c` is multidimensional the
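
A worked instance of this convention, where ``c[n]`` multiplies ``T_n(x)``::

    from numpy.polynomial import chebyshev

    # 1*T0(0.5) + 2*T1(0.5) + 3*T2(0.5) = 1 + 1 - 1.5 = 0.5
    print(chebyshev.chebval(0.5, [1, 2, 3]))   # 0.5
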
2 changes: 1 addition & 1 deletion numpy/polynomial/hermite.py
@@ -827,7 +827,7 @@ def hermval(x, c, tensor=True):
If `x` is a list or tuple, it is converted to an ndarray, otherwise
it is left unchanged and treated as a scalar. In either case, `x`
or its elements must support addition and multiplication with
- with themselves and with the elements of `c`.
+ themselves and with the elements of `c`.
c : array_like
Array of coefficients ordered so that the coefficients for terms of
degree n are contained in c[n]. If `c` is multidimensional the
2 changes: 1 addition & 1 deletion numpy/polynomial/laguerre.py
@@ -826,7 +826,7 @@ def lagval(x, c, tensor=True):
If `x` is a list or tuple, it is converted to an ndarray, otherwise
it is left unchanged and treated as a scalar. In either case, `x`
or its elements must support addition and multiplication with
- with themselves and with the elements of `c`.
+ themselves and with the elements of `c`.
c : array_like
Array of coefficients ordered so that the coefficients for terms of
degree n are contained in c[n]. If `c` is multidimensional the
2 changes: 1 addition & 1 deletion numpy/polynomial/legendre.py
@@ -857,7 +857,7 @@ def legval(x, c, tensor=True):
If `x` is a list or tuple, it is converted to an ndarray, otherwise
it is left unchanged and treated as a scalar. In either case, `x`
or its elements must support addition and multiplication with
- with themselves and with the elements of `c`.
+ themselves and with the elements of `c`.
c : array_like
Array of coefficients ordered so that the coefficients for terms of
degree n are contained in c[n]. If `c` is multidimensional the
2 changes: 1 addition & 1 deletion numpy/random/_common.pyx
@@ -65,7 +65,7 @@ cdef object random_raw(bitgen_t *bitgen, object lock, object size, object output

Notes
-----
- This method directly exposes the the raw underlying pseudo-random
+ This method directly exposes the raw underlying pseudo-random
number generator. All values are returned as unsigned 64-bit
values irrespective of the number of bits produced by the PRNG.

2 changes: 1 addition & 1 deletion numpy/random/_examples/cython/extending.pyx
@@ -31,7 +31,7 @@ def uniform_mean(Py_ssize_t n):
random_values = np.empty(n)
# Best practice is to acquire the lock whenever generating random values.
# This prevents other threads from modifying the state. Acquiring the lock
- # is only necessary if if the GIL is also released, as in this example.
+ # is only necessary if the GIL is also released, as in this example.
with x.lock, nogil:
for i in range(n):
random_values[i] = rng.next_double(rng.state)
2 changes: 1 addition & 1 deletion numpy/random/bit_generator.pyx
@@ -576,7 +576,7 @@ cdef class BitGenerator():
Notes
-----
- This method directly exposes the the raw underlying pseudo-random
+ This method directly exposes the raw underlying pseudo-random
number generator. All values are returned as unsigned 64-bit
values irrespective of the number of bits produced by the PRNG.
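
The method in question, called from Python (``PCG64`` is just one concrete bit
generator)::

    from numpy.random import PCG64

    bg = PCG64(12345)
    print(bg.random_raw(3))   # three raw uint64 words from the PRNG
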
2 changes: 1 addition & 1 deletion numpy/random/tests/test_generator_mt19937.py
@@ -2563,7 +2563,7 @@ def test_three_arg_funcs(self):
def test_jumped(config):
# Each config contains the initial seed, a number of raw steps
# the sha256 hashes of the initial and the final states' keys and
- # the position of of the initial and the final state.
+ # the position of the initial and the final state.
# These were produced using the original C implementation.
seed = config["seed"]
steps = config["steps"]
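
The operation under test, at the user level: ``jumped`` returns a bit
generator whose state has been advanced as if a huge number of draws
(2**128 for MT19937) had been made::

    from numpy.random import MT19937

    bg = MT19937(1234)
    bg2 = bg.jumped()          # independent stream, far ahead in the cycle
    print(type(bg2).__name__)  # MT19937
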
2 changes: 1 addition & 1 deletion numpy/testing/_private/parameterized.py
@@ -1,5 +1,5 @@
"""
- tl;dr: all code code is licensed under simplified BSD, unless stated otherwise.
+ tl;dr: all code is licensed under simplified BSD, unless stated otherwise.
Unless stated otherwise in the source files, all code is copyright 2010 David
Wolever <david@wolever.net>. All rights reserved.
2 changes: 1 addition & 1 deletion tools/swig/README
@@ -15,7 +15,7 @@ system used here, can be found in the NumPy reference guide.
Testing
-------
The tests are a good example of what we are trying to do with numpy.i.
- The files related to testing are are in the test subdirectory::
+ The files related to testing are in the test subdirectory::

Vector.h
Vector.cxx
2 changes: 1 addition & 1 deletion tools/swig/numpy.i
@@ -524,7 +524,7 @@
return success;
}

- /* Require the given PyArrayObject to to be Fortran ordered. If the
+ /* Require the given PyArrayObject to be Fortran ordered. If the
* the PyArrayObject is already Fortran ordered, do nothing. Else,
* set the Fortran ordering flag and recompute the strides.
*/
