Commit c23af89

Merge pull request #24315 from F3eQnxN3RriK/doc-patch-2
DOC: Fix some links in documents
charris committed Aug 3, 2023
2 parents f0b2fca + 8e652f6 commit c23af89
Showing 7 changed files with 20 additions and 18 deletions.
4 changes: 2 additions & 2 deletions doc/source/reference/arrays.scalars.rst
@@ -364,7 +364,7 @@ are also provided.
.. attribute:: intp

Alias for the signed integer type (one of `numpy.byte`, `numpy.short`,
-`numpy.intc`, `numpy.int_` and `np.longlong`) that is the same size as a
+`numpy.intc`, `numpy.int_` and `numpy.longlong`) that is the same size as a
pointer.

Compatible with the C ``intptr_t``.
@@ -374,7 +374,7 @@ are also provided.
.. attribute:: uintp

Alias for the unsigned integer type (one of `numpy.ubyte`, `numpy.ushort`,
-`numpy.uintc`, `numpy.uint` and `np.ulonglong`) that is the same size as a
+`numpy.uintc`, `numpy.uint` and `numpy.ulonglong`) that is the same size as a
pointer.

Compatible with the C ``uintptr_t``.
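For context on the two aliases touched above, a quick doctest-style sketch (not part of the diff) of what `intp`/`uintp` guarantee; `ctypes` is only used here for comparison:

>>> import numpy as np
>>> import ctypes
>>> np.dtype(np.intp).itemsize == ctypes.sizeof(ctypes.c_void_p)  # pointer-sized
True
>>> np.dtype(np.uintp).itemsize == np.dtype(np.intp).itemsize
True
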
2 changes: 1 addition & 1 deletion doc/source/user/basics.rec.rst
@@ -576,7 +576,7 @@ So the following is also valid (note the ``'f4'`` dtype for the ``'a'`` field):
array([True, False])

To compare two structured arrays, it must be possible to promote them to a
-common dtype as returned by `numpy.result_type` and `np.promote_types`.
+common dtype as returned by `numpy.result_type` and `numpy.promote_types`.
This enforces that the number of fields, the field names, and the field titles
must match precisely.
When promotion is not possible, for example due to mismatching field names,
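As an illustration of the promotion requirement described in this hunk (not part of the diff; assumes a NumPy new enough to support structured-dtype promotion, roughly 1.23+):

>>> import numpy as np
>>> a = np.array([(1, 2.0), (3, 4.0)], dtype=[('x', 'i4'), ('y', 'f4')])
>>> b = np.array([(1, 2.0), (3, 5.0)], dtype=[('x', 'i8'), ('y', 'f8')])
>>> np.result_type(a.dtype, b.dtype)  # field names and order match, so promotion works
dtype([('x', '<i8'), ('y', '<f8')])
>>> a == b
array([ True, False])
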
10 changes: 5 additions & 5 deletions doc/source/user/whatisnumpy.rst
@@ -12,14 +12,14 @@ mathematical, logical, shape manipulation, sorting, selecting, I/O,
discrete Fourier transforms, basic linear algebra, basic statistical
operations, random simulation and much more.

-At the core of the NumPy package, is the `ndarray` object. This
+At the core of the NumPy package, is the `~numpy.ndarray` object. This
encapsulates *n*-dimensional arrays of homogeneous data types, with
many operations being performed in compiled code for performance.
There are several important differences between NumPy arrays and the
standard Python sequences:

- NumPy arrays have a fixed size at creation, unlike Python lists
-(which can grow dynamically). Changing the size of an `ndarray` will
+(which can grow dynamically). Changing the size of an `~numpy.ndarray` will
create a new array and delete the original.

- The elements in a NumPy array are all required to be of the same
@@ -79,7 +79,7 @@ array, for example, the C code (abridged as before) expands to
}

NumPy gives us the best of both worlds: element-by-element operations
are the "default mode" when an `ndarray` is involved, but the
are the "default mode" when an `~numpy.ndarray` is involved, but the
element-by-element operation is speedily executed by pre-compiled C
code. In NumPy

@@ -131,9 +131,9 @@ Who Else Uses NumPy?
--------------------

NumPy fully supports an object-oriented approach, starting, once
-again, with `ndarray`. For example, `ndarray` is a class, possessing
+again, with `~numpy.ndarray`. For example, `~numpy.ndarray` is a class, possessing
numerous methods and attributes. Many of its methods are mirrored by
functions in the outer-most NumPy namespace, allowing the programmer
to code in whichever paradigm they prefer. This flexibility has allowed the
-NumPy array dialect and NumPy `ndarray` class to become the *de-facto* language
+NumPy array dialect and NumPy `~numpy.ndarray` class to become the *de-facto* language
of multi-dimensional data interchange used in Python.
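A small sketch (not part of the diff) of the two points the excerpt makes, mirrored methods/functions and element-wise operations as the default:

>>> import numpy as np
>>> a = np.arange(6).reshape(2, 3)
>>> a.sum() == np.sum(a)  # ndarray method mirrored by a function in the numpy namespace
True
>>> a * 2 + 1             # element-by-element operations are the "default mode"
array([[ 1,  3,  5],
       [ 7,  9, 11]])
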
12 changes: 7 additions & 5 deletions numpy/core/_add_newdocs.py
@@ -390,7 +390,8 @@
`WRITEBACKIFCOPY` flag. In this case `nditer` must be used as a
context manager or the `nditer.close` method must be called before
using the result. The temporary data will be written back to the
-original data when the `__exit__` function is called but not before:
+original data when the :meth:`~object.__exit__` function is called
+but not before:
>>> a = np.arange(6, dtype='i4')[::-2]
>>> with np.nditer(a, [],
@@ -4486,7 +4487,8 @@
add_newdoc('numpy.core.multiarray', 'ndarray', ('tostring', r"""
a.tostring(order='C')
-A compatibility alias for `tobytes`, with exactly the same behavior.
+A compatibility alias for `~ndarray.tobytes`, with exactly the same
+behavior.
Despite its name, it returns `bytes` not `str`\ s.
@@ -5619,7 +5621,7 @@
>>> int32 = np.dtype("int32")
>>> float32 = np.dtype("float32")
-The typical ufunc call does not pass an output dtype. `np.add` has two
+The typical ufunc call does not pass an output dtype. `numpy.add` has two
inputs and one output, so leave the output as ``None`` (not provided):
>>> np.add.resolve_dtypes((int32, float32, None))
@@ -5909,7 +5911,7 @@
`__array_interface__` attribute.
Warning: This attribute exists specifically for `__array_interface__`,
-and passing it directly to `np.dtype` will not accurately reconstruct
+and passing it directly to `numpy.dtype` will not accurately reconstruct
some dtypes (e.g., scalar and subarray dtypes).
Examples
@@ -6934,7 +6936,7 @@ def refer_to_array_attribute(attr, method=True):
add_newdoc('numpy.core.numerictypes', 'flexible',
"""
Abstract base class of all scalar types without predefined length.
-The actual size of these types depends on the specific `np.dtype`
+The actual size of these types depends on the specific `numpy.dtype`
instantiation.
""")
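Continuing the `resolve_dtypes` setup quoted in one of the hunks above, a hedged sketch of the call that docstring documents (not part of the diff; `ufunc.resolve_dtypes` requires NumPy 1.24+):

>>> import numpy as np
>>> int32 = np.dtype("int32")
>>> float32 = np.dtype("float32")
>>> np.add.resolve_dtypes((int32, float32, None))  # output left as None, so it is promoted
(dtype('float64'), dtype('float64'), dtype('float64'))
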
4 changes: 2 additions & 2 deletions numpy/lib/arrayterator.py
@@ -139,11 +139,11 @@ def flat(self):
A 1-D flat iterator for Arrayterator objects.
This iterator returns elements of the array to be iterated over in
-`Arrayterator` one by one. It is similar to `flatiter`.
+`~lib.Arrayterator` one by one. It is similar to `flatiter`.
See Also
--------
-Arrayterator
+lib.Arrayterator
flatiter
Examples
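A brief sketch (not part of the diff) of the `flat` property this docstring describes, assuming the public `numpy.lib.Arrayterator` entry point:

>>> import numpy as np
>>> from numpy.lib import Arrayterator
>>> a = np.arange(3 * 4 * 5).reshape(3, 4, 5)
>>> it = Arrayterator(a, buf_size=10)     # read at most 10 elements per block
>>> [int(x) for x in list(it.flat)[:5]]   # .flat yields the elements one by one
[0, 1, 2, 3, 4]
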
4 changes: 2 additions & 2 deletions numpy/lib/recfunctions.py
@@ -1183,7 +1183,7 @@ def apply_along_fields(func, arr):
"""
Apply function 'func' as a reduction across fields of a structured array.
-This is similar to `apply_along_axis`, but treats the fields of a
+This is similar to `numpy.apply_along_axis`, but treats the fields of a
structured array as an extra axis. The fields are all first cast to a
common type following the type-promotion rules from `numpy.result_type`
applied to the field's dtypes.
@@ -1192,7 +1192,7 @@ def apply_along_fields(func, arr):
----------
func : function
Function to apply on the "field" dimension. This function must
-support an `axis` argument, like np.mean, np.sum, etc.
+support an `axis` argument, like `numpy.mean`, `numpy.sum`, etc.
arr : ndarray
Structured array for which to apply func.
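For reference, a sketch of the call this docstring documents, adapted from the function's own example (not part of the diff):

>>> import numpy as np
>>> from numpy.lib import recfunctions as rfn
>>> b = np.array([(1, 2, 5), (4, 5, 7), (7, 8, 11), (10, 11, 12)],
...              dtype=[('x', 'i4'), ('y', 'f4'), ('z', 'f8')])
>>> rfn.apply_along_fields(np.mean, b)  # the fields act as an extra axis to reduce over
array([ 2.66666667,  5.33333333,  8.66666667, 11.        ])
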
2 changes: 1 addition & 1 deletion numpy/typing/__init__.py
@@ -126,7 +126,7 @@
typing (see :pep:`646`) it is unfortunately not possible to make the
necessary distinction between 0D and >0D arrays. While thus not strictly
correct, all operations are that can potentially perform a 0D-array -> scalar
-cast are currently annotated as exclusively returning an `ndarray`.
+cast are currently annotated as exclusively returning an `~numpy.ndarray`.
If it is known in advance that an operation _will_ perform a
0D-array -> scalar cast, then one can consider manually remedying the
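A small sketch (not part of the diff) of the 0D-array -> scalar cast the excerpt refers to:

>>> import numpy as np
>>> out = np.float64(1.0) * np.array(2.0)  # 0-D operand: the runtime result is a scalar
>>> type(out)                              # ...but the stubs annotate it as an ndarray
<class 'numpy.float64'>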
