diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index d203f8bdf..c9a5fedd8 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -53,26 +53,6 @@ jobs:
if: always()
with:
files: ./**/coverage*.xml
- docs:
- runs-on: ubuntu-latest
- steps:
- - name: Checkout Repo
- uses: actions/checkout@v4
- - name: Set up Python
- uses: actions/setup-python@v5
- with:
- python-version: '3.10'
- cache: 'pip'
- - name: Install package
- run: |
- pip install -e '.[docs]'
- - name: Run tests
- run: |
- sphinx-build -W -b html docs/ _build/html
- - uses: actions/upload-artifact@v4
- with:
- name: Documentation
- path: _build/html
benchmarks:
runs-on: ubuntu-latest
steps:
diff --git a/.gitignore b/.gitignore
index bf3a829c1..4fa4affec 100644
--- a/.gitignore
+++ b/.gitignore
@@ -60,9 +60,8 @@ junit/
# Django stuff:
*.log
-# Sphinx documentation
-docs/_build/
-_build/
+# mkdocs documentation
+site/
# PyBuilder
target/
diff --git a/.readthedocs.yml b/.readthedocs.yml
index 5ff021e4c..2271c47cf 100644
--- a/.readthedocs.yml
+++ b/.readthedocs.yml
@@ -1,13 +1,20 @@
+# Read the Docs configuration file for MkDocs projects
+# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
+
+# Required
version: 2
+# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
- python: "3.10"
+ python: "3.12"
-sphinx:
- configuration: docs/conf.py
+mkdocs:
+ configuration: mkdocs.yml
+ fail_on_warning: false
+# Optionally declare the Python requirements required to build your docs
python:
install:
- method: pip
diff --git a/docs/assets/images/logo.png b/docs/assets/images/logo.png
new file mode 100644
index 000000000..828740fc3
Binary files /dev/null and b/docs/assets/images/logo.png differ
diff --git a/docs/assets/images/logo.svg b/docs/assets/images/logo.svg
new file mode 100644
index 000000000..933164098
--- /dev/null
+++ b/docs/assets/images/logo.svg
@@ -0,0 +1 @@
+
diff --git a/docs/changelog.rst b/docs/changelog.md
similarity index 100%
rename from docs/changelog.rst
rename to docs/changelog.md
diff --git a/docs/conduct.rst b/docs/conduct.md
similarity index 82%
rename from docs/conduct.rst
rename to docs/conduct.md
index 32dd1f5da..37104b578 100644
--- a/docs/conduct.rst
+++ b/docs/conduct.md
@@ -1,8 +1,6 @@
-Contributor Covenant Code of Conduct
-====================================
+# Code of Conduct
-Our Pledge
-----------
+## Our Pledge
We as members, contributors, and leaders pledge to make participation in
our community a harassment-free experience for everyone, regardless of
@@ -14,8 +12,7 @@ race, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open,
welcoming, diverse, inclusive, and healthy community.
-Our Standards
--------------
+## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
@@ -40,8 +37,7 @@ Examples of unacceptable behavior include:
- Other conduct which could reasonably be considered inappropriate in a
professional setting
-Enforcement Responsibilities
-----------------------------
+## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our
standards of acceptable behavior and will take appropriate and fair
@@ -53,8 +49,7 @@ reject comments, commits, code, wiki edits, issues, and other
contributions that are not aligned to this Code of Conduct, and will
communicate reasons for moderation decisions when appropriate.
-Scope
------
+## Scope
This Code of Conduct applies within all community spaces, and also
applies when an individual is officially representing the community in
@@ -62,26 +57,24 @@ public spaces. Examples of representing our community include using an
official e-mail address, posting via an official social media account,
or acting as an appointed representative at an online or offline event.
-Enforcement
------------
+## Enforcement
+
Instances of abusive, harassing, or otherwise unacceptable behavior may
be reported to the community leaders responsible for enforcement at
-`hameerabbasi@yahoo.com `_. All complaints will be reviewed and
+[hameerabbasi@yahoo.com](mailto:hameerabbasi@yahoo.com). All complaints will be reviewed and
investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security
of the reporter of any incident.
-Enforcement Guidelines
-----------------------
+## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in
determining the consequences for any action they deem in violation of
this Code of Conduct:
-1. Correction
-~~~~~~~~~~~~~
+### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior
deemed unprofessional or unwelcome in the community.
@@ -91,8 +84,7 @@ providing clarity around the nature of the violation and an explanation
of why the behavior was inappropriate. A public apology may be
requested.
-2. Warning
-~~~~~~~~~~
+### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
@@ -104,8 +96,7 @@ time. This includes avoiding interactions in community spaces as well as
external channels like social media. Violating these terms may lead to a
temporary or permanent ban.
-3. Temporary Ban
-~~~~~~~~~~~~~~~~
+### 3. Temporary Ban
**Community Impact**: A serious violation of community standards,
including sustained inappropriate behavior.
@@ -117,8 +108,7 @@ unsolicited interaction with those enforcing the Code of Conduct, is
allowed during this period. Violating these terms may lead to a
permanent ban.
-4. Permanent Ban
-~~~~~~~~~~~~~~~~
+### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
@@ -128,17 +118,17 @@ individuals.
**Consequence**: A permanent ban from any sort of public interaction
within the community.
-Attribution
------------
+## Attribution
+
-This Code of Conduct is adapted from the `Contributor
-Covenant `__, version 2.0,
+This Code of Conduct is adapted from the [Contributor
+Covenant](https://www.contributor-covenant.org), version 2.0,
available at
-https://www.contributor-covenant.org/version/2/0/code\_of\_conduct.html.
+[https://www.contributor-covenant.org/version/2/0/code_of_conduct.html](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html).
-Community Impact Guidelines were inspired by `Mozilla's code of conduct
-enforcement ladder <:ghuser:`mozilla/diversity`>`__.
+Community Impact Guidelines were inspired by [Mozilla's code of conduct
+enforcement ladder](https://github.com/mozilla/inclusion).
For answers to common questions about this code of conduct, see the FAQ
-at https://www.contributor-covenant.org/faq. Translations are available
-at https://www.contributor-covenant.org/translations.
+at [https://www.contributor-covenant.org/faq](https://www.contributor-covenant.org/faq). Translations are available
+at [https://www.contributor-covenant.org/translations](https://www.contributor-covenant.org/translations).
diff --git a/docs/conf.py b/docs/conf.py
deleted file mode 100644
index 7725d6b09..000000000
--- a/docs/conf.py
+++ /dev/null
@@ -1,192 +0,0 @@
-#!/usr/bin/env python3
-#
-# sparse documentation build configuration file, created by
-# sphinx-quickstart on Fri Dec 29 20:58:03 2017.
-#
-# This file is execfile()d with the current directory set to its
-# containing dir.
-#
-# Note that not all possible configuration values are present in this
-# autogenerated file.
-#
-# All configuration values have a default; values that are commented out
-# serve to show the default.
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-import os
-import sys
-
-sys.path.insert(0, os.path.abspath(".."))
-from sparse import __version__ # noqa: E402
-
-# -- General configuration ------------------------------------------------
-
-# If your documentation needs a minimal Sphinx version, state it here.
-#
-# needs_sphinx = '1.0'
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-
-extensions = [
- "sphinx.ext.autodoc",
- "sphinx.ext.doctest",
- "sphinx.ext.intersphinx",
- "sphinx.ext.coverage",
- "sphinx.ext.mathjax",
- "sphinx.ext.napoleon",
- "sphinx.ext.viewcode",
- "sphinx.ext.autosummary",
- "sphinx.ext.inheritance_diagram",
- "sphinx.ext.extlinks",
-]
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ["_templates"]
-
-mathjax_path = "https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"
-
-# The suffix(es) of source filenames.
-# You can specify multiple suffix as a list of string:
-#
-# source_suffix = ['.rst', '.md']
-source_suffix = ".rst"
-
-# The main toctree document.
-root_doc = "index"
-
-# General information about the project.
-project = "sparse"
-copyright = "2018, Sparse developers"
-author = "Sparse Developers"
-
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-#
-# The short X.Y version.
-version = __version__
-# The full version, including alpha/beta/rc tags.
-release = __version__
-
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-#
-# This is also used if you do content translation via gettext catalogs.
-# Usually you set "language" from the command line for these cases.
-language = "en"
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-# This patterns also effect to html_static_path and html_extra_path
-exclude_patterns = ["_build", "**tests**", "**setup**", "**extern**", "**data**"]
-
-# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = "sphinx"
-
-# If true, `todo` and `todoList` produce output, else they produce nothing.
-todo_include_todos = False
-
-autosummary_generate = True
-autosummary_generate_overwrite = False
-
-# -- Options for HTML output ----------------------------------------------
-
-# The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
-#
-html_theme = "sphinx_rtd_theme"
-html_logo = "logo.svg"
-html_favicon = "logo.png"
-
-# Theme options are theme-specific and customize the look and feel of a theme
-# further. For a list of options available for each theme, see the
-# documentation.
-#
-# html_theme_options = {}
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-# html_static_path = ['_static']
-
-# Custom sidebar templates, must be a dictionary that maps document names
-# to template names.
-#
-# This is required for the alabaster theme
-# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
-# html_sidebars = {
-# '**': [
-# 'relations.html', # needs 'show_related': True theme option to display
-# 'searchbox.html',
-# ]
-# }
-
-# -- Options for HTMLHelp output ------------------------------------------
-
-# Output file base name for HTML help builder.
-htmlhelp_basename = "sparsedoc"
-
-# -- Options for LaTeX output ---------------------------------------------
-
-latex_elements = {
- # The paper size ('letterpaper' or 'a4paper').
- #
- # 'papersize': 'letterpaper',
- # The font size ('10pt', '11pt' or '12pt').
- #
- # 'pointsize': '10pt',
- # Additional stuff for the LaTeX preamble.
- #
- # 'preamble': '',
- # Latex figure (float) alignment
- #
- # 'figure_align': 'htbp',
-}
-
-# Grouping the document tree into LaTeX files. List of tuples
-# (source start file, target name, title,
-# author, documentclass [howto, manual, or own class]).
-latex_documents = [(root_doc, "sparse.tex", "sparse Documentation", "Sparse Developers", "manual")]
-
-# -- Options for manual page output ---------------------------------------
-
-# One entry per manual page. List of tuples
-# (source start file, name, description, authors, manual section).
-man_pages = [(root_doc, "sparse", "sparse Documentation", [author], 1)]
-
-# -- Options for Texinfo output -------------------------------------------
-
-# Grouping the document tree into Texinfo files. List of tuples
-# (source start file, target name, title, author,
-# dir menu entry, description, category)
-texinfo_documents = [
- (
- root_doc,
- "sparse",
- "sparse Documentation",
- author,
- "sparse",
- "One line description of project.",
- "Miscellaneous",
- )
-]
-
-# Example configuration for intersphinx: refer to the Python standard library.
-intersphinx_mapping = {
- "python": ("https://docs.python.org/3", None),
- "numpy": ("https://docs.scipy.org/doc/numpy", None),
- "scipy": ("https://docs.scipy.org/doc/scipy", None),
-}
-
-extlinks = {
- "issue": ("https://github.com/pydata/sparse/issues/%s", "Issue #%s"),
- "pr": ("https://github.com/pydata/sparse/pull/%s", "PR #%s"),
- "ghuser": ("https://github.com/%s", "@%s"),
- "commit": ("https://github.com/pydata/sparse/commit/%s", "%s"),
- "compare": ("https://github.com/pydata/sparse/commit/%s", "%s"),
-}
diff --git a/docs/construct.md b/docs/construct.md
new file mode 100644
index 000000000..e171e605a
--- /dev/null
+++ b/docs/construct.md
@@ -0,0 +1,237 @@
+# Construct Sparse Arrays
+
+## From coordinates and data
+
+You can construct [`sparse.COO`][] arrays from coordinates and value data.
+
+The `coords` parameter contains the indices where the data is nonzero,
+and the `data` parameter contains the data corresponding to those indices.
+For example, the following code will generate a $5 \times 5$ diagonal
+matrix:
+
+```python
+
+>>> import sparse
+
+>>> coords = [[0, 1, 2, 3, 4],
+... [0, 1, 2, 3, 4]]
+>>> data = [10, 20, 30, 40, 50]
+>>> s = sparse.COO(coords, data, shape=(5, 5))
+>>> s
+
+ 0 1 2 3 4
+ ┌ ┐
+0 │ 10 │
+1 │ 20 │
+2 │ 30 │
+3 │ 40 │
+4 │ 50 │
+ └ ┘
+```
+
+In general `coords` should be a `(ndim, nnz)` shaped
+array. Each row of `coords` contains one dimension of the
+desired sparse array, and each column contains the index
+corresponding to that nonzero element. `data` contains
+the nonzero elements of the array corresponding to the indices
+in `coords`. Its shape should be `(nnz,)`.
+
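+As a quick sketch of this layout, using the diagonal example above with
+explicit NumPy arrays (the contents are the same as before, only the shapes
+are being inspected):
+
+```python
+
+>>> import numpy as np
+>>> coords = np.array([[0, 1, 2, 3, 4],
+...                    [0, 1, 2, 3, 4]])
+>>> data = np.array([10, 20, 30, 40, 50])
+>>> coords.shape  # (ndim, nnz)
+(2, 5)
+>>> data.shape  # (nnz,)
+(5,)
+```
+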
+If `data` is the same across all the coordinates, it can be passed
+in as a scalar. For example, the following produces the $4 \times 4$
+identity matrix:
+
+```python
+
+>>> import sparse
+
+>>> coords = [[0, 1, 2, 3],
+... [0, 1, 2, 3]]
+>>> data = 1
+>>> s = sparse.COO(coords, data, shape=(4, 4))
+>>> s
+
+ 0 1 2 3
+ ┌ ┐
+0 │ 1 │
+1 │ 1 │
+2 │ 1 │
+3 │ 1 │
+ └ ┘
+```
+
+You can, and should, pass in [`numpy.ndarray`][] objects for
+`coords` and `data`.
+
+In this case, the shape of the resulting array was determined from
+the maximum index in each dimension. If the array extends beyond
+the maximum index in `coords`, you should supply a shape
+explicitly. For example, if we did the following without the
+`shape` keyword argument, it would result in a
+$4 \times 5$ matrix, but maybe we wanted one that was actually
+$5 \times 5$.
+
+```python
+
+>>> coords = [[0, 3, 2, 1], [4, 1, 2, 0]]
+>>> data = [1, 4, 2, 1]
+>>> s = sparse.COO(coords, data, shape=(5, 5))
+>>> s
+
+ 0 1 2 3 4
+ ┌ ┐
+0 │ 1 │
+1 │ 1 │
+2 │ 2 │
+3 │ 4 │
+4 │ │
+ └ ┘
+```
+
+[`sparse.COO`][] arrays support arbitrary fill values. Fill values are the "default"
+value, or value to not store. This can be given a value other than zero. For
+example, the following builds a (bad) representation of a $2 \times 2$
+identity matrix. Note that not all operations are supported for operations
+with nonzero fill values.
+
+```python
+
+>>> coords = [[0, 1], [1, 0]]
+>>> data = [0, 0]
+>>> s = sparse.COO(coords, data, fill_value=1)
+>>> s
+
+ 0 1
+ ┌ ┐
+0 │ 0 │
+1 │ 0 │
+ └ ┘
+```
+
+## From [`scipy.sparse.spmatrix`][]
+
+To construct [`sparse.COO`][] array from [spmatrix][scipy.sparse.spmatrix]
+objects, you can use the [`sparse.COO.from_scipy_sparse`][] method. As an
+example, if `x` is a [scipy.sparse.spmatrix][], you can
+do the following to get an equivalent [`sparse.COO`][] array:
+
+```python
+
+s = COO.from_scipy_sparse(x)
+```
+
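+For instance, a minimal self-contained sketch (the matrix built with
+`scipy.sparse.random` here is only illustrative):
+
+```python
+
+import scipy.sparse
+import sparse
+
+# Build a small random scipy COO matrix and convert it.
+x = scipy.sparse.random(5, 5, density=0.2, format="coo")
+s = sparse.COO.from_scipy_sparse(x)
+
+assert s.shape == (5, 5)
+```
+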
+## From [Numpy arrays][numpy.ndarray]
+
+To construct [`sparse.COO`][] arrays from [`numpy.ndarray`][]
+objects, you can use the [`sparse.COO.from_numpy`][] method. As an
+example, if `x` is a [`numpy.ndarray`][], you can
+do the following to get an equivalent [`sparse.COO`][] array:
+
+```python
+
+s = COO.from_numpy(x)
+```
+
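+For instance, a minimal sketch (the dense identity matrix here is only
+illustrative):
+
+```python
+
+import numpy as np
+import sparse
+
+x = np.eye(4)  # dense array with four nonzero entries
+s = sparse.COO.from_numpy(x)
+
+assert s.nnz == 4
+```
+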
+## Generating random [`sparse.COO`][] objects
+
+The [`sparse.random`][] method can be used to create random
+[`sparse.COO`][] arrays. For example, the following will generate
+a $10 \times 10$ matrix with $10$ nonzero entries,
+each in the interval $[0, 1)$.
+
+```python
+
+s = sparse.random((10, 10), density=0.1)
+```
+
+## Building [`sparse.COO`][] Arrays from [`sparse.DOK`][] Arrays
+
+It's possible to build [`sparse.COO`][] arrays from [`sparse.DOK`][] arrays, when it is not
+easy to construct the `coords` and `data` directly. [`sparse.DOK`][]
+arrays provide a simple builder interface to build [`sparse.COO`][] arrays, but at
+this time, they can do little else.
+
+You can get started by defining the shape (and optionally, datatype) of the
+[`sparse.DOK`][] array. If you do not specify a dtype, it is inferred from the value
+dictionary or is set to `dtype('float64')` if that is not present.
+
+```python
+
+s = DOK((6, 5, 2))
+s2 = DOK((2, 3, 4), dtype=np.uint8)
+```
+
+After this, you can build the array by assigning arrays or scalars to elements
+or slices of the original array. Broadcasting rules are followed.
+
+```python
+
+s[1:3, 3:1:-1] = [[6, 5]]
+```
+
+DOK arrays also support fancy indexing assignment if and only if all dimensions are indexed.
+
+```python
+
+s[[0, 2], [2, 1], [0, 1]] = 5
+s[[0, 3], [0, 4], [0, 1]] = [1, 5]
+```
+
+Alongside indexing assignment and retrieval, [`sparse.DOK`][] arrays support any arbitrary broadcasting function
+to any number of arguments where the arguments can be [`sparse.SparseArray`][] objects, [`scipy.sparse.spmatrix`][]
+objects, or [`numpy.ndarray`][].
+
+```python
+
+x = sparse.random((10, 10), 0.5, format="dok")
+y = sparse.random((10, 10), 0.5, format="dok")
+sparse.elemwise(np.add, x, y)
+```
+
+[`sparse.DOK`][] arrays also support standard ufuncs and operators, including comparison operators,
+in combination with other objects implementing the `numpy` `ndarray.__array_ufunc__` method. For example,
+the following code will perform elementwise equality comparison on the two arrays
+and return a new boolean [`sparse.DOK`][] array.
+
+```python
+
+x = sparse.random((10, 10), 0.5, format="dok")
+y = np.random.random((10, 10))
+x == y
+```
+
+[`sparse.DOK`][] arrays are returned from elemwise functions and standard ufuncs if and only if all
+[`sparse.SparseArray`][] objects are [`sparse.DOK`][] arrays. Otherwise, a [`sparse.COO`][] array or dense array is returned.
+
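+As a sketch of this rule (using [`sparse.random`][] to build the operands):
+
+```python
+
+import numpy as np
+import sparse
+
+d = sparse.random((10, 10), 0.5, format="dok")
+c = sparse.random((10, 10), 0.5, format="coo")
+
+# All sparse inputs are DOK, so the result is DOK.
+assert isinstance(sparse.elemwise(np.add, d, d), sparse.DOK)
+# Mixed DOK/COO inputs fall back to a COO result.
+assert isinstance(sparse.elemwise(np.add, d, c), sparse.COO)
+```
+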
+Finally, you can convert the [`sparse.DOK`][] array to a [`sparse.COO`][] array.
+
+```python
+
+s3 = COO(s)
+```
+
+In addition, it is possible to access single elements and slices of the [`sparse.DOK`][] array
+using normal Numpy indexing, as well as fancy indexing if and only if all dimensions are indexed.
+Slicing and fancy indexing will always return a new DOK array.
+
+```python
+
+s[1, 2, 1] # 5
+s[5, 1, 1] # 0
+s[[0, 3], [0, 4], [0, 1]] #
+```
+
+## Converting [`sparse.COO`][] objects to other Formats
+
+[`sparse.COO`][] arrays can be converted to [Numpy arrays][numpy.ndarray],
+or to some [spmatrix][scipy.sparse.spmatrix] subclasses via the following
+methods (a short usage sketch follows the list):
+
+* [`sparse.COO.todense`][]: Converts to a [`numpy.ndarray`][] unconditionally.
+* [`sparse.COO.maybe_densify`][]: Converts to a [`numpy.ndarray`][] based on
+ certain constraints.
+* [`sparse.COO.to_scipy_sparse`][]: Converts to a [`scipy.sparse.coo_matrix`][] if
+ the array is two dimensional.
+* [`sparse.COO.tocsr`][]: Converts to a [`scipy.sparse.csr_matrix`][] if
+ the array is two dimensional.
+* [`sparse.COO.tocsc`][]: Converts to a [`scipy.sparse.csc_matrix`][] if
+ the array is two dimensional.
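+
+As a short usage sketch of these methods (the $3 \times 3$ identity array
+here is only illustrative; `scipy` must be installed):
+
+```python
+
+import numpy as np
+import sparse
+
+s = sparse.COO.from_numpy(np.eye(3))
+
+d = s.todense()          # numpy.ndarray
+m = s.to_scipy_sparse()  # scipy.sparse.coo_matrix, since s is two dimensional
+csr = s.tocsr()          # scipy.sparse.csr_matrix
+csc = s.tocsc()          # scipy.sparse.csc_matrix
+```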
diff --git a/docs/construct.rst b/docs/construct.rst
deleted file mode 100644
index d12f50cbe..000000000
--- a/docs/construct.rst
+++ /dev/null
@@ -1,228 +0,0 @@
-.. currentmodule:: sparse
-
-Construct Sparse Arrays
-=======================
-
-From coordinates and data
--------------------------
-You can construct :obj:`COO` arrays from coordinates and value data.
-
-The :code:`coords` parameter contains the indices where the data is nonzero,
-and the :code:`data` parameter contains the data corresponding to those indices.
-For example, the following code will generate a :math:`5 \times 5` diagonal
-matrix:
-
-.. code-block:: python
-
- >>> import sparse
-
- >>> coords = [[0, 1, 2, 3, 4],
- ... [0, 1, 2, 3, 4]]
- >>> data = [10, 20, 30, 40, 50]
- >>> s = sparse.COO(coords, data, shape=(5, 5))
- >>> s
-
- 0 1 2 3 4
- ┌ ┐
- 0 │ 10 │
- 1 │ 20 │
- 2 │ 30 │
- 3 │ 40 │
- 4 │ 50 │
- └ ┘
-
-In general :code:`coords` should be a :code:`(ndim, nnz)` shaped
-array. Each row of :code:`coords` contains one dimension of the
-desired sparse array, and each column contains the index
-corresponding to that nonzero element. :code:`data` contains
-the nonzero elements of the array corresponding to the indices
-in :code:`coords`. Its shape should be :code:`(nnz,)`.
-
-If ``data`` is the same across all the coordinates, it can be passed
-in as a scalar. For example, the following produces the :math:`4 \times 4`
-identity matrix:
-
-.. code-block:: python
-
- >>> import sparse
-
- >>> coords = [[0, 1, 2, 3],
- ... [0, 1, 2, 3]]
- >>> data = 1
- >>> s = sparse.COO(coords, data, shape=(4, 4))
- >>> s
-
- 0 1 2 3
- ┌ ┐
- 0 │ 1 │
- 1 │ 1 │
- 2 │ 1 │
- 3 │ 1 │
- └ ┘
-
-You can, and should, pass in :obj:`numpy.ndarray` objects for
-:code:`coords` and :code:`data`.
-
-In this case, the shape of the resulting array was determined from
-the maximum index in each dimension. If the array extends beyond
-the maximum index in :code:`coords`, you should supply a shape
-explicitly. For example, if we did the following without the
-:code:`shape` keyword argument, it would result in a
-:math:`4 \times 5` matrix, but maybe we wanted one that was actually
-:math:`5 \times 5`.
-
-.. code-block:: python
-
- >>> coords = [[0, 3, 2, 1], [4, 1, 2, 0]]
- >>> data = [1, 4, 2, 1]
- >>> s = COO(coords, data, shape=(5, 5))
- >>> s
-
- 0 1 2 3 4
- ┌ ┐
- 0 │ 1 │
- 1 │ 1 │
- 2 │ 2 │
- 3 │ 4 │
- 4 │ │
- └ ┘
-
-:obj:`COO` arrays support arbitrary fill values. Fill values are the "default"
-value, or value to not store. This can be given a value other than zero. For
-example, the following builds a (bad) representation of a :math:`2 \times 2`
-identity matrix. Note that not all operations are supported for operations
-with nonzero fill values.
-
-.. code-block:: python
-
- >>> coords = [[0, 1], [1, 0]]
- >>> data = [0, 0]
- >>> s = COO(coords, data, fill_value=1)
- >>> s
-
- 0 1
- ┌ ┐
- 0 │ 0 │
- 1 │ 0 │
- └ ┘
-
-From :std:doc:`Scipy sparse matrices `
----------------------------------------------------------------------------------------
-To construct :obj:`COO` array from :obj:`spmatrix `
-objects, you can use the :obj:`COO.from_scipy_sparse` method. As an
-example, if :code:`x` is a :obj:`scipy.sparse.spmatrix`, you can
-do the following to get an equivalent :obj:`COO` array:
-
-.. code-block:: python
-
- s = COO.from_scipy_sparse(x)
-
-From :doc:`Numpy arrays `
-------------------------------------------------------------------
-To construct :obj:`COO` arrays from :obj:`numpy.ndarray`
-objects, you can use the :obj:`COO.from_numpy` method. As an
-example, if :code:`x` is a :obj:`numpy.ndarray`, you can
-do the following to get an equivalent :obj:`COO` array:
-
-.. code-block:: python
-
- s = COO.from_numpy(x)
-
-Generating random :obj:`COO` objects
-------------------------------------
-The :obj:`sparse.random` method can be used to create random
-:obj:`COO` arrays. For example, the following will generate
-a :math:`10 \times 10` matrix with :math:`10` nonzero entries,
-each in the interval :math:`[0, 1)`.
-
-.. code-block:: python
-
- s = sparse.random((10, 10), density=0.1)
-
-Building :obj:`COO` Arrays from :obj:`DOK` Arrays
--------------------------------------------------
-It's possible to build :obj:`COO` arrays from :obj:`DOK` arrays, if it is not
-easy to construct the :code:`coords` and :obj:`data` in a simple way. :obj:`DOK`
-arrays provide a simple builder interface to build :obj:`COO` arrays, but at
-this time, they can do little else.
-
-You can get started by defining the shape (and optionally, datatype) of the
-:obj:`DOK` array. If you do not specify a dtype, it is inferred from the value
-dictionary or is set to :code:`dtype('float64')` if that is not present.
-
-.. code-block:: python
-
- s = DOK((6, 5, 2))
- s2 = DOK((2, 3, 4), dtype=np.uint8)
-
-After this, you can build the array by assigning arrays or scalars to elements
-or slices of the original array. Broadcasting rules are followed.
-
-.. code-block:: python
-
- s[1:3, 3:1:-1] = [[6, 5]]
-
-DOK arrays also support fancy indexing assignment if and only if all dimensions are indexed.
-
-.. code-block:: python
-
- s[[0, 2], [2, 1], [0, 1]] = 5
- s[[0, 3], [0, 4], [0, 1]] = [1, 5]
-
-Alongside indexing assignment and retrieval, :obj:`DOK` arrays support any arbitrary broadcasting function
-to any number of arguments where the arguments can be :obj:`SparseArray` objects, :obj:`scipy.sparse.spmatrix`
-objects, or :obj:`numpy.ndarrays`.
-
-.. code-block:: python
-
- x = sparse.random((10, 10), 0.5, format="dok")
- y = sparse.random((10, 10), 0.5, format="dok")
- sparse.elemwise(np.add, x, y)
-
-:obj:`DOK` arrays also support standard ufuncs and operators, including comparison operators,
-in combination with other objects implementing the `numpy` `ndarray.__array_ufunc__` method. For example,
-the following code will perform elementwise equality comparison on the two arrays
-and return a new boolean :obj:`DOK` array.
-
-.. code-block:: python
-
- x = sparse.random((10, 10), 0.5, format="dok")
- y = np.random.random((10, 10))
- x == y
-
-:obj:`DOK` arrays are returned from elemwise functions and standard ufuncs if and only if all
-:obj:`SparseArray` objects are obj:`DOK` arrays. Otherwise, a :obj:`COO` array or dense array are returned.
-
-At the end, you can convert the :obj:`DOK` array to a :obj:`COO` arrays.
-
-.. code-block:: python
-
- s3 = COO(s)
-
-In addition, it is possible to access single elements and slices of the :obj:`DOK` array
-using normal Numpy indexing, as well as fancy indexing if and only if all dimensions are indexed.
-Slicing and fancy indexing will always return a new DOK array.
-
-.. code-block:: python
-
- s[1, 2, 1] # 5
- s[5, 1, 1] # 0
- s[[0, 3], [0, 4], [0, 1]] #
-
-.. _converting:
-
-Converting :obj:`COO` objects to other Formats
-----------------------------------------------
-:obj:`COO` arrays can be converted to :doc:`Numpy arrays `,
-or to some :obj:`spmatrix ` subclasses via the following
-methods:
-
-* :obj:`COO.todense`: Converts to a :obj:`numpy.ndarray` unconditionally.
-* :obj:`COO.maybe_densify`: Converts to a :obj:`numpy.ndarray` based on
- certain constraints.
-* :obj:`COO.to_scipy_sparse`: Converts to a :obj:`scipy.sparse.coo_matrix` if
- the array is two dimensional.
-* :obj:`COO.tocsr`: Converts to a :obj:`scipy.sparse.csr_matrix` if
- the array is two dimensional.
-* :obj:`COO.tocsc`: Converts to a :obj:`scipy.sparse.csc_matrix` if
- the array is two dimensional.
diff --git a/docs/contributing.md b/docs/contributing.md
new file mode 100644
index 000000000..1f6061617
--- /dev/null
+++ b/docs/contributing.md
@@ -0,0 +1,108 @@
+# Contributing
+
+## General Guidelines
+
+sparse is a community-driven project on GitHub. You can find our
+[repository on GitHub](https://github.com/pydata/sparse). Feel
+free to open issues for new features or bugs, or open a pull request
+to fix a bug or add a new feature.
+
+If you haven't contributed to open-source before, we recommend you read
+[this excellent guide by GitHub on how to contribute to open source](https://opensource.guide/how-to-contribute). The guide is long,
+so you can gloss over things you're familiar with.
+
+If you're not already familiar with it, we follow the [fork and pull model](https://help.github.com/articles/about-collaborative-development-models)
+on GitHub.
+
+## Filing Issues
+
+If you find a bug or would like a new feature, you might want to consider
+[filing a new issue on GitHub](https://github.com/pydata/sparse/issues). Before
+you open a new issue, please make sure of the following:
+
+* This should go without saying, but make sure what you are requesting is within
+ the scope of this project.
+* The bug/feature is still present/missing on the `main` branch on GitHub.
+* A similar issue or pull request isn't already open. If one already is, it's better
+ to contribute to the discussion there.
+
+## Contributing Code
+
+This project has a number of requirements for all code contributed.
+
+* We use `pre-commit` to automatically lint the code and maintain code style.
+* We use Numpy-style docstrings.
+* It's ideal if user-facing API changes or new features have documentation added.
+* 100% code coverage is recommended for all new code in any submitted PR. Doctests
+ count toward coverage.
+* Performance optimizations should have benchmarks added in `benchmarks`.
+
+## Setting up Your Development Environment
+
+The following bash script is all you need to set up your development environment,
+after forking and cloning the repository:
+
+```bash
+
+pip install -e .[all]
+```
+
+## Running/Adding Unit Tests
+
+It is best if all new functionality and/or bug fixes have unit tests added
+with each use-case.
+
+We use [pytest](https://docs.pytest.org/en/latest) as our unit testing framework,
+with the `pytest-cov` extension to check code coverage and `pytest-flake8` to
+check code style. You don't need to configure these extensions yourself. Once you've
+configured your environment, you can just `cd` to the root of your repository and run
+
+```bash
+
+pytest --pyargs sparse
+```
+
+This automatically checks code style and functionality, and prints code coverage,
+even though it doesn't fail on low coverage.
+
+Unit tests are automatically run on Travis CI for pull requests.
+
+## Coverage
+
+The `pytest` script automatically reports coverage, both on the terminal for
+missing line numbers, and in annotated HTML form in `htmlcov/index.html`.
+
+Coverage is automatically checked on CodeCov for pull requests.
+
+## Adding/Building the Documentation
+
+If a feature is stable and relatively finalized, it is time to add it to the
+documentation. If you are adding any private/public functions, it is best to
+add docstrings, to aid in reviewing code and also for the API reference.
+
+We use [Numpy style docstrings](https://numpydoc.readthedocs.io/en/latest/format.html)
+and [Material for MkDocs](https://squidfunk.github.io/mkdocs-material) to document this library.
+MkDocs, in turn, uses [Markdown](https://www.markdownguide.org)
+as its markup language for adding code.
+
+We use [mkdocstrings](https://mkdocstrings.github.io/recipes) with the
+[mkdocs-gen-files plugin](https://oprypin.github.io/mkdocs-gen-files)
+to generate API references.
+
+To build the documentation, and to preview it on a local server, you can run
+
+```bash
+
+mkdocs build
+mkdocs serve
+```
+
+After this, you can see a version of the documentation on your local server.
+
+Documentation for pull requests is automatically built on CircleCI and can be found in the build
+artifacts.
+
+## Adding and Running Benchmarks
+
+We use [Airspeed Velocity](https://asv.readthedocs.io/en/latest) to run benchmarks. We have it set
+up to use `conda`, but you can edit the configuration locally if you so wish.
diff --git a/docs/contributing.rst b/docs/contributing.rst
deleted file mode 100644
index 25567aca0..000000000
--- a/docs/contributing.rst
+++ /dev/null
@@ -1,119 +0,0 @@
-Contributing
-============
-
-General Guidelines
-------------------
-
-sparse is a community-driven project on GitHub. You can find our
-`repository on GitHub <:ghuser:`pydata/sparse`>`_. Feel
-free to open issues for new features or bugs, or open a pull request
-to fix a bug or add a new feature.
-
-If you haven't contributed to open-source before, we recommend you read
-`this excellent guide by GitHub on how to contribute to open source
-`_. The guide is long,
-so you can gloss over things you're familiar with.
-
-If you're not already familiar with it, we follow the `fork and pull model
-`_
-on GitHub.
-
-Filing Issues
--------------
-
-If you find a bug or would like a new feature, you might want to `consider
-filing a new issue on GitHub <:ghuser:`pydata/sparse/issues`>`_. Before
-you open a new issue, please make sure of the following:
-
-* This should go without saying, but make sure what you are requesting is within
- the scope of this project.
-* The bug/feature is still present/missing on the ``main`` branch on GitHub.
-* A similar issue or pull request isn't already open. If one already is, it's better
- to contribute to the discussion there.
-
-Contributing Code
------------------
-
-This project has a number of requirements for all code contributed.
-
-* We use ``pre-commit`` to automatically lint the code and maintain code style.
-* We use Numpy-style docstrings.
-* It's ideal if user-facing API changes or new features have documentation added.
-* 100% code coverage is recommended for all new code in any submitted PR. Doctests
- count toward coverage.
-* Performance optimizations should have benchmarks added in ``benchmarks``.
-
-Setting up Your Development Environment
----------------------------------------
-
-The following bash script is all you need to set up your development environment,
-after forking and cloning the repository:
-
-.. code-block:: bash
-
- pip install -e .[all]
-
-
-Running/Adding Unit Tests
--------------------------
-
-It is best if all new functionality and/or bug fixes have unit tests added
-with each use-case.
-
-We use `pytest `_ as our unit testing framework,
-with the ``pytest-cov`` extension to check code coverage and ``pytest-flake8`` to
-check code style. You don't need to configure these extensions yourself. Once you've
-configured your environment, you can just ``cd`` to the root of your repository and run
-
-.. code-block:: bash
-
- pytest --pyargs sparse
-
-This automatically checks code style and functionality, and prints code coverage,
-even though it doesn't fail on low coverage.
-
-Unit tests are automatically run on Travis CI for pull requests.
-
-Coverage
---------
-
-The ``pytest`` script automatically reports coverage, both on the terminal for
-missing line numbers, and in annotated HTML form in ``htmlcov/index.html``.
-
-Coverage is automatically checked on CodeCov for pull requests.
-
-Adding/Building the Documentation
----------------------------------
-
-If a feature is stable and relatively finalized, it is time to add it to the
-documentation. If you are adding any private/public functions, it is best to
-add docstrings, to aid in reviewing code and also for the API reference.
-
-We use `Numpy style docstrings `_
-and `Sphinx `_ to document this library.
-Sphinx, in turn, uses `reStructuredText `_
-as its markup language for adding code.
-
-We use the `Sphinx Autosummary extension `_
-to generate API references. In particular, you may want do look at the :code:`docs/generated`
-directory to see how these files look and where to add new functions, classes or modules.
-For example, if you add a new function to the :code:`sparse.COO` class, you would open up
-:code:`docs/generated/sparse.COO.rst`, and add in the name of the function where appropriate.
-
-To build the documentation, you can :code:`cd` into the :code:`docs` directory
-and run
-
-.. code-block:: bash
-
- sphinx-build -W -b html . _build/html
-
-After this, you can find an HTML version of the documentation in :code:`docs/_build/html/index.html`.
-
-Documentation for pull requests is automatically built on CircleCI and can be found in the build
-artifacts.
-
-Adding and Running Benchmarks
------------------------------
-
-We use `Airspeed Velocity `_ to run benchmarks. We have it set
-up to use ``conda``, but you can edit the configuration locally if you so wish.
diff --git a/docs/css/mkdocstrings.css b/docs/css/mkdocstrings.css
new file mode 100644
index 000000000..7a100aa12
--- /dev/null
+++ b/docs/css/mkdocstrings.css
@@ -0,0 +1,3 @@
+.md-tabs__link {
+ font-weight: bold;
+}
diff --git a/docs/index.md b/docs/index.md
new file mode 100644
index 000000000..fcead3cec
--- /dev/null
+++ b/docs/index.md
@@ -0,0 +1,97 @@
+# Sparse
+
+This implements sparse arrays of arbitrary dimension on top of
+[numpy][] and
+[`scipy.sparse`][]. It generalizes the
+[`scipy.sparse.coo_matrix`][] and
+[`scipy.sparse.dok_matrix`][] layouts, but
+extends beyond just rows and columns to an arbitrary number of
+dimensions.
+
+Additionally, this project maintains compatibility with the
+[`numpy.ndarray`][] interface rather than the
+[`numpy.matrix`][] interface used in
+[`scipy.sparse`][].
+
+These differences make this project useful in certain situations where
+scipy.sparse matrices are not well suited, but it should not be
+considered a full replacement. The data structures in pydata/sparse
+complement and can be used in conjunction with the fast linear algebra
+routines inside [`scipy.sparse`][]. A format conversion or copy may be
+required.
+
+## Motivation
+
+Sparse arrays, or arrays that are mostly empty or filled with zeros, are
+common in many scientific applications. To save space we often avoid
+storing these arrays in traditional dense formats, and instead choose
+different data structures. Our choice of data structure can
+significantly affect our storage and computational costs when working
+with these arrays.
+
+## Design
+
+The main data structure in this library follows the [Coordinate List
+(COO)](https://en.wikipedia.org/wiki/Sparse_matrix#Coordinate_list_(COO))
+layout for sparse matrices, but extends it to multiple dimensions.
+
+The COO layout stores the row index, column index, and value of
+every element:
+
+
+| row | col | data |
+|-----|-----|------|
+| 0 | 0 | 10 |
+| 0 | 2 | 13 |
+| 1 | 3 | 9 |
+| 3 | 8 | 21 |
+
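+For instance, that table corresponds to coordinates and values like the
+following (a sketch using this library's COO constructor; the shape is
+chosen just large enough to hold the indices):
+
+```python
+
+import sparse
+
+coords = [[0, 0, 1, 3],   # row indices
+          [0, 2, 3, 8]]   # column indices
+data = [10, 13, 9, 21]
+
+s = sparse.COO(coords, data, shape=(4, 9))
+```
+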
+It is straightforward to extend the COO layout to an arbitrary number of
+dimensions:
+
+
+| dim1 | dim2 | dim3 | \... | data |
+|------|------|------|------|------|
+| 0 | 0 | 0 | . | 10 |
+| 0 | 0 | 3 | . | 13 |
+| 0 | 2 | 2 | . | 9 |
+| 3 | 1 | 4 | . | 21 |
+
+This makes it easy to *store* a multidimensional sparse array, but we
+still need to reimplement all of the array operations like transpose,
+reshape, slicing, tensordot, reductions, etc., which can be challenging
+in general.
+
+This library also includes several other data structures. Similar to
+COO, the [Dictionary of Keys
+(DOK)](https://en.wikipedia.org/wiki/Sparse_matrix#Dictionary_of_keys_(DOK))
+format for sparse matrices generalizes well to an arbitrary number of
+dimensions. DOK is well-suited for writing and mutating. Most other
+operations are not supported for DOK. A common workflow may involve
+writing an array with DOK and then converting to another format for
+other operations.
+
+The [Compressed Sparse Row/Column
+(CSR/CSC)](https://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_column_(CSC_or_CCS))
+formats, which are widely used in scientific computing, are now supported by
+pydata/sparse. The CSR/CSC formats excel at compression and mathematical
+operations. While these formats are restricted to two dimensions,
+pydata/sparse supports the GCXS sparse array format, based on
+[GCRS/GCCS](https://ieeexplore.ieee.org/abstract/document/7237032/similar#similar),
+which generalizes CSR/CSC to n-dimensional arrays. Like their
+two-dimensional CSR/CSC counterparts, GCXS arrays compress well. Whereas
+the storage cost of COO depends heavily on the number of dimensions of
+the array, the number of dimensions only minimally affects the storage
+cost of GCXS arrays, which results in favorable compression ratios
+across many use cases.
+
+Together these formats cover a wide array of applications of sparsity.
+Additionally, with each format complying with the
+[`numpy.ndarray`][] interface and following
+the appropriate dispatching protocols, pydata/sparse arrays can interact
+with other array libraries and seamlessly take part in
+pydata-ecosystem-based workflows.
+
+## LICENSE
+
+This library is licensed under BSD-3.
diff --git a/docs/index.rst b/docs/index.rst
deleted file mode 100644
index 67b278896..000000000
--- a/docs/index.rst
+++ /dev/null
@@ -1,111 +0,0 @@
-Sparse
-======
-
-.. raw:: html
- :file: logo.svg
-
-This implements sparse arrays of arbitrary dimension on top of :obj:`numpy` and :obj:`scipy.sparse`.
-It generalizes the :obj:`scipy.sparse.coo_matrix` and :obj:`scipy.sparse.dok_matrix` layouts,
-but extends beyond just rows and columns to an arbitrary number of dimensions.
-
-Additionally, this project maintains compatibility with the :obj:`numpy.ndarray` interface
-rather than the :obj:`numpy.matrix` interface used in :obj:`scipy.sparse`
-
-These differences make this project useful in certain situations
-where scipy.sparse matrices are not well suited,
-but it should not be considered a full replacement.
-The data structures in pydata/sparse complement and can
-be used in conjunction with the fast linear algebra routines
-inside scipy.sparse. A format conversion or copy may be required.
-
-
-Motivation
-----------
-
-Sparse arrays, or arrays that are mostly empty or filled with zeros,
-are common in many scientific applications.
-To save space we often avoid storing these arrays in traditional dense formats,
-and instead choose different data structures.
-Our choice of data structure can significantly affect our storage and computational
-costs when working with these arrays.
-
-
-Design
-------
-
-The main data structure in this library follows the
-`Coordinate List (COO) `_
-layout for sparse matrices, but extends it to multiple dimensions.
-
-The COO layout, which stores the row index, column index, and value of every element:
-
-=== === ====
-row col data
-=== === ====
- 0 0 10
- 0 2 13
- 1 3 9
- 3 8 21
-=== === ====
-
-It is straightforward to extend the COO layout to an arbitrary number of
-dimensions:
-
-==== ==== ==== === ====
-dim1 dim2 dim3 ... data
-==== ==== ==== === ====
- 0 0 0 . 10
- 0 0 3 . 13
- 0 2 2 . 9
- 3 1 4 . 21
-==== ==== ==== === ====
-
-This makes it easy to *store* a multidimensional sparse array, but we still
-need to reimplement all of the array operations like transpose, reshape,
-slicing, tensordot, reductions, etc., which can be challenging in general.
-
-This library also includes several other data structures. Similar to COO,
-the `Dictionary of Keys (DOK) `_
-format for sparse matrices generalizes well to an arbitrary number of dimensions.
-DOK is well-suited for writing and mutating. Most other operations are not supported for DOK.
-A common workflow may involve writing an array with DOK and then converting to another
-format for other operations.
-
-The `Compressed Sparse Row/Column (CSR/CSC) `_
-formats are widely used in scientific computing are now supported
-by pydata/sparse. The CSR/CSC formats excel at compression and mathematical operations.
-While these formats are restricted to two dimensions, pydata/sparse supports
-the GCXS sparse array format, based on
-`GCRS/GCCS from `_
-which generalizes CSR/CSC to n-dimensional arrays.
-Like their two-dimensional CSR/CSC counterparts, GCXS arrays compress well.
-Whereas the storage cost of COO depends heavily on the number of dimensions of the array,
-the number of dimensions only minimally affects the storage cost of GCXS arrays,
-which results in favorable compression ratios across many use cases.
-
-Together these formats cover a wide array of applications of sparsity.
-Additionally, with each format complying with the :obj:`numpy.ndarray` interface and
-following the appropriate dispatching protocols,
-pydata/sparse arrays can interact with other array libraries and seamlessly
-take part in pydata-ecosystem-based workflows.
-
-LICENSE
--------
-
-This library is licensed under BSD-3
-
-.. toctree::
- :maxdepth: 3
- :hidden:
-
- install
- quickstart
- construct
- operations
- generated/sparse
- roadmap
- contributing
- changelog
- conduct
-
-.. _scipy.sparse: https://docs.scipy.org/doc/scipy/reference/sparse.html
diff --git a/docs/install.md b/docs/install.md
new file mode 100644
index 000000000..785d2600a
--- /dev/null
+++ b/docs/install.md
@@ -0,0 +1,21 @@
+# Install
+
+You can install this library with ``pip``:
+
+```bash
+pip install sparse
+```
+
+You can also install from source from GitHub, either by pip installing
+directly:
+```bash
+pip install git+https://github.com/pydata/sparse
+```
+Or by cloning the repository and installing locally:
+```bash
+git clone https://github.com/pydata/sparse.git
+cd sparse/
+pip install .
+```
+Note that this library is under active development and so some API churn should
+be expected.
diff --git a/docs/install.rst b/docs/install.rst
deleted file mode 100644
index bfc18b732..000000000
--- a/docs/install.rst
+++ /dev/null
@@ -1,24 +0,0 @@
-.. currentmodule:: sparse
-
-Install
-=======
-
-You can install this library with ``pip``:
-
-.. code-block:: bash
-
- pip install sparse
-
-You can also install from source from GitHub, either by pip installing
-directly::
-
- pip install git+https://github.com/pydata/sparse
-
-Or by cloning the repository and installing locally::
-
- git clone https://github.com/pydata/sparse.git
- cd sparse/
- pip install .
-
-Note that this library is under active development and so some API churn should
-be expected.
diff --git a/docs/javascripts/katex.js b/docs/javascripts/katex.js
new file mode 100644
index 000000000..25cfbab20
--- /dev/null
+++ b/docs/javascripts/katex.js
@@ -0,0 +1,10 @@
+document$.subscribe(({ body }) => {
+ renderMathInElement(body, {
+ delimiters: [
+ { left: "$$", right: "$$", display: true },
+ { left: "$", right: "$", display: false },
+ { left: "\\(", right: "\\)", display: false },
+ { left: "\\[", right: "\\]", display: true }
+ ],
+ })
+ })
diff --git a/docs/operations.md b/docs/operations.md
new file mode 100644
index 000000000..89d81691b
--- /dev/null
+++ b/docs/operations.md
@@ -0,0 +1,276 @@
+# Operations on [`sparse.COO`][] and [`sparse.GCXS`][] arrays
+
+## Operators
+
+[`sparse.COO`][] and [`sparse.GCXS`][] objects support a number of operations. They interact with scalars,
+[Numpy arrays][numpy.ndarray], other [`sparse.COO`][] and [`sparse.GCXS`][] objects,
+and [`scipy.sparse.spmatrix`][] objects, all following standard Python and Numpy
+conventions.
+
+For example, the following Numpy expression produces equivalent
+results for Numpy arrays, COO arrays, or a mix of the two:
+
+```python
+
+np.log(X.dot(beta.T) + 1)
+```
+
+However some operations are not supported, like operations that
+implicitly cause dense structures, or numpy functions that are not
+yet implemented for sparse arrays.
+
+```python
+
+np.linalg.cholesky(x) # sparse cholesky not implemented
+```
+
+This page describes those valid operations, and their limitations.
+
+**[`sparse.elemwise`][]**
+
+This function allows you to apply any arbitrary broadcasting function to any number of arguments
+where the arguments can be [`sparse.SparseArray`][] objects or [`scipy.sparse.spmatrix`][] objects.
+For example, the following will add two arrays:
+
+```python
+
+sparse.elemwise(np.add, x, y)
+```
+
+!!! warning
+
+ Previously, [`sparse.elemwise`][] was a method of the [`sparse.COO`][] class. Now,
+ it has been moved to the [sparse][] module.
+
+
+**Auto-Densification**
+
+Operations that would result in dense matrices, such as
+operations with [Numpy arrays][numpy.ndarray],
+raise a [`ValueError`][]. For example, the following will raise a
+[`ValueError`][] if `x` is a [`numpy.ndarray`][]:
+
+```python
+
+x + y
+```
+
+However, all of the following are valid operations.
+
+```python
+
+x + 0
+x != y
+x + y
+x == 5
+5 * x
+x / 7.3
+x != 0
+x == 0
+~x
+x + 5
+```
+
+We also support operations with a nonzero fill value. These are operations
+that map zero values to nonzero values, such as `x + 1` or `~x`.
+In these cases, they will produce an output with a fill value of `1` or `True`,
+assuming the original array has a fill value of `0` or `False` respectively.
+
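+For instance, a small sketch of how the fill value changes (the random array
+here is only illustrative and has the default fill value of zero):
+
+```python
+
+import sparse
+
+x = sparse.random((5, 5), density=0.2)
+
+y = x + 1            # maps the zero fill value to one...
+print(y.fill_value)  # ...so the result has a fill value of 1.0
+```
+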
+If densification is needed, it must be explicit. In other words, you must call
+[`sparse.SparseArray.todense`][] on the [`sparse.SparseArray`][] object. If both operands are [sparse.SparseArray][],
+both must be densified.
+
+**Operations with NumPy arrays**
+
+In certain situations, operations with NumPy arrays are also supported. For example,
+the following will work if `x` is [`sparse.COO`][] and `y` is a NumPy array:
+
+```python
+
+x * y
+```
+
+The following conditions must be met when performing element-wise operations with
+NumPy arrays:
+
+* The operation must produce a consistent fill value. In other words, the resulting
+ array must also be sparse.
+* Operating on the NumPy arrays must not increase the size when broadcasting the arrays.
+
+## Operations with [`scipy.sparse.spmatrix`][]
+
+Certain operations with [`scipy.sparse.spmatrix`][] are also supported.
+For example, the following are all allowed if `y` is a [`scipy.sparse.spmatrix`][]:
+
+```python
+
+x + y
+x - y
+x * y
+x > y
+x < y
+```
+
+In general, operating on a [`scipy.sparse.spmatrix`][] is the same as operating
+on [`sparse.COO`][] or [`sparse.GCXS`][], as long as it is to the right of the operator.
+
+!!! note
+
+ Results are not guaranteed if `x` is a [scipy.sparse.spmatrix][].
+ For this reason, we recommend that all Scipy sparse matrices should be explicitly
+ converted to [`sparse.COO`][] or [`sparse.GCXS`][] before any operations.
+
+
+## Broadcasting
+
+All binary operators support [broadcasting](https://numpy.org/doc/stable/user/basics.broadcasting.html).
+This means that (under certain conditions) you can perform binary operations
+on arrays with unequal shape. Namely, when the shape is missing a dimension,
+or when a dimension is `1`. For example, performing a binary operation
+on two `COO` arrays with shapes `(4,)` and `(5, 1)` yields
+an object of shape `(5, 4)`. The same happens with arrays of shape
+`(1, 4)` and `(5, 1)`. However, `(4, 1)` and `(5, 1)`
+will raise a [`ValueError`][].
+
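+A minimal sketch of the shapes involved (the random operands are only
+illustrative):
+
+```python
+
+import sparse
+
+a = sparse.random((4,), density=0.5)
+b = sparse.random((5, 1), density=0.5)
+
+assert (a + b).shape == (5, 4)
+```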
+
+## Element-wise Operations
+
+[`sparse.COO`][] and [`sparse.GCXS`][] arrays support a variety of element-wise operations. However, as
+with operators, operations that map zero to a nonzero value are not supported.
+
+To illustrate, the following are all possible, and will produce another
+[`sparse.SparseArray`][]:
+
+```python
+
+np.abs(x)
+np.sin(x)
+np.sqrt(x)
+np.conj(x)
+np.expm1(x)
+np.log1p(x)
+np.exp(x)
+np.cos(x)
+np.log(x)
+```
+
+As above, in the last three cases, an array with a nonzero fill value will be produced.
+
+Notice that you can apply any unary or binary [`numpy.ufunc`][] to
+[`sparse.COO`][] arrays, [`numpy.ndarray`][] objects, and scalars, and it will
+work so long as the result is not dense. When applying it to [`numpy.ndarray`][]
+objects, we check that operating on the array with zero would always produce a zero.
+
+
+## Reductions
+
+[`sparse.COO`][] and [`sparse.GCXS`][] objects support a number of reductions. However, not all important
+reductions are currently implemented (help welcome!). All of the following
+currently work:
+
+```python
+
+x.sum(axis=1)
+np.max(x)
+np.min(x, axis=(0, 2))
+x.prod()
+```
+
+**[`sparse.SparseArray.reduce`][]**
+
+This method can take an arbitrary [`numpy.ufunc`][] and performs a
+reduction using that method. For example, the following will perform
+a sum:
+
+```python
+
+x.reduce(np.add, axis=1)
+```
+
+!!! note
+
+ This library currently performs reductions by grouping together all
+ coordinates along the supplied axes and reducing those. Then, if the
+ number in a group is deficient, it reduces an extra time with zero.
+ As a result, if reductions can change by adding multiple zeros to
+ it, this method won't be accurate. However, it works in most cases.
+
+**Partial List of Supported Reductions**
+
+Although any binary [`numpy.ufunc`][] should work for reductions, when calling
+in the form `x.reduction()`, the following reductions are supported:
+
+* [`sparse.COO.sum`][]
+* [`sparse.COO.max`][]
+* [`sparse.COO.min`][]
+* [`sparse.COO.prod`][]
+
+
+## Indexing
+
+[`sparse.COO`][] and [`sparse.GCXS`][] arrays can be [indexed](https://numpy.org/doc/stable/user/basics.indexing.html)
+just like regular [`numpy.ndarray`][] objects. They support integer, slice and boolean indexing.
+However, currently, numpy advanced indexing is not properly supported. This
+means that all of the following work like in Numpy, except that they will produce
+[`sparse.SparseArray`][] arrays rather than [`numpy.ndarray`][] objects, and will produce
+scalars where expected. Assume that `z.shape` is `(5, 6, 7)`
+
+```python
+
+z[0]
+z[1, 3]
+z[1, 4, 3]
+z[:3, :2, 3]
+z[::-1, 1, 3]
+z[-1]
+```
+
+All of the following will raise an `IndexError`, like in Numpy 1.13 and later.
+
+```python
+
+z[6]
+z[3, 6]
+z[1, 4, 8]
+z[-6]
+```
+
+**Advanced Indexing**
+
+Advanced indexing (indexing arrays with other arrays) is supported, but only for indexing
+with a *single array*. Indexing a single array with multiple arrays is not supported at
+this time. As above, if `z.shape` is `(5, 6, 7)`, all of the following will
+work like NumPy:
+
+```python
+
+z[[0, 1, 2]]
+z[1, [3]]
+z[1, 4, [3, 6]]
+z[:3, :2, [1, 5]]
+```
+
+
+**Package Configuration**
+
+By default, when performing something like `np.array(COO)`, we allow the array
+to be converted into a dense one. To prevent this and raise a [`RuntimeError`][]
+instead, set the environment variable `SPARSE_AUTO_DENSIFY` to `0`.
+
+To raise a warning when creating a sparse array that takes no less
+memory than an equivalent dense array, set the environment variable
+`SPARSE_WARN_ON_TOO_DENSE` to `1`.
+
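+A sketch of setting this from Python (this assumes the variable is set before
+`sparse` is first imported, for example at the top of a script):
+
+```python
+
+import os
+
+# Disallow implicit densification: converting a sparse array with
+# np.array(...) will then raise a RuntimeError instead.
+os.environ["SPARSE_AUTO_DENSIFY"] = "0"
+
+import sparse  # noqa: E402  (imported after setting the environment variable)
+```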
+
+## Other Operations
+
+[`sparse.COO`][] and [`sparse.GCXS`][] arrays support a number of other common operations. Among them are
+[`sparse.dot`][], [`sparse.tensordot`][] [`sparse.einsum`][], [`sparse.concatenate`][]
+and [`sparse.stack`][], [`sparse.COO.transpose`][] and [`sparse.COO.reshape`][].
+You can view the full list on the [API reference page](../../api/BACKEND/).
+
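+A short sketch of a few of these operations (shapes chosen only for
+illustration):
+
+```python
+
+import sparse
+
+a = sparse.random((3, 4), density=0.5)
+b = sparse.random((4, 5), density=0.5)
+
+c = sparse.dot(a, b)                # matrix product, shape (3, 5)
+d = sparse.tensordot(a, b, axes=1)  # the same contraction via tensordot
+e = sparse.stack([a, a], axis=0)    # shape (2, 3, 4)
+f = a.transpose().reshape((12, 1))  # COO.transpose and COO.reshape
+```
+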
+!!! note
+
+ Some operations require zero fill-values (such as [`sparse.COO.nonzero`][])
+ and others (such as [`sparse.concatenate`][]) require that all inputs have consistent fill-values.
+ For details, check the API reference.
diff --git a/docs/operations.rst b/docs/operations.rst
deleted file mode 100644
index 354304931..000000000
--- a/docs/operations.rst
+++ /dev/null
@@ -1,273 +0,0 @@
-.. currentmodule:: sparse
-
-Operations on :obj:`COO` and :obj:`GCXS` arrays
-===============================================
-
-.. _operations-operators:
-
-Operators
----------
-
-:obj:`COO` and :obj:`GCXS` objects support a number of operations. They interact with scalars,
-:doc:`Numpy arrays `, other :obj:`COO` and :obj:`GCXS` objects,
-and :obj:`scipy.sparse.spmatrix` objects, all following standard Python and Numpy
-conventions.
-
-For example, the following Numpy expression produces equivalent
-results for both Numpy arrays, COO arrays, or a mix of the two:
-
-.. code-block:: python
-
- np.log(X.dot(beta.T) + 1)
-
-However some operations are not supported, like operations that
-implicitly cause dense structures, or numpy functions that are not
-yet implemented for sparse arrays.
-
-.. code-block:: python
-
- np.linalg.cholesky(x) # sparse cholesky not implemented
-
-
-This page describes those valid operations, and their limitations.
-
-:obj:`elemwise`
-~~~~~~~~~~~~~~~
-This function allows you to apply any arbitrary broadcasting function to any number of arguments
-where the arguments can be :obj:`SparseArray` objects or :obj:`scipy.sparse.spmatrix` objects.
-For example, the following will add two arrays:
-
-.. code-block:: python
-
- sparse.elemwise(np.add, x, y)
-
-
-.. warning:: Previously, :obj:`elemwise` was a method of the :obj:`COO` class. Now,
- it has been moved to the :obj:`sparse` module.
-
-.. _operations-auto-densification:
-
-Auto-Densification
-~~~~~~~~~~~~~~~~~~
-Operations that would result in dense matrices, such as
-operations with :doc:`Numpy arrays `
-raises a :obj:`ValueError`. For example, the following will raise a
-:obj:`ValueError` if :code:`x` is a :obj:`numpy.ndarray`:
-
-.. code-block:: python
-
- x + y
-
-However, all of the following are valid operations.
-
-.. code-block:: python
-
- x + 0
- x != y
- x + y
- x == 5
- 5 * x
- x / 7.3
- x != 0
- x == 0
- ~x
- x + 5
-
-We also support operations with a nonzero fill value. These are operations
-that map zero values to nonzero values, such as :code:`x + 1` or :code:`~x`.
-In these cases, they will produce an output with a fill value of :code:`1` or :code:`True`,
-assuming the original array has a fill value of :code:`0` or :code:`False` respectively.
-
-If densification is needed, it must be explicit. In other words, you must call
-:obj:`SparseArray.todense` on the :obj:`SparseArray` object. If both operands are :obj:`SparseArray`,
-both must be densified.
-
-Operations with NumPy arrays
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In certain situations, operations with NumPy arrays are also supported. For example,
-the following will work if :code:`x` is :obj:`COO` and :code:`y` is a NumPy array:
-
-.. code-block:: python
-
- x * y
-
-The following conditions must be met when performing element-wise operations with
-NumPy arrays:
-
-* The operation must produce a consistent fill-values. In other words, the resulting
- array must also be sparse.
-* Operating on the NumPy arrays must not increase the size when broadcasting the arrays.
-
-Operations with :obj:`scipy.sparse.spmatrix`
---------------------------------------------
-Certain operations with :obj:`scipy.sparse.spmatrix` are also supported.
-For example, the following are all allowed if :code:`y` is a :obj:`scipy.sparse.spmatrix`:
-
-.. code-block:: python
-
- x + y
- x - y
- x * y
- x > y
- x < y
-
-In general, operating on a :code:`scipy.sparse.spmatrix` is the same as operating
-on :obj:`COO` or :obj:`GCXS`, as long as it is to the right of the operator.
-
-.. note:: Results are not guaranteed if :code:`x` is a :obj:`scipy.sparse.spmatrix`.
- For this reason, we recommend that all Scipy sparse matrices should be explicitly
- converted to :obj:`COO` or :obj:`GCXS` before any operations.
-
-
-Broadcasting
-------------
-All binary operators support :doc:`broadcasting `.
-This means that (under certain conditions) you can perform binary operations
-on arrays with unequal shape. Namely, when the shape is missing a dimension,
-or when a dimension is :code:`1`. For example, performing a binary operation
-on two :obj:`COO` arrays with shapes :code:`(4,)` and :code:`(5, 1)` yields
-an object of shape :code:`(5, 4)`. The same happens with arrays of shape
-:code:`(1, 4)` and :code:`(5, 1)`. However, :code:`(4, 1)` and :code:`(5, 1)`
-will raise a :obj:`ValueError`.
-
-.. _operations-elemwise:
-
-Element-wise Operations
------------------------
-:obj:`COO` and :obj:`GCXS` arrays support a variety of element-wise operations. However, as
-with operators, operations that map zero to a nonzero value are not supported.
-
-To illustrate, the following are all possible, and will produce another
-:obj:`SparseArray`:
-
-.. code-block:: python
-
- np.abs(x)
- np.sin(x)
- np.sqrt(x)
- np.conj(x)
- np.expm1(x)
- np.log1p(x)
- np.exp(x)
- np.cos(x)
- np.log(x)
-
-As above, in the last three cases, an array with a nonzero fill value will be produced.
-
-Notice that you can apply any unary or binary :doc:`numpy.ufunc ` to :obj:`COO`
-arrays, and :obj:`numpy.ndarray` objects and scalars and it will work so
-long as the result is not dense. When applying to :obj:`numpy.ndarray` objects,
-we check that operating on the array with zero would always produce a zero.
-
-.. _operations-reductions:
-
-Reductions
-----------
-:obj:`COO` and :obj:`GCXS` objects support a number of reductions. However, not all important
-reductions are currently implemented (help welcome!) All of the following
-currently work:
-
-.. code-block:: python
-
- x.sum(axis=1)
- np.max(x)
- np.min(x, axis=(0, 2))
- x.prod()
-
-
-:obj:`SparseArray.reduce`
-~~~~~~~~~~~~~~~~~~~~~~~~~
-This method can take an arbitrary :doc:`numpy.ufunc ` and performs a
-reduction using that method. For example, the following will perform
-a sum:
-
-.. code-block:: python
-
- x.reduce(np.add, axis=1)
-
-.. note::
- This library currently performs reductions by grouping together all
- coordinates along the supplied axes and reducing those. Then, if the
- number in a group is deficient, it reduces an extra time with zero.
- As a result, if reductions can change by adding multiple zeros to
- it, this method won't be accurate. However, it works in most cases.
-
-Partial List of Supported Reductions
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Although any binary :doc:`numpy.ufunc ` should work for reductions, when calling
-in the form :code:`x.reduction()`, the following reductions are supported:
-
-* :obj:`COO.sum`
-* :obj:`COO.max`
-* :obj:`COO.min`
-* :obj:`COO.prod`
-
-.. _operations-indexing:
-
-Indexing
---------
-:obj:`COO` and :obj:`GCXS` arrays can be :obj:`indexed ` just like regular
-:obj:`numpy.ndarray` objects. They support integer, slice and boolean indexing.
-However, currently, numpy advanced indexing is not properly supported. This
-means that all of the following work like in Numpy, except that they will produce
-:obj:`SparseArray` arrays rather than :obj:`numpy.ndarray` objects, and will produce
-scalars where expected. Assume that :code:`z.shape` is :code:`(5, 6, 7)`
-
-.. code-block:: python
-
- z[0]
- z[1, 3]
- z[1, 4, 3]
- z[:3, :2, 3]
- z[::-1, 1, 3]
- z[-1]
-
-All of the following will raise an :obj:`IndexError`, like in Numpy 1.13 and later.
-
-.. code-block:: python
-
- z[6]
- z[3, 6]
- z[1, 4, 8]
- z[-6]
-
-
-Advanced Indexing
-~~~~~~~~~~~~~~~~~
-
-Advanced indexing (indexing arrays with other arrays) is supported, but only for indexing
-with a *single array*. Indexing a single array with multiple arrays is not supported at
-this time. As above, if :code:`z.shape` is :code:`(5, 6, 7)`, all of the following will
-work like NumPy:
-
-.. code-block:: python
-
- z[[0, 1, 2]]
- z[1, [3]]
- z[1, 4, [3, 6]]
- z[:3, :2, [1, 5]]
-
-
-Package Configuration
----------------------
-
-By default, when performing something like ``np.array(COO)``, we allow the array
-to be converted into a dense one. To prevent this and raise a :obj:`RuntimeError`
-instead, set the environment variable ``SPARSE_AUTO_DENSIFY`` to ``0``.
-
-If it is desired to raise a warning if creating a sparse array that takes no less
-memory than an equivalent desne array, set the environment variable
-``SPARSE_WARN_ON_TOO_DENSE`` to ``1``.
-
-.. _operations-other:
-
-Other Operations
-----------------
-:obj:`COO` and :obj:`GCXS` arrays support a number of other common operations. Among them are
-:obj:`dot`, :obj:`tensordot`, :obj:`einsum`, :obj:`concatenate`
-and :obj:`stack`, :obj:`transpose ` and :obj:`reshape `.
-You can view the full list on the :doc:`API reference page `.
-
-.. note:: Some operations require zero fill-values (such as :obj:`nonzero `)
- and others (such as :obj:`concatenate`) require that all inputs have consistent fill-values.
- For details, check the API reference.
diff --git a/docs/quickstart.md b/docs/quickstart.md
new file mode 100644
index 000000000..670ea1b46
--- /dev/null
+++ b/docs/quickstart.md
@@ -0,0 +1,67 @@
+# Getting Started
+
+## Install
+
+If you haven't already, install the `sparse` library:
+
+```bash
+pip install sparse
+```
+
+## Create
+
+To start, let's construct a [`sparse.COO`][] array from a [`numpy.ndarray`][]:
+
+```python
+
+import numpy as np
+import sparse
+
+x = np.random.random((100, 100, 100))
+x[x < 0.9] = 0 # fill most of the array with zeros
+
+s = sparse.COO(x) # convert to sparse array
+```
+
+These store the same information and support many of the same operations,
+but the sparse version takes up less space in memory:
+
+```python
+>>> x.nbytes
+8000000
+>>> s.nbytes
+1102706
+>>> s
+
+```
+
+For more efficient ways to construct sparse arrays,
+see the documentation on [Construct sparse arrays][construct-sparse-arrays].
+
+## Compute
+
+Many of the normal Numpy operations work on [`sparse.COO`][] objects just like on [`numpy.ndarray`][] objects.
+This includes arithmetic, [`numpy.ufunc`][] operations, or functions like tensordot and transpose.
+
+```python
+>>> np.sin(s) + s.T * 1
+
+```
+
+However, operations which map zero elements to nonzero will usually change the fill-value
+instead of raising an error.
+
+```python
+>>> y = s + 5
+
+```
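+
+The result above stays sparse, but it records the new background value. As a quick
+check (a sketch, assuming `s` has the default fill value of `0.0`):
+
+```python
+>>> y.fill_value
+5.0
+```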
+
+If you're sure you want to convert a sparse array to a dense one,
+you can use the `todense` method (which returns a [`numpy.ndarray`][]):
+
+```python
+y = s.todense() + 5
+```
+
+For more operations, see the [operations documentation][operators]
+or the [API reference page](../../api/backend).
diff --git a/docs/quickstart.rst b/docs/quickstart.rst
deleted file mode 100644
index c645582b5..000000000
--- a/docs/quickstart.rst
+++ /dev/null
@@ -1,72 +0,0 @@
-.. currentmodule:: sparse
-
-Getting Started
-===============
-
-Install
--------
-
-If you haven't already, install the ``sparse`` library
-
-.. code-block:: bash
-
- pip install sparse
-
-Create
-------
-
-To start, lets construct a sparse :obj:`COO` array from a :obj:`numpy.ndarray`:
-
-.. code-block:: python
-
- import numpy as np
- import sparse
-
- x = np.random.random((100, 100, 100))
- x[x < 0.9] = 0 # fill most of the array with zeros
-
- s = sparse.COO(x) # convert to sparse array
-
-These store the same information and support many of the same operations,
-but the sparse version takes up less space in memory
-
-.. code-block:: python
-
- >>> x.nbytes
- 8000000
- >>> s.nbytes
- 1102706
- >>> s
-
-
-For more efficient ways to construct sparse arrays,
-see documentation on :doc:`Constructing Arrays `.
-
-Compute
--------
-
-Many of the normal Numpy operations work on :obj:`COO` objects just like on :obj:`numpy.ndarray` objects.
-This includes arithmetic, :doc:`numpy.ufunc ` operations, or functions like tensordot and transpose.
-
-.. code-block:: python
-
- >>> np.sin(s) + s.T * 1
-
-
-However, operations which map zero elements to nonzero will usually change the fill-value
-instead of raising an error.
-
-.. code-block:: python
-
- >>> y = s + 5
-
-
-However, if you're sure you want to convert a sparse array to a dense one,
-you can use the ``todense`` method (which will result in a :obj:`numpy.ndarray`):
-
-.. code-block:: python
-
- y = s.todense() + 5
-
-For more operations see the :doc:`Operations documentation `
-or the :doc:`API reference `.
diff --git a/docs/roadmap.rst b/docs/roadmap.md
similarity index 69%
rename from docs/roadmap.rst
rename to docs/roadmap.md
index f5e64b499..1b0be9b3f 100644
--- a/docs/roadmap.rst
+++ b/docs/roadmap.md
@@ -1,23 +1,21 @@
-Roadmap
-=======
+# Roadmap
For a brochure version of this roadmap, see
-`this link `_.
+[this link](https://docs.wixstatic.com/ugd/095d2c_ac81d19db47047c79a55da7a6c31cf66.pdf).
-Background
-----------
+
+## Background
The aim of PyData/Sparse is to create sparse containers that implement the ndarray
interface. Traditionally in the PyData ecosystem, sparse arrays have been provided
-by the ``scipy.sparse`` submodule. All containers there depend on and emulate the
-``numpy.matrix`` interface. This means that they are limited to two dimensions and also
-don’t work well in places where ``numpy.ndarray`` would work.
+by the `scipy.sparse` submodule. All containers there depend on and emulate the
+`numpy.matrix` interface. This means that they are limited to two dimensions and also
+don’t work well in places where `numpy.ndarray` would work.
-PyData/Sparse is well on its way to replacing ``scipy.sparse`` as the de-facto sparse array
+PyData/Sparse is well on its way to replacing `scipy.sparse` as the de-facto sparse array
implementation in the PyData ecosystem.
-Topics
-------
+## Topics
* More storage formats
* Better performance/algorithms
@@ -27,8 +25,7 @@ Topics
* CuPy integration for GPU-acceleration
* Maintenance and General Improvements
-More Storage Formats
---------------------
+## More Storage Formats
In the sparse domain, you have to make a choice of format when representing your array in
memory, and different formats have different trade-offs. For example:
@@ -41,42 +38,37 @@ memory, and different formats have different trade-offs. For example:
The most important formats are, of course, CSR and CSC, because they allow zero-copy interaction
with a number of libraries including MKL, LAPACK and others. This will allow PyData/Sparse to
-quickly reach the functionality of ``scipy.sparse``, accelerating the path to its replacement.
+quickly reach the functionality of `scipy.sparse`, accelerating the path to its replacement.
-Better Performance/Algorithms
------------------------------
+## Better Performance/Algorithms
There are a few places in scipy.sparse where algorithms are sub-optimal, sometimes due to reliance
on NumPy which doesn’t have these algorithms. We intend to both improve the algorithms in NumPy,
giving the broader community a chance to use them; as well as in PyData/Sparse, to reach optimal
efficiency in the broadest use-cases.
-Covering More of the NumPy API
-------------------------------
+## Covering More of the NumPy API
Our eventual aim is to cover all areas of NumPy where algorithms exist that give sparse arrays an edge
over dense arrays. Currently, PyData/Sparse supports reductions, element-wise functions and other common
functions such as stacking, concatenating and tensor products. Common uses of sparse arrays include
linear algebra and graph theoretic subroutines, so we plan on covering those first.
-SciPy Integration
------------------
+## SciPy Integration
PyData/Sparse aims to build containers and elementary operations on them, such as element-wise operations,
-reductions and so on. We plan on modifying the current graph theoretic subroutines in ``scipy.sparse.csgraph``
-to support PyData/Sparse arrays. The same applies for linear algebra and ``scipy.sparse.linalg``.
+reductions and so on. We plan on modifying the current graph theoretic subroutines in `scipy.sparse.csgraph`
+to support PyData/Sparse arrays. The same applies for linear algebra and `scipy.sparse.linalg`.
-CuPy integration for GPU-acceleration
--------------------------------------
+## CuPy integration for GPU-acceleration
CuPy is a project that implements a large portion of NumPy’s ndarray interface on GPUs. We plan to integrate
with CuPy so that it’s possible to accelerate sparse arrays on GPUs.
-Completed Tasks
-===============
+[](){#completed}
+# Completed Tasks
-Dask Integration for High Scalability
--------------------------------------
+## Dask Integration for High Scalability
Dask is a project that takes ndarray style containers and then allows them to scale across multiple cores or
clusters. We plan on tighter integration and cooperation with the Dask team to ensure the highest amount of
@@ -85,16 +77,14 @@ Dask functionality works with sparse arrays.
Currently, integration with Dask is supported via array protocols. When more of the NumPy API (e.g. array
creation functions) becomes available through array protocols, it will automatically be supported by Dask.
-(Partial) SciPy Integration
----------------------------
+## (Partial) SciPy Integration
-Support for ``scipy.sparse.linalg`` has been completed. We hope to add support for ``scipy.sparse.csgraph``
+Support for `scipy.sparse.linalg` has been completed. We hope to add support for `scipy.sparse.csgraph`
in the future.
-More Storage Formats
---------------------
+## More Storage Formats
GCXS, a compressed n-dimensional array format based on the GCRS/GCCS formats of
-`Shaikh and Hasan 2015 `_, has been added.
+[Shaikh and Hasan 2015](https://ieeexplore.ieee.org/document/7237032), has been added.
In conjunction with this work, the CSR/CSC matrix formats are now a part of pydata/sparse.
We plan to add better-performing algorithms for many of the operations currently supported.
diff --git a/mkdocs.yml b/mkdocs.yml
new file mode 100644
index 000000000..df93faff8
--- /dev/null
+++ b/mkdocs.yml
@@ -0,0 +1,76 @@
+site_name: sparse
+repo_url: https://github.com/pydata/sparse.git
+edit_uri: edit/main/docs/
+theme:
+ name: material
+ palette:
+ primary: black
+ font: false # avoid Google Fonts to adhere to data privacy regulations
+ logo: assets/images/logo.png
+ favicon: assets/images/logo.svg
+ features:
+ - navigation.tracking
+ - navigation.instant
+ - navigation.instant.progress
+ - navigation.prune
+ - navigation.footer
+ - navigation.indexes
+ - content.code.copy
+
+markdown_extensions:
+ - tables
+ - admonition # This line, pymdownx.details and pymdownx.superfences are used for admonitions (warnings)
+ - pymdownx.details
+ - pymdownx.superfences
+ - codehilite
+ - toc:
+ toc_depth: 2
+ - pymdownx.arithmatex: # To display math content with KaTeX
+ generic: true
+ - attr_list # To be able to link to a header on another page
+
+extra_javascript:
+ - javascripts/katex.js
+ - https://unpkg.com/katex@0/dist/katex.min.js
+ - https://unpkg.com/katex@0/dist/contrib/auto-render.min.js
+
+extra_css:
+ - https://unpkg.com/katex@0/dist/katex.min.css
+ - css/mkdocstrings.css
+
+plugins:
+- search
+- section-index
+- autorefs
+- gen-files:
+ scripts:
+ - scripts/gen_ref_pages.py
+- literate-nav
+- mkdocstrings:
+ handlers:
+ python:
+ import:
+ - https://numpy.org/doc/stable/objects.inv
+ - https://docs.python.org/3/objects.inv
+ - https://docs.scipy.org/doc/scipy/objects.inv
+ options:
+ inherited_members: yes
+ show_root_members_full_path: false
+ show_if_no_docstring: true
+ members_order: source
+ docstring_style: numpy
+ filters: ["!^_"]
+
+nav:
+- index.md
+- install.md
+- quickstart.md
+- construct.md
+- operations.md
+- API:
+ - api/*
+- roadmap.md
+# - completed-tasks.md
+- contributing.md
+- changelog.md
+- conduct.md
diff --git a/pyproject.toml b/pyproject.toml
index 5d9201dfc..5630509ae 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -27,7 +27,15 @@ classifiers = [
]
[project.optional-dependencies]
-docs = ["sphinx", "sphinx_rtd_theme", "scipy"]
+docs = [
+ "mkdocs-material",
+ "mkdocstrings",
+ "mkdocstrings[python]",
+ "mkdocs-gen-files",
+ "mkdocs-literate-nav",
+ "mkdocs-section-index",
+ "scipy",
+]
tests = [
"dask[array]",
"pytest>=3.5",
diff --git a/scripts/gen_ref_pages.py b/scripts/gen_ref_pages.py
new file mode 100644
index 000000000..f6ab7f238
--- /dev/null
+++ b/scripts/gen_ref_pages.py
@@ -0,0 +1,20 @@
+"""Generate the code reference pages."""
+
+from pathlib import Path
+
+import sparse
+
+import mkdocs_gen_files
+
+nav = mkdocs_gen_files.Nav()
+
+root = Path(__file__).parent.parent
+
+# Create one Markdown page per public member of the top-level `sparse` namespace.
+for item in dir(sparse):
+    # Skip private names and anything that is not defined within the `sparse` package.
+    if item.startswith("_") or not getattr(getattr(sparse, item), "__module__", "").startswith("sparse"):
+        continue
+    full_doc_path = Path("api/" + item + ".md")
+    with mkdocs_gen_files.open(Path("api", f"{item}.md"), "w") as fd:
+        # Each page gets a heading plus a mkdocstrings directive that renders the member's docs.
+        print(f"# {item}", file=fd)
+        print("::: " + f"sparse.{item}", file=fd)
+    mkdocs_gen_files.set_edit_path(full_doc_path, root)
diff --git a/sparse/numba_backend/_common.py b/sparse/numba_backend/_common.py
index 19c0a5630..6cf6ce3fa 100644
--- a/sparse/numba_backend/_common.py
+++ b/sparse/numba_backend/_common.py
@@ -91,12 +91,12 @@ def check_class_nan(test):
def tensordot(a, b, axes=2, *, return_type=None):
"""
- Perform the equivalent of :obj:`numpy.tensordot`.
+ Perform the equivalent of [`numpy.tensordot`][].
Parameters
----------
a, b : Union[SparseArray, np.ndarray, scipy.sparse.spmatrix]
- The arrays to perform the :code:`tensordot` operation on.
+ The arrays to perform the `tensordot` operation on.
axes : tuple[Union[int, tuple[int], Union[int, tuple[int]], optional
The axes to match when performing the sum.
return_type : {None, COO, np.ndarray}, optional
@@ -114,7 +114,7 @@ def tensordot(a, b, axes=2, *, return_type=None):
See Also
--------
- numpy.tensordot : NumPy equivalent function
+ - [`numpy.tensordot`][] : NumPy equivalent function
"""
from ._compressed import GCXS
@@ -202,12 +202,12 @@ def tensordot(a, b, axes=2, *, return_type=None):
def matmul(a, b):
- """Perform the equivalent of :obj:`numpy.matmul` on two arrays.
+ """Perform the equivalent of [`numpy.matmul`][] on two arrays.
Parameters
----------
a, b : Union[SparseArray, np.ndarray, scipy.sparse.spmatrix]
- The arrays to perform the :code:`matmul` operation on.
+ The arrays to perform the `matmul` operation on.
Returns
-------
@@ -221,8 +221,8 @@ def matmul(a, b):
See Also
--------
- numpy.matmul : NumPy equivalent function.
- COO.__matmul__ : Equivalent function for COO objects.
+ - [`numpy.matmul`][] : NumPy equivalent function.
+ - `COO.__matmul__`: Equivalent function for COO objects.
"""
check_zero_fill_value(a, b)
if not hasattr(a, "ndim") or not hasattr(b, "ndim"):
@@ -281,12 +281,12 @@ def _matmul_recurser(a, b):
def dot(a, b):
"""
- Perform the equivalent of :obj:`numpy.dot` on two arrays.
+ Perform the equivalent of [`numpy.dot`][] on two arrays.
Parameters
----------
a, b : Union[SparseArray, np.ndarray, scipy.sparse.spmatrix]
- The arrays to perform the :code:`dot` operation on.
+ The arrays to perform the `dot` operation on.
Returns
-------
@@ -300,8 +300,8 @@ def dot(a, b):
See Also
--------
- numpy.dot : NumPy equivalent function.
- COO.dot : Equivalent function for COO objects.
+ - [`numpy.dot`][] : NumPy equivalent function.
+ - [`sparse.COO.dot`][] : Equivalent function for COO objects.
"""
check_zero_fill_value(a, b)
if not hasattr(a, "ndim") or not hasattr(b, "ndim"):
@@ -1384,7 +1384,7 @@ def _einsum_single(lhs, rhs, operand):
def einsum(*operands, **kwargs):
"""
- Perform the equivalent of :obj:`numpy.einsum`.
+ Perform the equivalent of [`numpy.einsum`][].
Parameters
----------
@@ -1397,7 +1397,7 @@ def einsum(*operands, **kwargs):
These are the arrays for the operation.
dtype : data-type, optional
If provided, forces the calculation to use the data type specified.
- Default is ``None``.
+ Default is `None`.
**kwargs : dict, optional
Any additional arguments to pass to the function.
@@ -1482,11 +1482,11 @@ def stack(arrays, axis=0, compressed_axes=None):
Raises
------
ValueError
- If all elements of :code:`arrays` don't have the same fill-value.
+ If all elements of `arrays` don't have the same fill-value.
See Also
--------
- numpy.stack : NumPy equivalent function
+ [`numpy.stack`][]: NumPy equivalent function
"""
from ._compressed import GCXS
@@ -1521,11 +1521,11 @@ def concatenate(arrays, axis=0, compressed_axes=None):
Raises
------
ValueError
- If all elements of :code:`arrays` don't have the same fill-value.
+ If all elements of `arrays` don't have the same fill-value.
See Also
--------
- numpy.concatenate : NumPy equivalent function
+ [`numpy.concatenate`][] : NumPy equivalent function
"""
from ._compressed import GCXS
@@ -1876,13 +1876,18 @@ def outer(a, b, out=None):
def asnumpy(a, dtype=None, order=None):
"""Returns a dense numpy array from an arbitrary source array.
- Args:
- a: Arbitrary object that can be converted to :class:`numpy.ndarray`.
- order ({'C', 'F', 'A'}): The desired memory layout of the output
- array. When ``order`` is 'A', it uses 'F' if ``a`` is
- fortran-contiguous and 'C' otherwise.
- Returns:
- numpy.ndarray: Converted array on the host memory.
+ Parameters
+ ----------
+ a : array_like
+ Arbitrary object that can be converted to [`numpy.ndarray`][].
+ order : {'C', 'F', 'A'}
+ The desired memory layout of the output
+ array. When `order` is 'A', it uses 'F' if `a` is
+ Fortran-contiguous and 'C' otherwise.
+
+ Returns
+ -------
+ numpy.ndarray: Converted array on the host memory.
"""
from ._sparse_array import SparseArray
@@ -1944,7 +1949,7 @@ def moveaxis(a, source, destination):
def pad(array, pad_width, mode="constant", **kwargs):
"""
- Performs the equivalent of :obj:`numpy.pad` for :obj:`SparseArray`. Note that
+ Performs the equivalent of [`numpy.pad`][] for [`sparse.SparseArray`][]. Note that
this function returns a new array instead of a view.
Parameters
@@ -1978,7 +1983,7 @@ def pad(array, pad_width, mode="constant", **kwargs):
See Also
--------
- :obj:`numpy.pad` : NumPy equivalent function
+ [`numpy.pad`][] : NumPy equivalent function
"""
if not isinstance(array, SparseArray):
diff --git a/sparse/numba_backend/_compressed/compressed.py b/sparse/numba_backend/_compressed/compressed.py
index 314ca9f9d..85bce095c 100644
--- a/sparse/numba_backend/_compressed/compressed.py
+++ b/sparse/numba_backend/_compressed/compressed.py
@@ -78,12 +78,12 @@ def _from_coo(x, compressed_axes=None, idx_dtype=None):
class GCXS(SparseArray, NDArrayOperatorsMixin):
- """
+ r"""
A sparse multidimensional array.
This is stored in GCXS format, a generalization of the GCRS/GCCS formats
- from 'Efficient storage scheme for n-dimensional sparse array: GCRS/GCCS':
- https://ieeexplore.ieee.org/document/7237032. GCXS generalizes the CRS/CCS
+ from [Efficient storage scheme for n-dimensional sparse array: GCRS/GCCS](
+ https://ieeexplore.ieee.org/document/7237032). GCXS generalizes the CRS/CCS
sparse matrix formats.
For arrays with ndim == 2, GCXS is the same CSR/CSC.
@@ -117,7 +117,7 @@ class GCXS(SparseArray, NDArrayOperatorsMixin):
Attributes
----------
data : numpy.ndarray (nnz,)
- An array holding the nonzero values corresponding to :obj:`GCXS.indices`.
+ An array holding the nonzero values corresponding to `indices`.
indices : numpy.ndarray (nnz,)
An array holding the coordinates of every nonzero element along uncompressed dimensions.
indptr : numpy.ndarray
@@ -127,7 +127,7 @@ class GCXS(SparseArray, NDArrayOperatorsMixin):
See Also
--------
- DOK : A mostly write-only sparse array.
+ [`sparse.DOK`][] : A mostly write-only sparse array.
"""
__array_priority__ = 12
@@ -235,8 +235,8 @@ def dtype(self):
See Also
--------
- numpy.ndarray.dtype : Numpy equivalent property.
- scipy.sparse.csr_matrix.dtype : Scipy equivalent property.
+ - [`numpy.ndarray.dtype`][] : Numpy equivalent property.
+ - [`scipy.sparse.csr_matrix.dtype`][] : Scipy equivalent property.
"""
return self.data.dtype
@@ -252,10 +252,10 @@ def nnz(self):
See Also
--------
- COO.nnz : Equivalent :obj:`COO` array property.
- DOK.nnz : Equivalent :obj:`DOK` array property.
- numpy.count_nonzero : A similar Numpy function.
- scipy.sparse.csr_matrix.nnz : The Scipy equivalent property.
+ - [`sparse.COO.nnz`][] : Equivalent [`sparse.COO`][] array property.
+ - [`sparse.DOK.nnz`][] : Equivalent [`sparse.DOK`][] array property.
+ - [`numpy.count_nonzero`][] : A similar Numpy function.
+ - [`scipy.sparse.coo_matrix.nnz`][] : The Scipy equivalent property.
"""
return self.data.shape[0]
@@ -263,13 +263,16 @@ def nnz(self):
def format(self):
"""
The storage format of this array.
+
Returns
-------
str
The storage format of this array.
+
See Also
-------
- scipy.sparse.dok_matrix.format : The Scipy equivalent property.
+ [`scipy.sparse.dok_matrix.format`][] : The Scipy equivalent property.
+
Examples
-------
>>> import sparse
@@ -295,7 +298,7 @@ def nbytes(self):
See Also
--------
- numpy.ndarray.nbytes : The equivalent Numpy property.
+ [`numpy.ndarray.nbytes`][] : The equivalent Numpy property.
"""
return self.data.nbytes + self.indices.nbytes + self.indptr.nbytes
@@ -418,7 +421,7 @@ def change_compressed_axes(self, new_compressed_axes):
def tocoo(self):
"""
- Convert this :obj:`GCXS` array to a :obj:`COO`.
+ Convert this [`sparse.GCXS`][] array to a [`sparse.COO`][].
Returns
-------
@@ -455,8 +458,8 @@ def tocoo(self):
def todense(self):
"""
- Convert this :obj:`GCXS` array to a dense :obj:`numpy.ndarray`. Note that
- this may take a large amount of memory if the :obj:`GCXS` object's :code:`shape`
+ Convert this [`sparse.GCXS`][] array to a dense [`numpy.ndarray`][]. Note that
+ this may take a large amount of memory if the [`sparse.GCXS`][] object's `shape`
is large.
Returns
@@ -466,9 +469,10 @@ def todense(self):
See Also
--------
- DOK.todense : Equivalent :obj:`DOK` array method.
- COO.todense : Equivalent :obj:`COO` array method.
- scipy.sparse.coo_matrix.todense : Equivalent Scipy method.
+ - [`sparse.DOK.todense`][] : Equivalent [`sparse.DOK`][] array method.
+ - [`sparse.COO.todense`][] : Equivalent [`sparse.COO`][] array method.
+ - [`scipy.sparse.coo_matrix.todense`][] : Equivalent Scipy method.
+
"""
if self.compressed_axes is None:
out = np.full(self.shape, self.fill_value, self.dtype)
@@ -487,7 +491,7 @@ def todok(self):
def to_scipy_sparse(self, accept_fv=None):
"""
- Converts this :obj:`GCXS` object into a :obj:`scipy.sparse.csr_matrix` or `scipy.sparse.csc_matrix`.
+ Converts this [`sparse.GCXS`][] object into a [`scipy.sparse.csr_matrix`][] or [`scipy.sparse.csc_matrix`][].
Parameters
----------
@@ -496,7 +500,7 @@ def to_scipy_sparse(self, accept_fv=None):
Returns
-------
- :obj:`scipy.sparse.csr_matrix` or `scipy.sparse.csc_matrix`
+ scipy.sparse.csr_matrix or scipy.sparse.csc_matrix
The converted Scipy sparse matrix.
Raises
@@ -562,22 +566,26 @@ def asformat(self, format, **kwargs):
def maybe_densify(self, max_size=1000, min_density=0.25):
"""
- Converts this :obj:`GCXS` array to a :obj:`numpy.ndarray` if not too
+ Converts this [`sparse.GCXS`][] array to a [`numpy.ndarray`][] if not too
costly.
+
Parameters
----------
max_size : int
Maximum number of elements in output
min_density : float
Minimum density of output
+
Returns
-------
numpy.ndarray
The dense array.
+
See Also
--------
- sparse.GCXS.todense: Converts to Numpy function without checking the cost.
- sparse.COO.maybe_densify: The equivalent COO function.
+ - [`sparse.GCXS.todense`][] : Converts to a NumPy array without checking the cost.
+ - [`sparse.COO.maybe_densify`][] : The equivalent COO function.
+
Raises
-------
ValueError
@@ -591,7 +599,7 @@ def maybe_densify(self, max_size=1000, min_density=0.25):
def flatten(self, order="C"):
"""
- Returns a new :obj:`GCXS` array that is a flattened version of this array.
+ Returns a new [`sparse.GCXS`][] array that is a flattened version of this array.
Returns
-------
@@ -600,7 +608,7 @@ def flatten(self, order="C"):
Notes
-----
- The :code:`order` parameter is provided just for compatibility with
+ The `order` parameter is provided just for compatibility with
Numpy and isn't actually supported.
"""
if order not in {"C", None}:
@@ -610,7 +618,7 @@ def flatten(self, order="C"):
def reshape(self, shape, order="C", compressed_axes=None):
"""
- Returns a new :obj:`GCXS` array that is a reshaped version of this array.
+ Returns a new [`sparse.GCXS`][] array that is a reshaped version of this array.
Parameters
----------
@@ -627,12 +635,12 @@ def reshape(self, shape, order="C", compressed_axes=None):
See Also
--------
- numpy.ndarray.reshape : The equivalent Numpy function.
- sparse.COO.reshape : The equivalent COO function.
+ - [`numpy.ndarray.reshape`][] : The equivalent Numpy function.
+ - [`sparse.COO.reshape`][] : The equivalent COO function.
Notes
-----
- The :code:`order` parameter is provided just for compatibility with
+ The `order` parameter is provided just for compatibility with
Numpy and isn't actually supported.
"""
@@ -694,8 +702,8 @@ def transpose(self, axes=None, compressed_axes=None):
See Also
--------
- :obj:`GCXS.T` : A quick property to reverse the order of the axes.
- numpy.ndarray.transpose : Numpy equivalent function.
+ - [`sparse.GCXS.T`][] : A quick property to reverse the order of the axes.
+ - [`numpy.ndarray.transpose`][] : Numpy equivalent function.
"""
if axes is None:
axes = list(reversed(range(self.ndim)))
@@ -758,7 +766,7 @@ def _2d_transpose(self):
def dot(self, other):
"""
- Performs the equivalent of :code:`x.dot(y)` for :obj:`GCXS`.
+ Performs the equivalent of `x.dot(y)` for [`sparse.GCXS`][].
Parameters
----------
@@ -778,9 +786,9 @@ def dot(self, other):
See Also
--------
- dot : Equivalent function for two arguments.
- :obj:`numpy.dot` : Numpy equivalent function.
- scipy.sparse.csr_matrix.dot : Scipy equivalent function.
+ - [`sparse.dot`][] : Equivalent function for two arguments.
+ - [`numpy.dot`][] : Numpy equivalent function.
+ - [`scipy.sparse.coo_matrix.dot`][] : Scipy equivalent function.
"""
from .._common import dot
diff --git a/sparse/numba_backend/_coo/common.py b/sparse/numba_backend/_coo/common.py
index 3fcdd0e32..9ef5f5a91 100644
--- a/sparse/numba_backend/_coo/common.py
+++ b/sparse/numba_backend/_coo/common.py
@@ -21,7 +21,7 @@
def asCOO(x, name="asCOO", check=True):
"""
- Convert the input to :obj:`COO`. Passes through :obj:`COO` objects as-is.
+ Convert the input to [`sparse.COO`][]. Passes through [`sparse.COO`][] objects as-is.
Parameters
----------
@@ -35,12 +35,12 @@ def asCOO(x, name="asCOO", check=True):
Returns
-------
COO
- The converted :obj:`COO` array.
+ The converted [`sparse.COO`][] array.
Raises
------
ValueError
- If ``check`` is true and a dense input is supplied.
+ If `check` is true and a dense input is supplied.
"""
from .._common import _is_sparse
from .core import COO
@@ -149,11 +149,11 @@ def concatenate(arrays, axis=0):
Raises
------
ValueError
- If all elements of :code:`arrays` don't have the same fill-value.
+ If all elements of `arrays` don't have the same fill-value.
See Also
--------
- numpy.concatenate : NumPy equivalent function
+ [`numpy.concatenate`][] : NumPy equivalent function
"""
from .core import COO
@@ -212,11 +212,11 @@ def stack(arrays, axis=0):
Raises
------
ValueError
- If all elements of :code:`arrays` don't have the same fill-value.
+ If all elements of `arrays` don't have the same fill-value.
See Also
--------
- numpy.stack : NumPy equivalent function
+ [`numpy.stack`][] : NumPy equivalent function
"""
from .core import COO
@@ -270,11 +270,11 @@ def triu(x, k=0):
Raises
------
ValueError
- If :code:`x` doesn't have zero fill-values.
+ If `x` doesn't have zero fill-values.
See Also
--------
- numpy.triu : NumPy equivalent function
+ - [`numpy.triu`][] : NumPy equivalent function
"""
from .core import COO
@@ -311,11 +311,11 @@ def tril(x, k=0):
Raises
------
ValueError
- If :code:`x` doesn't have zero fill-values.
+ If `x` doesn't have zero fill-values.
See Also
--------
- numpy.tril : NumPy equivalent function
+ - [`numpy.tril`][] : NumPy equivalent function
"""
from .core import COO
@@ -354,8 +354,8 @@ def nansum(x, axis=None, keepdims=False, dtype=None, out=None):
See Also
--------
- :obj:`COO.sum` : Function without ``NaN`` skipping.
- numpy.nansum : Equivalent Numpy function.
+ - [`sparse.COO.sum`][] : Function without `NaN` skipping.
+ - [`numpy.nansum`][] : Equivalent Numpy function.
"""
assert out is None
x = asCOO(x, name="nansum")
@@ -364,7 +364,7 @@ def nansum(x, axis=None, keepdims=False, dtype=None, out=None):
def nanmean(x, axis=None, keepdims=False, dtype=None, out=None):
"""
- Performs a ``NaN`` skipping mean operation along the given axes. Uses all axes by default.
+ Performs a `NaN` skipping mean operation along the given axes. Uses all axes by default.
Parameters
----------
@@ -384,8 +384,8 @@ def nanmean(x, axis=None, keepdims=False, dtype=None, out=None):
See Also
--------
- :obj:`COO.mean` : Function without ``NaN`` skipping.
- numpy.nanmean : Equivalent Numpy function.
+ - [`sparse.COO.mean`][] : Function without `NaN` skipping.
+ - [`numpy.nanmean`][] : Equivalent Numpy function.
"""
assert out is None
x = asCOO(x, name="nanmean")
@@ -418,7 +418,7 @@ def nanmean(x, axis=None, keepdims=False, dtype=None, out=None):
def nanmax(x, axis=None, keepdims=False, dtype=None, out=None):
"""
- Maximize along the given axes, skipping ``NaN`` values. Uses all axes by default.
+ Maximize along the given axes, skipping `NaN` values. Uses all axes by default.
Parameters
----------
@@ -438,8 +438,8 @@ def nanmax(x, axis=None, keepdims=False, dtype=None, out=None):
See Also
--------
- :obj:`COO.max` : Function without ``NaN`` skipping.
- numpy.nanmax : Equivalent Numpy function.
+ - [`sparse.COO.max`][] : Function without `NaN` skipping.
+ - [`numpy.nanmax`][] : Equivalent Numpy function.
"""
assert out is None
x = asCOO(x, name="nanmax")
@@ -474,8 +474,8 @@ def nanmin(x, axis=None, keepdims=False, dtype=None, out=None):
See Also
--------
- :obj:`COO.min` : Function without ``NaN`` skipping.
- numpy.nanmin : Equivalent Numpy function.
+ - [`sparse.COO.min`][] : Function without `NaN` skipping.
+ - [`numpy.nanmin`][] : Equivalent Numpy function.
"""
assert out is None
x = asCOO(x, name="nanmin")
@@ -490,7 +490,7 @@ def nanmin(x, axis=None, keepdims=False, dtype=None, out=None):
def nanprod(x, axis=None, keepdims=False, dtype=None, out=None):
"""
- Performs a product operation along the given axes, skipping ``NaN`` values.
+ Performs a product operation along the given axes, skipping `NaN` values.
Uses all axes by default.
Parameters
@@ -511,8 +511,8 @@ def nanprod(x, axis=None, keepdims=False, dtype=None, out=None):
See Also
--------
- :obj:`COO.prod` : Function without ``NaN`` skipping.
- numpy.nanprod : Equivalent Numpy function.
+ - [`sparse.COO.prod`][] : Function without `NaN` skipping.
+ - [`numpy.nanprod`][] : Equivalent Numpy function.
"""
assert out is None
x = asCOO(x)
@@ -525,7 +525,7 @@ def where(condition, x=None, y=None):
If ``x`` and ``y`` are not given, returns indices where ``condition``
is nonzero.
- Performs the equivalent of :obj:`numpy.where`.
+ Performs the equivalent of [`numpy.where`][].
Parameters
----------
@@ -540,18 +540,18 @@ def where(condition, x=None, y=None):
Returns
-------
COO
- The output array with selected values if ``x`` and ``y`` are given;
+ The output array with selected values if `x` and `y` are given;
else where the array is nonzero.
Raises
------
ValueError
If the operation would produce a dense result; or exactly one of
- ``x`` and ``y`` are given.
+ `x` and `y` are given.
See Also
--------
- numpy.where : Equivalent Numpy function.
+ [`numpy.where`][] : Equivalent Numpy function.
"""
from .._umath import elemwise
@@ -584,7 +584,7 @@ def argwhere(a):
See Also
--------
- :obj:`where`, :obj:`COO.nonzero`
+ [`sparse.where`][], [`sparse.COO.nonzero`][]
Examples
--------
@@ -683,8 +683,8 @@ def _replace_nan(array, value):
def nanreduce(x, method, identity=None, axis=None, keepdims=False, **kwargs):
"""
- Performs an ``NaN`` skipping reduction on this array. See the documentation
- on :obj:`COO.reduce` for examples.
+ Performs a `NaN`-skipping reduction on this array. See the documentation
+ on [`sparse.COO.reduce`][] for examples.
Parameters
----------
@@ -714,7 +714,7 @@ def nanreduce(x, method, identity=None, axis=None, keepdims=False, **kwargs):
See Also
--------
- COO.reduce : Similar method without ``NaN`` skipping functionality.
+ [`sparse.COO.reduce`][] : Similar method without `NaN` skipping functionality.
"""
arr = _replace_nan(x, method.identity if identity is None else identity)
return arr.reduce(method, axis, keepdims, **kwargs)
@@ -802,7 +802,7 @@ def roll(a, shift, axis=None):
def diagonal(a, offset=0, axis1=0, axis2=1):
"""
- Extract diagonal from a COO array. The equivalent of :obj:`numpy.diagonal`.
+ Extract diagonal from a COO array. The equivalent of [`numpy.diagonal`][].
Parameters
----------
@@ -847,7 +847,7 @@ def diagonal(a, offset=0, axis1=0, axis2=1):
See Also
--------
- :obj:`numpy.diagonal` : NumPy equivalent function
+ [`numpy.diagonal`][] : NumPy equivalent function
"""
from .core import COO
@@ -870,8 +870,10 @@ def diagonalize(a, axis=0):
"""
Diagonalize a COO array. The new dimension is appended at the end.
- .. WARNING:: :obj:`diagonalize` is not :obj:`numpy` compatible as there is no direct :obj:`numpy` equivalent. The
- API may change in the future.
+ !!! warning
+
+ [`sparse.diagonalize`][] is not [numpy][] compatible as there is no direct [numpy][] equivalent. The
+ API may change in the future.
Parameters
----------
@@ -894,7 +896,7 @@ def diagonalize(a, axis=0):
>>> x_diag.shape
(2, 3, 4, 3)
- :obj:`diagonalize` is the inverse of :obj:`diagonal`
+ [`sparse.diagonalize`][] is the inverse of [`sparse.diagonal`][]
>>> a = sparse.random((3, 3, 3, 3, 3), density=0.3)
>>> a_diag = sparse.diagonalize(a, axis=2)
@@ -908,7 +910,7 @@ def diagonalize(a, axis=0):
See Also
--------
- :obj:`numpy.diag` : NumPy equivalent for 1D array
+ [`numpy.diag`][] : NumPy equivalent for 1D array
"""
from .core import COO, as_coo
@@ -922,7 +924,7 @@ def diagonalize(a, axis=0):
def isposinf(x, out=None):
"""
- Test element-wise for positive infinity, return result as sparse ``bool`` array.
+ Test element-wise for positive infinity, return result as sparse `bool` array.
Parameters
----------
@@ -940,7 +942,7 @@ def isposinf(x, out=None):
See Also
--------
- numpy.isposinf : The NumPy equivalent
+ [`numpy.isposinf`][] : The NumPy equivalent
"""
from .core import elemwise
@@ -949,7 +951,7 @@ def isposinf(x, out=None):
def isneginf(x, out=None):
"""
- Test element-wise for negative infinity, return result as sparse ``bool`` array.
+ Test element-wise for negative infinity, return result as sparse `bool` array.
Parameters
----------
@@ -967,7 +969,7 @@ def isneginf(x, out=None):
See Also
--------
- numpy.isneginf : The NumPy equivalent
+ [`numpy.isneginf`][] : The NumPy equivalent
"""
from .core import elemwise
@@ -980,7 +982,7 @@ def result_type(*arrays_and_dtypes):
See Also
--------
- numpy.result_type : The NumPy equivalent
+ [`numpy.result_type`][] : The NumPy equivalent
"""
return np.result_type(*(_as_result_type_arg(x) for x in arrays_and_dtypes))
diff --git a/sparse/numba_backend/_coo/core.py b/sparse/numba_backend/_coo/core.py
index 575cb483f..71271932f 100644
--- a/sparse/numba_backend/_coo/core.py
+++ b/sparse/numba_backend/_coo/core.py
@@ -37,23 +37,23 @@ class COO(SparseArray, NDArrayOperatorsMixin): # lgtm [py/missing-equals]
Should have shape (number of dimensions, number of non-zeros).
data : numpy.ndarray (COO.nnz,)
An array of Values. A scalar can also be supplied if the data is the same across
- all coordinates. If not given, defers to :obj:`as_coo`.
+ all coordinates. If not given, defers to [`sparse.as_coo`][].
shape : tuple[int] (COO.ndim,)
The shape of the array.
has_duplicates : bool, optional
- A value indicating whether the supplied value for :code:`coords` has
- duplicates. Note that setting this to `False` when :code:`coords` does have
- duplicates may result in undefined behaviour. See :obj:`COO.sum_duplicates`
+ A value indicating whether the supplied value for [`sparse.COO.coords`][] has
+ duplicates. Note that setting this to `False` when `coords` does have
+ duplicates may result in undefined behaviour.
sorted : bool, optional
A value indicating whether the values in `coords` are sorted. Note
- that setting this to `True` when :code:`coords` isn't sorted may
- result in undefined behaviour. See :obj:`COO.sort_indices`.
+ that setting this to `True` when [`sparse.COO.coords`][] isn't sorted may
+ result in undefined behaviour.
prune : bool, optional
A flag indicating whether or not we should prune any fill-values present in
- ``data``.
+ `data`.
cache : bool, optional
Whether to enable cacheing for various operations. See
- :obj:`COO.enable_caching`
+ [`sparse.COO.enable_caching`][].
fill_value: scalar, optional
The fill value for this array.
@@ -62,18 +62,18 @@ class COO(SparseArray, NDArrayOperatorsMixin): # lgtm [py/missing-equals]
coords : numpy.ndarray (ndim, nnz)
An array holding the coordinates of every nonzero element.
data : numpy.ndarray (nnz,)
- An array holding the values corresponding to :obj:`COO.coords`.
+ An array holding the values corresponding to [`sparse.COO.coords`][].
shape : tuple[int] (ndim,)
The dimensions of this array.
See Also
--------
- DOK : A mostly write-only sparse array.
- as_coo : Convert any given format to :obj:`COO`.
+ - [`sparse.DOK`][]: A mostly write-only sparse array.
+ - [`sparse.as_coo`][]: Convert any given format to [`sparse.COO`][].
Examples
--------
- You can create :obj:`COO` objects from Numpy arrays.
+ You can create [`sparse.COO`][] objects from Numpy arrays.
>>> x = np.eye(4, dtype=np.uint8)
>>> x[2, 3] = 5
@@ -86,7 +86,7 @@ class COO(SparseArray, NDArrayOperatorsMixin): # lgtm [py/missing-equals]
array([[0, 1, 2, 2, 3],
[0, 1, 2, 3, 3]])
- :obj:`COO` objects support basic arithmetic and binary operations.
+ [`sparse.COO`][] objects support basic arithmetic and binary operations.
>>> x2 = np.eye(4, dtype=np.uint8)
>>> x2[3, 2] = 5
@@ -113,12 +113,12 @@ class COO(SparseArray, NDArrayOperatorsMixin): # lgtm [py/missing-equals]
[0, 0, 1, 5],
[0, 0, 0, 0]], dtype=uint8)
- :obj:`COO` objects also support dot products and reductions.
+ [`sparse.COO`][] objects also support dot products and reductions.
>>> s.dot(s.T).sum(axis=0).todense() # doctest: +NORMALIZE_WHITESPACE
array([ 1, 1, 31, 6], dtype=uint64)
- You can use Numpy :code:`ufunc` operations on :obj:`COO` arrays as well.
+ You can use Numpy `ufunc` operations on [`sparse.COO`][] arrays as well.
>>> np.sum(s, axis=1).todense() # doctest: +NORMALIZE_WHITESPACE
array([1, 1, 6, 1], dtype=uint64)
@@ -134,7 +134,7 @@ class COO(SparseArray, NDArrayOperatorsMixin): # lgtm [py/missing-equals]
>>> np.exp(s)
- You can also create :obj:`COO` arrays from coordinates and data.
+ You can also create [`sparse.COO`][] arrays from coordinates and data.
>>> coords = [[0, 0, 0, 1, 1], [0, 1, 2, 0, 3], [0, 3, 2, 0, 1]]
>>> data = [1, 2, 3, 4, 5]
@@ -175,7 +175,7 @@ class COO(SparseArray, NDArrayOperatorsMixin): # lgtm [py/missing-equals]
array([[4, 0],
[0, 2]])
- You can convert :obj:`DOK` arrays to :obj:`COO` arrays.
+ You can convert [`sparse.DOK`][] arrays to [`sparse.COO`][] arrays.
>>> from sparse import DOK
>>> s6 = DOK((5, 5), dtype=np.int64)
@@ -345,14 +345,14 @@ def enable_caching(self):
@classmethod
def from_numpy(cls, x, fill_value=None, idx_dtype=None):
"""
- Convert the given :obj:`numpy.ndarray` to a :obj:`COO` object.
+ Convert the given [`numpy.ndarray`][] to a [`sparse.COO`][] object.
Parameters
----------
x : np.ndarray
The dense array to convert.
fill_value : scalar
- The fill value of the constructed :obj:`COO` array. Zero if
+ The fill value of the constructed [`sparse.COO`][] array. Zero if
unspecified.
Returns
@@ -390,8 +390,8 @@ def from_numpy(cls, x, fill_value=None, idx_dtype=None):
def todense(self):
"""
- Convert this :obj:`COO` array to a dense :obj:`numpy.ndarray`. Note that
- this may take a large amount of memory if the :obj:`COO` object's :code:`shape`
+ Convert this [`sparse.COO`][] array to a dense [`numpy.ndarray`][]. Note that
+ this may take a large amount of memory if the `COO` object's `shape`
is large.
Returns
@@ -401,8 +401,8 @@ def todense(self):
See Also
--------
- DOK.todense : Equivalent :obj:`DOK` array method.
- scipy.sparse.coo_matrix.todense : Equivalent Scipy method.
+ - [`sparse.DOK.todense`][] : Equivalent `DOK` array method.
+ - [`scipy.sparse.coo_matrix.todense`][] : Equivalent Scipy method.
Examples
--------
@@ -428,7 +428,7 @@ def todense(self):
@classmethod
def from_scipy_sparse(cls, x, /, *, fill_value=None):
"""
- Construct a :obj:`COO` array from a :obj:`scipy.sparse.spmatrix`
+ Construct a [`sparse.COO`][] array from a [`scipy.sparse.spmatrix`][]
Parameters
----------
@@ -440,7 +440,7 @@ def from_scipy_sparse(cls, x, /, *, fill_value=None):
Returns
-------
COO
- The converted :obj:`COO` object.
+ The converted [`sparse.COO`][] object.
Examples
--------
@@ -465,13 +465,13 @@ def from_scipy_sparse(cls, x, /, *, fill_value=None):
@classmethod
def from_iter(cls, x, shape=None, fill_value=None, dtype=None):
"""
- Converts an iterable in certain formats to a :obj:`COO` array. See examples
+ Converts an iterable in certain formats to a [`sparse.COO`][] array. See examples
for details.
Parameters
----------
x : Iterable or Iterator
- The iterable to convert to :obj:`COO`.
+ The iterable to convert to [`sparse.COO`][].
shape : tuple[int], optional
The shape of the array.
fill_value : scalar
@@ -482,11 +482,11 @@ def from_iter(cls, x, shape=None, fill_value=None, dtype=None):
Returns
-------
out : COO
- The output :obj:`COO` array.
+ The output [`sparse.COO`][] array.
Examples
--------
- You can convert items of the format ``[((i, j, k), value), ((i, j, k), value)]`` to :obj:`COO`.
+ You can convert items of the format `[((i, j, k), value), ((i, j, k), value)]` to [`sparse.COO`][].
Here, the first part represents the coordinate and the second part represents the value.
>>> x = [((0, 0), 1), ((1, 1), 1)]
@@ -511,7 +511,7 @@ def from_iter(cls, x, shape=None, fill_value=None, dtype=None):
array([[1, 0],
[0, 1]])
- You can also pass in a :obj:`collections.Iterator` object.
+ You can also pass in a [collections.abc.Iterator][] object.
>>> x = [((0, 0), 1), ((1, 1), 1)].__iter__()
>>> s = COO.from_iter(x)
@@ -560,8 +560,8 @@ def dtype(self):
See Also
--------
- numpy.ndarray.dtype : Numpy equivalent property.
- scipy.sparse.coo_matrix.dtype : Scipy equivalent property.
+ - [`numpy.ndarray.dtype`][] : Numpy equivalent property.
+ - [`scipy.sparse.coo_matrix.dtype`][] : Scipy equivalent property.
Examples
--------
@@ -578,7 +578,7 @@ def dtype(self):
def nnz(self):
"""
The number of nonzero elements in this array. Note that any duplicates in
- :code:`coords` are counted multiple times. To avoid this, call :obj:`COO.sum_duplicates`.
+ `coords` are counted multiple times.
Returns
-------
@@ -587,9 +587,9 @@ def nnz(self):
See Also
--------
- DOK.nnz : Equivalent :obj:`DOK` array property.
- numpy.count_nonzero : A similar Numpy function.
- scipy.sparse.coo_matrix.nnz : The Scipy equivalent property.
+ - [`sparse.DOK.nnz`][] : Equivalent [`sparse.DOK`][] array property.
+ - [`numpy.count_nonzero`][] : A similar Numpy function.
+ - [`scipy.sparse.coo_matrix.nnz`][] : The Scipy equivalent property.
Examples
--------
@@ -613,8 +613,8 @@ def format(self):
str
The storage format of this array.
See Also
- -------
- scipy.sparse.dok_matrix.format : The Scipy equivalent property.
+ --------
+ [`scipy.sparse.dok_matrix.format`][] : The Scipy equivalent property.
Examples
-------
>>> import sparse
@@ -640,7 +640,7 @@ def nbytes(self):
See Also
--------
- numpy.ndarray.nbytes : The equivalent Numpy property.
+ [`numpy.ndarray.nbytes`][] : The equivalent Numpy property.
Examples
--------
@@ -735,12 +735,12 @@ def transpose(self, axes=None):
See Also
--------
- :obj:`COO.T` : A quick property to reverse the order of the axes.
- numpy.ndarray.transpose : Numpy equivalent function.
+ - [`sparse.COO.T`][] : A quick property to reverse the order of the axes.
+ - [`numpy.ndarray.transpose`][] : Numpy equivalent function.
Examples
--------
- We can change the order of the dimensions of any :obj:`COO` array with this
+ We can change the order of the dimensions of any [`sparse.COO`][] array with this
function.
>>> x = np.add.outer(np.arange(5), np.arange(5)[::-1])
@@ -814,14 +814,14 @@ def T(self):
See Also
--------
- :obj:`COO.transpose` :
+ - [`sparse.COO.transpose`][] :
A method where you can specify the order of the axes.
- numpy.ndarray.T :
+ - [`numpy.ndarray.T`][] :
Numpy equivalent property.
Examples
--------
- We can change the order of the dimensions of any :obj:`COO` array with this
+ We can change the order of the dimensions of any [`sparse.COO`][] array with this
function.
>>> x = np.add.outer(np.arange(5), np.arange(5)[::-1])
@@ -888,7 +888,7 @@ def swapaxes(self, axis1, axis2):
def dot(self, other):
"""
- Performs the equivalent of :code:`x.dot(y)` for :obj:`COO`.
+ Performs the equivalent of `x.dot(y)` for [`sparse.COO`][].
Parameters
----------
@@ -908,9 +908,9 @@ def dot(self, other):
See Also
--------
- dot : Equivalent function for two arguments.
- :obj:`numpy.dot` : Numpy equivalent function.
- scipy.sparse.coo_matrix.dot : Scipy equivalent function.
+ - [`sparse.dot`][] : Equivalent function for two arguments.
+ - [`numpy.dot`][] : Numpy equivalent function.
+ - [`scipy.sparse.coo_matrix.dot`][] : Scipy equivalent function.
Examples
--------
@@ -952,7 +952,7 @@ def linear_loc(self):
See Also
--------
- :obj:`numpy.flatnonzero` : Equivalent Numpy function.
+ [`numpy.flatnonzero`][] : Equivalent Numpy function.
Examples
--------
@@ -969,7 +969,7 @@ def linear_loc(self):
def flatten(self, order="C"):
"""
- Returns a new :obj:`COO` array that is a flattened version of this array.
+ Returns a new [`sparse.COO`][] array that is a flattened version of this array.
Returns
-------
@@ -978,7 +978,7 @@ def flatten(self, order="C"):
Notes
-----
- The :code:`order` parameter is provided just for compatibility with
+ The `order` parameter is provided just for compatibility with
Numpy and isn't actually supported.
Examples
@@ -995,7 +995,7 @@ def flatten(self, order="C"):
def reshape(self, shape, order="C"):
"""
- Returns a new :obj:`COO` array that is a reshaped version of this array.
+ Returns a new [`sparse.COO`][] array that is a reshaped version of this array.
Parameters
----------
@@ -1009,11 +1009,11 @@ def reshape(self, shape, order="C"):
See Also
--------
- numpy.ndarray.reshape : The equivalent Numpy function.
+ [`numpy.ndarray.reshape`][] : The equivalent Numpy function.
Notes
-----
- The :code:`order` parameter is provided just for compatibility with
+ The `order` parameter is provided just for compatibility with
Numpy and isn't actually supported.
Examples
@@ -1133,7 +1133,7 @@ def resize(self, *args, refcheck=True, coords_dtype=np.intp):
See Also
--------
- numpy.ndarray.resize : The equivalent Numpy function.
+ [`numpy.ndarray.resize`][] : The equivalent Numpy function.
"""
warnings.warn("resize is deprecated on all SpraseArray objects.", DeprecationWarning, stacklevel=1)
@@ -1171,7 +1171,7 @@ def resize(self, *args, refcheck=True, coords_dtype=np.intp):
def to_scipy_sparse(self, /, *, accept_fv=None):
"""
- Converts this :obj:`COO` object into a :obj:`scipy.sparse.coo_matrix`.
+ Converts this [`sparse.COO`][] object into a [`scipy.sparse.coo_matrix`][].
Parameters
----------
@@ -1180,7 +1180,7 @@ def to_scipy_sparse(self, /, *, accept_fv=None):
Returns
-------
- :obj:`scipy.sparse.coo_matrix`
+ scipy.sparse.coo_matrix
The converted Scipy sparse matrix.
Raises
@@ -1192,8 +1192,8 @@ def to_scipy_sparse(self, /, *, accept_fv=None):
See Also
--------
- COO.tocsr : Convert to a :obj:`scipy.sparse.csr_matrix`.
- COO.tocsc : Convert to a :obj:`scipy.sparse.csc_matrix`.
+ - [`sparse.COO.tocsr`][] : Convert to a [`scipy.sparse.csr_matrix`][].
+ - [`sparse.COO.tocsc`][] : Convert to a [`scipy.sparse.csc_matrix`][].
"""
import scipy.sparse
@@ -1221,7 +1221,7 @@ def _tocsr(self):
def tocsr(self):
"""
- Converts this array to a :obj:`scipy.sparse.csr_matrix`.
+ Converts this array to a [`scipy.sparse.csr_matrix`][].
Returns
-------
@@ -1237,9 +1237,9 @@ def tocsr(self):
See Also
--------
- COO.tocsc : Convert to a :obj:`scipy.sparse.csc_matrix`.
- COO.to_scipy_sparse : Convert to a :obj:`scipy.sparse.coo_matrix`.
- scipy.sparse.coo_matrix.tocsr : Equivalent Scipy function.
+ - [`sparse.COO.tocsc`][] : Convert to a [`scipy.sparse.csc_matrix`][].
+ - [`sparse.COO.to_scipy_sparse`][] : Convert to a [`scipy.sparse.coo_matrix`][].
+ - [`scipy.sparse.coo_matrix.tocsr`][] : Equivalent Scipy function.
"""
check_zero_fill_value(self)
@@ -1261,7 +1261,7 @@ def tocsr(self):
def tocsc(self):
"""
- Converts this array to a :obj:`scipy.sparse.csc_matrix`.
+ Converts this array to a [`scipy.sparse.csc_matrix`][].
Returns
-------
@@ -1277,9 +1277,9 @@ def tocsc(self):
See Also
--------
- COO.tocsr : Convert to a :obj:`scipy.sparse.csr_matrix`.
- COO.to_scipy_sparse : Convert to a :obj:`scipy.sparse.coo_matrix`.
- scipy.sparse.coo_matrix.tocsc : Equivalent Scipy function.
+ - [`sparse.COO.tocsr`][] : Convert to a [`scipy.sparse.csr_matrix`][].
+ - [`sparse.COO.to_scipy_sparse`][] : Convert to a [`scipy.sparse.coo_matrix`][].
+ - [`scipy.sparse.coo_matrix.tocsc`][] : Equivalent Scipy function.
"""
check_zero_fill_value(self)
@@ -1381,7 +1381,7 @@ def _prune(self):
def broadcast_to(self, shape):
"""
- Performs the equivalent of :obj:`numpy.broadcast_to` for :obj:`COO`. Note that
+ Performs the equivalent of [`numpy.broadcast_to`][] for [`sparse.COO`][]. Note that
this function returns a new array instead of a view.
Parameters
@@ -1401,13 +1401,13 @@ def broadcast_to(self, shape):
See Also
--------
- :obj:`numpy.broadcast_to` : NumPy equivalent function
+ [`numpy.broadcast_to`][] : NumPy equivalent function
"""
return broadcast_to(self, shape)
def maybe_densify(self, max_size=1000, min_density=0.25):
"""
- Converts this :obj:`COO` array to a :obj:`numpy.ndarray` if not too
+ Converts this [`sparse.COO`][] array to a [`numpy.ndarray`][] if not too
costly.
Parameters
@@ -1459,12 +1459,12 @@ def nonzero(self):
Returns
-------
- idx : tuple[numpy.ndarray]
+ idx : tuple[`numpy.ndarray`]
The indices where this array is nonzero.
See Also
--------
- :obj:`numpy.ndarray.nonzero` : NumPy equivalent function
+ [`numpy.ndarray.nonzero`][] : NumPy equivalent function
Raises
------
@@ -1556,7 +1556,7 @@ def isnan(self):
def as_coo(x, shape=None, fill_value=None, idx_dtype=None):
"""
- Converts any given format to :obj:`COO`. See the "See Also" section for details.
+ Converts any given format to [`sparse.COO`][]. See the "See Also" section for details.
Parameters
----------
@@ -1568,18 +1568,18 @@ def as_coo(x, shape=None, fill_value=None, idx_dtype=None):
Returns
-------
out : COO
- The converted :obj:`COO` array.
+ The converted [`sparse.COO`][] array.
See Also
--------
- SparseArray.asformat :
+ - [`sparse.SparseArray.asformat`][] :
A utility function to convert between formats in this library.
- COO.from_numpy :
- Convert a Numpy array to :obj:`COO`.
- COO.from_scipy_sparse :
- Convert a SciPy sparse matrix to :obj:`COO`.
- COO.from_iter :
- Convert an iterable to :obj:`COO`.
+ - [`sparse.COO.from_numpy`][] :
+ Convert a Numpy array to [`sparse.COO`][].
+ - [`sparse.COO.from_scipy_sparse`][] :
+ Convert a SciPy sparse matrix to [`sparse.COO`][].
+ - [`sparse.COO.from_iter`][] :
+ Convert an iterable to [`sparse.COO`][].
"""
from .._common import _is_scipy_sparse_obj
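A brief illustration of `as_coo` dispatching on different input types, following its "See Also" entries; the scipy input here is just an arbitrary random matrix.

```python
import numpy as np
import scipy.sparse
import sparse

dense = np.eye(3)
spmat = scipy.sparse.random(3, 3, density=0.5, format="csr")

# as_coo dispatches on the input type and returns a COO array either way.
a = sparse.as_coo(dense)
b = sparse.as_coo(spmat)

assert isinstance(a, sparse.COO) and isinstance(b, sparse.COO)
assert a.nnz == 3  # the three ones on the diagonal of eye(3)
```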
diff --git a/sparse/numba_backend/_dok.py b/sparse/numba_backend/_dok.py
index f61ac3b70..9c4e601d1 100644
--- a/sparse/numba_backend/_dok.py
+++ b/sparse/numba_backend/_dok.py
@@ -28,7 +28,7 @@ class DOK(SparseArray, NDArrayOperatorsMixin):
Attributes
----------
dtype : numpy.dtype
- The datatype of this array. Can be :code:`None` if no elements
+ The datatype of this array. Can be `None` if no elements
have been set yet.
shape : tuple[int]
The shape of this array.
@@ -38,11 +38,11 @@ class DOK(SparseArray, NDArrayOperatorsMixin):
See Also
--------
- COO : A read-only sparse array.
+ [`sparse.COO`][] : A read-only sparse array.
Examples
--------
- You can create :obj:`DOK` objects from Numpy arrays.
+ You can create [`sparse.DOK`][] objects from Numpy arrays.
>>> x = np.eye(5, dtype=np.uint8)
>>> x[2, 3] = 5
@@ -57,7 +57,7 @@ class DOK(SparseArray, NDArrayOperatorsMixin):
>>> s2
- You can convert :obj:`DOK` arrays to :obj:`COO` arrays, or :obj:`numpy.ndarray`
+ You can convert [`sparse.DOK`][] arrays to [`sparse.COO`][] arrays, or [`numpy.ndarray`][]
objects.
>>> from sparse import COO
@@ -78,7 +78,7 @@ class DOK(SparseArray, NDArrayOperatorsMixin):
>>> s5
- You can also create :obj:`DOK` arrays from a shape and a dict of
+ You can also create [`sparse.DOK`][] arrays from a shape and a dict of
values. Zeros are automatically ignored.
>>> values = {
@@ -133,7 +133,7 @@ def __init__(self, shape, data=None, dtype=None, fill_value=None):
@classmethod
def from_scipy_sparse(cls, x, /, *, fill_value=None):
"""
- Create a :obj:`DOK` array from a :obj:`scipy.sparse.spmatrix`.
+ Create a [`sparse.DOK`][] array from a [`scipy.sparse.spmatrix`][].
Parameters
----------
@@ -145,7 +145,7 @@ def from_scipy_sparse(cls, x, /, *, fill_value=None):
Returns
-------
DOK
- The equivalent :obj:`DOK` array.
+ The equivalent [`sparse.DOK`][] array.
Examples
--------
@@ -161,7 +161,7 @@ def from_scipy_sparse(cls, x, /, *, fill_value=None):
@classmethod
def from_coo(cls, x):
"""
- Get a :obj:`DOK` array from a :obj:`COO` array.
+ Get a [`sparse.DOK`][] array from a [`sparse.COO`][] array.
Parameters
----------
@@ -171,7 +171,7 @@ def from_coo(cls, x):
Returns
-------
DOK
- The equivalent :obj:`DOK` array.
+ The equivalent [`sparse.DOK`][] array.
Examples
--------
@@ -190,12 +190,12 @@ def from_coo(cls, x):
def to_coo(self):
"""
- Convert this :obj:`DOK` array to a :obj:`COO` array.
+ Convert this [`sparse.DOK`][] array to a [`sparse.COO`][] array.
Returns
-------
COO
- The equivalent :obj:`COO` array.
+ The equivalent [`sparse.COO`][] array.
Examples
--------
@@ -214,7 +214,7 @@ def to_coo(self):
@classmethod
def from_numpy(cls, x):
"""
- Get a :obj:`DOK` array from a Numpy array.
+ Get a [`sparse.DOK`][] array from a Numpy array.
Parameters
----------
@@ -224,7 +224,7 @@ def from_numpy(cls, x):
Returns
-------
DOK
- The equivalent :obj:`DOK` array.
+ The equivalent [`sparse.DOK`][] array.
Examples
--------
@@ -255,9 +255,9 @@ def nnz(self):
See Also
--------
- COO.nnz : Equivalent :obj:`COO` array property.
- numpy.count_nonzero : A similar Numpy function.
- scipy.sparse.dok_matrix.nnz : The Scipy equivalent property.
+ - [`sparse.COO.nnz`][] : Equivalent [`sparse.COO`][] array property.
+ - [`numpy.count_nonzero`][] : A similar Numpy function.
+ - [`scipy.sparse.dok_matrix.nnz`][] : The Scipy equivalent property.
Examples
--------
@@ -281,7 +281,7 @@ def format(self):
The storage format of this array.
See Also
-------
- scipy.sparse.dok_matrix.format : The Scipy equivalent property.
+ [`scipy.sparse.dok_matrix.format`][] : The Scipy equivalent property.
Examples
-------
>>> import sparse
@@ -307,7 +307,7 @@ def nbytes(self):
See Also
--------
- numpy.ndarray.nbytes : The equivalent Numpy property.
+ [`numpy.ndarray.nbytes`][] : The equivalent Numpy property.
Examples
--------
@@ -439,7 +439,7 @@ def __str__(self):
def todense(self):
"""
- Convert this :obj:`DOK` array into a Numpy array.
+ Convert this [`sparse.DOK`][] array into a Numpy array.
Returns
-------
@@ -448,8 +448,8 @@ def todense(self):
See Also
--------
- COO.todense : Equivalent :obj:`COO` array method.
- scipy.sparse.dok_matrix.todense : Equivalent Scipy method.
+ - [`sparse.COO.todense`][] : Equivalent `COO` array method.
+ - [`scipy.sparse.dok_matrix.todense`][] : Equivalent Scipy method.
Examples
--------
@@ -511,7 +511,7 @@ def asformat(self, format, **kwargs):
def reshape(self, shape, order="C"):
"""
- Returns a new :obj:`DOK` array that is a reshaped version of this array.
+ Returns a new [`sparse.DOK`][] array that is a reshaped version of this array.
Parameters
----------
@@ -525,11 +525,11 @@ def reshape(self, shape, order="C"):
See Also
--------
- numpy.ndarray.reshape : The equivalent Numpy function.
+ [`numpy.ndarray.reshape`][] : The equivalent Numpy function.
Notes
-----
- The :code:`order` parameter is provided just for compatibility with
+ The `order` parameter is provided just for compatibility with
Numpy and isn't actually supported.
Examples
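A small sketch tying together the DOK constructors and conversions documented in this file: building from a shape and a dict, writing elements, and converting to COO or dense form.

```python
import sparse

# Build a DOK array from a shape and a dict of index -> value pairs;
# explicit zeros in the dict are dropped automatically.
values = {(0, 0): 1, (1, 2): 5, (2, 1): 0}
s = sparse.DOK((3, 3), values)
assert s.nnz == 2  # the zero at (2, 1) is ignored

# DOK supports element-wise assignment, unlike the read-only COO format.
s[2, 2] = 7
assert s.nnz == 3

# Convert to COO (or a dense numpy array) once writing is done.
c = s.to_coo()
d = s.todense()
assert isinstance(c, sparse.COO)
assert d[1, 2] == 5
```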
diff --git a/sparse/numba_backend/_io.py b/sparse/numba_backend/_io.py
index 84b873d50..24d9f1db5 100644
--- a/sparse/numba_backend/_io.py
+++ b/sparse/numba_backend/_io.py
@@ -5,17 +5,17 @@
def save_npz(filename, matrix, compressed=True):
- """Save a sparse matrix to disk in numpy's ``.npz`` format.
- Note: This is not binary compatible with scipy's ``save_npz()``.
+ """Save a sparse matrix to disk in numpy's `.npz` format.
+ Note: This is not binary compatible with scipy's `save_npz()`.
This binary format is not currently stable. Will save a file
- that can only be opend with this package's ``load_npz()``.
+ that can only be opened with this package's `load_npz()`.
Parameters
----------
filename : string or file
Either the file name (string) or an open file (file-like object)
where the data will be saved. If file is a string or a Path, the
- ``.npz`` extension will be appended to the file name if it is not
+ `.npz` extension will be appended to the file name if it is not
already there
matrix : SparseArray
The matrix to save to disk
@@ -41,11 +41,11 @@ def save_npz(filename, matrix, compressed=True):
See Also
--------
- load_npz
- scipy.sparse.save_npz
- scipy.sparse.load_npz
- numpy.savez
- numpy.load
+ - [`sparse.load_npz`][]
+ - [`scipy.sparse.save_npz`][]
+ - [`scipy.sparse.load_npz`][]
+ - [`numpy.savez`][]
+ - [`numpy.load`][]
"""
@@ -69,8 +69,8 @@ def save_npz(filename, matrix, compressed=True):
def load_npz(filename):
- """Load a sparse matrix in numpy's ``.npz`` format from disk.
- Note: This is not binary compatible with scipy's ``save_npz()``
+ """Load a sparse matrix in numpy's `.npz` format from disk.
+ Note: This is not binary compatible with scipy's `save_npz()`
output. This binary format is not currently stable.
Will only load files saved by this package.
@@ -78,24 +78,24 @@ def load_npz(filename):
----------
filename : file-like object, string, or pathlib.Path
The file to read. File-like objects must support the
- ``seek()`` and ``read()`` methods.
+ `seek()` and `read()` methods.
Returns
-------
SparseArray
- The sparse matrix at path ``filename``.
+ The sparse matrix at path `filename`.
Examples
--------
- See :obj:`save_npz` for usage examples.
+ See [`sparse.save_npz`][] for usage examples.
See Also
--------
- save_npz
- scipy.sparse.save_npz
- scipy.sparse.load_npz
- numpy.savez
- numpy.load
+ - [`sparse.save_npz`][]
+ - [`scipy.sparse.save_npz`][]
+ - [`scipy.sparse.load_npz`][]
+ - [`numpy.savez`][]
+ - [`numpy.load`][]
"""
diff --git a/sparse/numba_backend/_sparse_array.py b/sparse/numba_backend/_sparse_array.py
index 763a779e4..b402f22a6 100644
--- a/sparse/numba_backend/_sparse_array.py
+++ b/sparse/numba_backend/_sparse_array.py
@@ -63,7 +63,7 @@ def to_device(self, device, /, *, stream=None):
def nnz(self):
"""
The number of nonzero elements in this array. Note that any duplicates in
- :code:`coords` are counted multiple times. To avoid this, call :obj:`COO.sum_duplicates`.
+ `coords` are counted multiple times.
Returns
-------
@@ -72,9 +72,9 @@ def nnz(self):
See Also
--------
- DOK.nnz : Equivalent :obj:`DOK` array property.
- numpy.count_nonzero : A similar Numpy function.
- scipy.sparse.coo_matrix.nnz : The Scipy equivalent property.
+ - [`sparse.DOK.nnz`][] : Equivalent [`sparse.DOK`][] array property.
+ - [`numpy.count_nonzero`][] : A similar Numpy function.
+ - [`scipy.sparse.coo_matrix.nnz`][] : The Scipy equivalent property.
Examples
--------
@@ -102,8 +102,8 @@ def ndim(self):
See Also
--------
- DOK.ndim : Equivalent property for :obj:`DOK` arrays.
- numpy.ndarray.ndim : Numpy equivalent property.
+ - [`sparse.DOK.ndim`][] : Equivalent property for [`sparse.DOK`][] arrays.
+ - [`numpy.ndarray.ndim`][] : Numpy equivalent property.
Examples
--------
@@ -130,7 +130,7 @@ def size(self):
See Also
--------
- numpy.ndarray.size : Numpy equivalent property.
+ [`numpy.ndarray.size`][] : Numpy equivalent property.
Examples
--------
@@ -157,8 +157,8 @@ def density(self):
See Also
--------
- COO.size : Number of elements.
- COO.nnz : Number of nonzero elements.
+ - [`sparse.COO.size`][] : Number of elements.
+ - [`sparse.COO.nnz`][] : Number of nonzero elements.
Examples
--------
@@ -240,7 +240,7 @@ def asformat(self, format):
@abstractmethod
def todense(self):
"""
- Convert this :obj:`SparseArray` array to a dense :obj:`numpy.ndarray`. Note that
+ Convert this [`sparse.SparseArray`][] array to a dense [`numpy.ndarray`][]. Note that
this may take a large amount of memory and time.
Returns
@@ -250,9 +250,9 @@ def todense(self):
See Also
--------
- DOK.todense : Equivalent :obj:`DOK` array method.
- COO.todense : Equivalent :obj:`COO` array method.
- scipy.sparse.coo_matrix.todense : Equivalent Scipy method.
+ - [`sparse.DOK.todense`][] : Equivalent `DOK` array method.
+ - [`sparse.COO.todense`][] : Equivalent `COO` array method.
+ - [`scipy.sparse.coo_matrix.todense`][] : Equivalent Scipy method.
Examples
--------
@@ -384,9 +384,9 @@ def reduce(self, method, axis=(0,), keepdims=False, **kwargs):
See Also
--------
- numpy.ufunc.reduce : A similar Numpy method.
- COO.reduce : This method implemented on COO arrays.
- GCXS.reduce : This method implemented on GCXS arrays.
+ - [`numpy.ufunc.reduce`][] : A similar Numpy method.
+ - [`sparse.COO.reduce`][] : This method implemented on COO arrays.
+ - [`sparse.GCXS.reduce`][] : This method implemented on GCXS arrays.
"""
axis = normalize_axis(axis, self.ndim)
zero_reduce_result = method.reduce([self.fill_value, self.fill_value], **kwargs)
@@ -450,8 +450,8 @@ def sum(self, axis=None, keepdims=False, dtype=None, out=None):
See Also
--------
- :obj:`numpy.sum` : Equivalent numpy function.
- scipy.sparse.coo_matrix.sum : Equivalent Scipy function.
+ - [`numpy.sum`][] : Equivalent numpy function.
+ - [`scipy.sparse.coo_matrix.sum`][] : Equivalent Scipy function.
"""
return np.add.reduce(self, out=out, axis=axis, keepdims=keepdims, dtype=dtype)
@@ -475,8 +475,8 @@ def max(self, axis=None, keepdims=False, out=None):
See Also
--------
- :obj:`numpy.max` : Equivalent numpy function.
- scipy.sparse.coo_matrix.max : Equivalent Scipy function.
+ - [`numpy.max`][] : Equivalent numpy function.
+ - [`scipy.sparse.coo_matrix.max`][] : Equivalent Scipy function.
"""
return np.maximum.reduce(self, out=out, axis=axis, keepdims=keepdims)
@@ -500,7 +500,7 @@ def any(self, axis=None, keepdims=False, out=None):
See Also
--------
- :obj:`numpy.any` : Equivalent numpy function.
+ [`numpy.any`][] : Equivalent numpy function.
"""
return np.logical_or.reduce(self, out=out, axis=axis, keepdims=keepdims)
@@ -522,7 +522,7 @@ def all(self, axis=None, keepdims=False, out=None):
See Also
--------
- :obj:`numpy.all` : Equivalent numpy function.
+ [`numpy.all`][] : Equivalent numpy function.
"""
return np.logical_and.reduce(self, out=out, axis=axis, keepdims=keepdims)
@@ -546,8 +546,8 @@ def min(self, axis=None, keepdims=False, out=None):
See Also
--------
- :obj:`numpy.min` : Equivalent numpy function.
- scipy.sparse.coo_matrix.min : Equivalent Scipy function.
+ - [`numpy.min`][] : Equivalent numpy function.
+ - [`scipy.sparse.coo_matrix.min`][] : Equivalent Scipy function.
"""
return np.minimum.reduce(self, out=out, axis=axis, keepdims=keepdims)
@@ -573,7 +573,7 @@ def prod(self, axis=None, keepdims=False, dtype=None, out=None):
See Also
--------
- :obj:`numpy.prod` : Equivalent numpy function.
+ [`numpy.prod`][] : Equivalent numpy function.
"""
return np.multiply.reduce(self, out=out, axis=axis, keepdims=keepdims, dtype=dtype)
@@ -583,9 +583,9 @@ def round(self, decimals=0, out=None):
See Also
--------
- :obj:`numpy.round` :
+ - [`numpy.round`][] :
NumPy equivalent ufunc.
- :obj:`COO.elemwise` :
+ - [`sparse.elemwise`][] :
Apply an arbitrary element-wise function to one or two
arguments.
"""
@@ -604,8 +604,8 @@ def clip(self, min=None, max=None, out=None):
See Also
--------
- sparse.clip : For full documentation and more details.
- numpy.clip : Equivalent NumPy function.
+ - [`sparse.clip`][] : For full documentation and more details.
+ - [`numpy.clip`][] : Equivalent NumPy function.
"""
if min is None and max is None:
raise ValueError("One of max or min must be given.")
@@ -619,11 +619,11 @@ def astype(self, dtype, casting="unsafe", copy=True):
See Also
--------
- scipy.sparse.coo_matrix.astype :
+ - [`scipy.sparse.coo_matrix.astype`][] :
SciPy sparse equivalent function
- numpy.ndarray.astype :
+ - [`numpy.ndarray.astype`][] :
NumPy equivalent ufunc.
- :obj:`COO.elemwise` :
+ - [`sparse.elemwise`][] :
Apply an arbitrary element-wise function to one or two
arguments.
"""
@@ -652,19 +652,17 @@ def mean(self, axis=None, keepdims=False, dtype=None, out=None):
See Also
--------
- numpy.ndarray.mean : Equivalent numpy method.
- scipy.sparse.coo_matrix.mean : Equivalent Scipy method.
+ - [`numpy.ndarray.mean`][] : Equivalent numpy method.
+ - [`scipy.sparse.coo_matrix.mean`][] : Equivalent Scipy method.
Notes
-----
- * This function internally calls :obj:`COO.sum_duplicates` to bring the
- array into canonical form.
- * The :code:`out` parameter is provided just for compatibility with
+ * The `out` parameter is provided just for compatibility with
Numpy and isn't actually supported.
Examples
--------
- You can use :obj:`COO.mean` to compute the mean of an array across any
+ You can use [`sparse.COO.mean`][] to compute the mean of an array across any
dimension.
>>> from sparse import COO
@@ -674,7 +672,7 @@ def mean(self, axis=None, keepdims=False, dtype=None, out=None):
>>> s2.todense() # doctest: +SKIP
array([0.5, 1.5, 0., 0.])
- You can also use the :code:`keepdims` argument to keep the dimensions
+ You can also use the `keepdims` argument to keep the dimensions
after the mean.
>>> s3 = s.mean(axis=0, keepdims=True)
@@ -740,16 +738,11 @@ def var(self, axis=None, dtype=None, out=None, ddof=0, keepdims=False):
See Also
--------
- numpy.ndarray.var : Equivalent numpy method.
-
- Notes
- -----
- * This function internally calls :obj:`COO.sum_duplicates` to bring the
- array into canonical form.
+ [`numpy.ndarray.var`][] : Equivalent numpy method.
Examples
--------
- You can use :obj:`COO.var` to compute the variance of an array across any
+ You can use [`sparse.COO.var`][] to compute the variance of an array across any
dimension.
>>> from sparse import COO
@@ -759,7 +752,7 @@ def var(self, axis=None, dtype=None, out=None, ddof=0, keepdims=False):
>>> s2.todense() # doctest: +SKIP
array([0.6875, 0.1875])
- You can also use the :code:`keepdims` argument to keep the dimensions
+ You can also use the `keepdims` argument to keep the dimensions
after the variance.
>>> s3 = s.var(axis=0, keepdims=True)
@@ -837,16 +830,11 @@ def std(self, axis=None, dtype=None, out=None, ddof=0, keepdims=False):
See Also
--------
- numpy.ndarray.std : Equivalent numpy method.
-
- Notes
- -----
- * This function internally calls :obj:`COO.sum_duplicates` to bring the
- array into canonical form.
+ [`numpy.ndarray.std`][] : Equivalent numpy method.
Examples
--------
- You can use :obj:`COO.std` to compute the standard deviation of an array
+ You can use [`sparse.COO.std`][] to compute the standard deviation of an array
across any dimension.
>>> from sparse import COO
@@ -856,7 +844,7 @@ def std(self, axis=None, dtype=None, out=None, ddof=0, keepdims=False):
>>> s2.todense() # doctest: +SKIP
array([0.8291562, 0.4330127])
- You can also use the :code:`keepdims` argument to keep the dimensions
+ You can also use the `keepdims` argument to keep the dimensions
after the standard deviation.
>>> s3 = s.std(axis=0, keepdims=True)
@@ -901,8 +889,8 @@ def real(self):
See Also
--------
- numpy.ndarray.real : NumPy equivalent attribute.
- numpy.real : NumPy equivalent function.
+ - [`numpy.ndarray.real`][] : NumPy equivalent attribute.
+ - [`numpy.real`][] : NumPy equivalent function.
"""
return self.__array_ufunc__(np.real, "__call__", self)
@@ -928,8 +916,8 @@ def imag(self):
See Also
--------
- numpy.ndarray.imag : NumPy equivalent attribute.
- numpy.imag : NumPy equivalent function.
+ - [`numpy.ndarray.imag`][] : NumPy equivalent attribute.
+ - [`numpy.imag`][] : NumPy equivalent function.
"""
return self.__array_ufunc__(np.imag, "__call__", self)
@@ -956,8 +944,8 @@ def conj(self):
See Also
--------
- numpy.ndarray.conj : NumPy equivalent method.
- numpy.conj : NumPy equivalent function.
+ - [`numpy.ndarray.conj`][] : NumPy equivalent method.
+ - [`numpy.conj`][] : NumPy equivalent function.
"""
return np.conj(self)
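To illustrate the reduction methods above (`sum`, `max`, `mean`, `std`) and the `keepdims` behaviour, a small comparison against the dense NumPy results:

```python
import numpy as np
import sparse

x = np.array([[1, 0, 0, 2],
              [0, 3, 0, 0]])
s = sparse.COO.from_numpy(x)

# Axis reductions return sparse arrays; compare against the dense results.
assert (s.sum(axis=0).todense() == x.sum(axis=0)).all()
assert (s.max(axis=0).todense() == x.max(axis=0)).all()

# keepdims preserves the reduced axis with length 1, mirroring numpy.
m = s.mean(axis=1, keepdims=True)
assert m.shape == (2, 1)
assert np.allclose(m.todense(), x.mean(axis=1, keepdims=True))

assert np.allclose(s.std(axis=0).todense(), x.std(axis=0))
```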
diff --git a/sparse/numba_backend/_umath.py b/sparse/numba_backend/_umath.py
index 786f062f6..776a2b470 100644
--- a/sparse/numba_backend/_umath.py
+++ b/sparse/numba_backend/_umath.py
@@ -19,8 +19,8 @@ def elemwise(func, *args, **kwargs):
func : Callable
The function to apply. Must support broadcasting.
*args : tuple, optional
- The arguments to the function. Can be :obj:`SparseArray` objects
- or :obj:`scipy.sparse.spmatrix` objects.
+ The arguments to the function. Can be [`sparse.SparseArray`][] objects
+ or [`scipy.sparse.spmatrix`][] objects.
**kwargs : dict, optional
Any additional arguments to pass to the function.
@@ -37,14 +37,14 @@ def elemwise(func, *args, **kwargs):
See Also
--------
- :obj:`numpy.ufunc` :
- A similar Numpy construct. Note that any :code:`ufunc` can be used
- as the :code:`func` input to this function.
+ [`numpy.ufunc`][] :
+ A similar Numpy construct. Note that any `ufunc` can be used
+ as the `func` input to this function.
Notes
-----
Previously, operations with Numpy arrays were sometimes supported. Now,
- it is necessary to convert Numpy arrays to :obj:`COO` objects.
+ it is necessary to convert Numpy arrays to [`sparse.COO`][] objects.
"""
return _Elemwise(func, *args, **kwargs).get_result()
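A sketch of `elemwise` with a unary and a binary ufunc, including the documented requirement that NumPy operands be converted to COO first:

```python
import numpy as np
import sparse

s = sparse.COO.from_numpy(np.array([[0.0, 1.0], [2.0, 0.0]]))

# Any ufunc can be used as `func`; unary and binary forms both work.
sin_s = sparse.elemwise(np.sin, s)
doubled = sparse.elemwise(np.add, s, s)

assert np.allclose(sin_s.todense(), np.sin(s.todense()))
assert (doubled.todense() == 2 * s.todense()).all()

# NumPy operands must be wrapped as COO first, per the note above.
ones = sparse.COO.from_numpy(np.ones((2, 2)))
prod = sparse.elemwise(np.multiply, s, ones)
assert (prod.todense() == s.todense()).all()
```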
@@ -313,7 +313,7 @@ def _get_matching_coords(coords, params):
Parameters
----------
- coords : list[numpy.ndarray]
+ coords : list[`numpy.ndarray`]
The input coordinates.
params : list[Union[bool, none]]
The broadcast parameters.
@@ -343,7 +343,7 @@ def _get_matching_coords(coords, params):
def broadcast_to(x, shape):
"""
- Performs the equivalent of :obj:`numpy.broadcast_to` for :obj:`COO`. Note that
+ Performs the equivalent of [`numpy.broadcast_to`][] for [`sparse.COO`][]. Note that
this function returns a new array instead of a view.
Parameters
diff --git a/sparse/numba_backend/_utils.py b/sparse/numba_backend/_utils.py
index 8d1fb5ed1..08b24b84d 100644
--- a/sparse/numba_backend/_utils.py
+++ b/sparse/numba_backend/_utils.py
@@ -243,14 +243,14 @@ def random(
nnz : int, optional
Number of nonzero elements in the generated array.
Mutually exclusive with `density`.
- random_state : Union[numpy.random.Generator, int], optional
+ random_state : Union[numpy.random.Generator, int], optional
Random number generator or random seed. If not given, the
singleton numpy.random will be used. This random state will be used
for sampling the sparsity structure, but not necessarily for sampling
the values of the structurally nonzero entries of the matrix.
data_rvs : Callable
Data generation callback. Must accept one single parameter: number of
- :code:`nnz` elements, and return one single NumPy array of exactly
+ `nnz` elements, and return one single NumPy array of exactly
that length.
format : str
The format to return the output array in.
@@ -264,8 +264,8 @@ def random(
See Also
--------
- :obj:`scipy.sparse.rand` : Equivalent Scipy function.
- :obj:`numpy.random.rand` : Similar Numpy function.
+ - [`scipy.sparse.rand`][] : Equivalent Scipy function.
+ - [`numpy.random.rand`][] : Similar Numpy function.
Examples
--------
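Finally, a sketch of `sparse.random` using the parameters documented above; the exact `nnz` count assumes it is derived as `int(density * size)`, which holds for this shape.

```python
import numpy as np
import sparse

# A 10%-dense random COO array with a fixed seed for reproducibility.
s = sparse.random((10, 10), density=0.1, random_state=42, format="coo")
assert s.shape == (10, 10)
assert s.nnz == 10  # assumes nnz = int(density * size)

# A data_rvs callback controls how the nonzero values themselves are drawn.
rng = np.random.default_rng(0)
t = sparse.random((5, 5), density=0.2, data_rvs=lambda n: rng.integers(1, 10, size=n))
```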