Merged
31 changes: 26 additions & 5 deletions docs/README.md
@@ -42,7 +42,18 @@ This folder contains the source of TVM's documentation, hosted at https://tvm.ap

### Native

1. [Build TVM](https://tvm.apache.org/docs/install/from_source.html) first in the repo root folder
1. [Build TVM](https://tvm.apache.org/docs/install/from_source.html) first in the repo root folder, then make it importable:

```bash
export TVM_HOME=/path-to-tvm
export TVM_LIBRARY_PATH=$TVM_HOME/build
pip install --target=$TVM_HOME/python $TVM_HOME/3rdparty/tvm-ffi
export PYTHONPATH=$TVM_HOME/python:$PYTHONPATH
```

`docs/conf.py` unconditionally imports `tvm` at startup, so the build will
fail immediately if TVM is not importable.
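
Since the failure mode is a plain `ImportError`, it can be cheaper to confirm importability before starting a build. A minimal sketch, assuming only the standard library:

```python
import importlib.util

def is_importable(name: str) -> bool:
    """Return True if `name` can be resolved on the current sys.path."""
    return importlib.util.find_spec(name) is not None

# conf.py runs `import tvm` at startup; checking up front avoids a failed
# sphinx invocation (prints False until TVM's python dir is on PYTHONPATH).
print(is_importable("tvm"))
```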

2. Install dependencies

```bash
@@ -92,15 +103,25 @@ python tests/scripts/ci.py docs --tutorial-pattern=file_name\.py

## Helper Scripts

You can run the following script to reproduce the CI sphinx pre-check stage.
This script skips the tutorial executions and is useful to quickly check the content.
The following script mirrors the CI docs pipeline: it runs a sphinx pre-check
(when not running locally) and then performs a full `make htmldeploy` build,
including tutorial execution. You will need a GPU CI environment.

```bash
tests/scripts/task_python_docs.sh
```

The following script runs the full build which includes tutorial executions.
You will need a GPU CI environment.
To build docs locally without executing tutorials (fastest local iteration):

```bash
cd docs && TVM_TUTORIAL_EXEC_PATTERN=none make html
```
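
`TVM_TUTORIAL_EXEC_PATTERN` acts as a regex over tutorial file names; the literal string `none` simply matches no file. A rough sketch of the gating (an illustration of the behavior, not the actual sphinx-gallery hook):

```python
import re

def should_execute(fname: str, pattern: str) -> bool:
    """A tutorial runs only if its file name matches the pattern;
    "none" matches no tutorial file name, so everything is skipped."""
    return re.search(pattern, fname) is not None

print(should_execute("matmul.py", "none"))         # False: skipped
print(should_execute("matmul.py", r"matmul\.py"))  # True: executed
```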

Note: the sphinx pre-check (warning validation) only runs in CI (`IS_LOCAL=0`).
`python tests/scripts/ci.py docs` always sets `IS_LOCAL=1` and skips the
pre-check regardless of other flags.
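
That gating can be sketched as a plain environment check (an illustration of the behavior described above, not the script itself; the default when `IS_LOCAL` is unset is an assumption here):

```python
import os

def runs_precheck(env=None) -> bool:
    """The sphinx pre-check runs only when IS_LOCAL is "0" (CI)."""
    env = os.environ if env is None else env
    return env.get("IS_LOCAL", "0") == "0"

print(runs_precheck({"IS_LOCAL": "1"}))  # False: local run, pre-check skipped
print(runs_precheck({"IS_LOCAL": "0"}))  # True: CI, pre-check runs
```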

To run the full build including tutorial executions:

```bash
python tests/scripts/ci.py docs --full
29 changes: 17 additions & 12 deletions docs/conf.py
@@ -240,20 +240,18 @@ def install_request_hook(gallery_conf, fname):
pip install apache-tvm=={version}"""

INSTALL_TVM_CUDA_DEV = """\
%%shell
# Installs the latest dev build of TVM from PyPI, with CUDA enabled. To use this,
# you must request a Google Colab instance with a GPU by going to Runtime ->
# Change runtime type -> Hardware accelerator -> GPU. If you wish to build from
# source, see https://tvm.apache.org/docs/install/from_source.html
pip install tlcpack-nightly-cu113 --pre -f https://tlcpack.ai/wheels"""
%%markdown
> **Note:** This tutorial requires a CUDA-enabled build of TVM.
> Pre-built CUDA wheels are not currently available on PyPI.
> Please build TVM from source with CUDA enabled before running this notebook:
> https://tvm.apache.org/docs/install/from_source.html"""

INSTALL_TVM_CUDA_FIXED = f"""\
%%shell
# Installs TVM version {version} from PyPI, with CUDA enabled. To use this,
# you must request a Google Colab instance with a GPU by going to Runtime ->
# Change runtime type -> Hardware accelerator -> GPU. If you wish to build from
# source, see https://tvm.apache.org/docs/install/from_source.html
pip install apache-tvm-cu113=={version} --no-index -f https://tlcpack.ai/wheels"""
%%markdown
> **Note:** This tutorial requires a CUDA-enabled build of TVM (version {version}).
> Pre-built CUDA wheels are not currently available on PyPI.
> Please build TVM from source with CUDA enabled before running this notebook:
> https://tvm.apache.org/docs/install/from_source.html"""


@monkey_patch("sphinx_gallery.gen_rst", "jupyter_notebook")
@@ -552,6 +550,13 @@ def fixup_tutorials(original_url: str) -> str:
"v0.14.0/",
"v0.15.0/",
"v0.16.0/",
"v0.17.0/",
"v0.18.0/",
"v0.19.0/",
"v0.20.0/",
"v0.21.0/",
"v0.22.0/",
"v0.23.0/",
],
"display_github": True,
"github_user": "apache",
3 changes: 2 additions & 1 deletion docs/contribute/document.rst
@@ -222,7 +222,8 @@ existing environment to demonstrate the usage.

If you add a new categorization of how-to, you will need to add references to
`conf.py <https://github.com/apache/tvm/tree/main/docs/conf.py>`_ and the
`how-to index <https://github.com/apache/tvm/tree/main/docs/how_to/dev/index.rst>`_
`top-level docs index <https://github.com/apache/tvm/tree/main/docs/index.rst>`_
(how-to entries are registered there directly, not in a separate how-to index).

Refer to Another Location in the Document
-----------------------------------------
19 changes: 10 additions & 9 deletions docs/install/from_source.rst
@@ -72,12 +72,13 @@ one may simply use:
conda activate tvm-build-venv

.. note::
**For Frontend Contributors (TFLite):** If you plan to run or contribute to the frontend tests (e.g., ``test_frontend_tflite.py``), you must strictly use **Python 3.10**.

The TFLite tests currently require ``tensorflow==2.9.0`` and ``numpy==1.26.4`` to prevent ``_ARRAY_API`` core dumps.
Because TensorFlow 2.9.0 does not provide pre-compiled binaries for Python 3.11 or newer,
setting up your environment with Python 3.11+ will force incompatible pip upgrades
and cause C++ ABI crashes during testing.
**For Frontend Contributors (TFLite):** If you plan to run or contribute to the frontend tests (e.g., ``test_frontend_tflite.py``), you must install ``tensorflow==2.19.0``.

.. code:: bash

pip install tensorflow==2.19.0

Python 3.11 is supported.

Step 2. Get Source from GitHub
------------------------------
@@ -163,8 +164,8 @@ Leaving the build environment ``tvm-build-venv``, there are two ways to install
.. code-block:: bash

export TVM_HOME=/path-to-tvm
pip install $TVM_HOME/3rdparty/tvm-ffi
export PYTHONPATH=$TVM_HOME/python:$TVM_HOME/3rdparty/tvm-ffi/python:$PYTHONPATH
pip install --target=$TVM_HOME/python $TVM_HOME/3rdparty/tvm-ffi
export PYTHONPATH=$TVM_HOME/python:$PYTHONPATH

- Install via pip local project

@@ -173,7 +174,7 @@
conda activate your-own-env
conda install python # make sure python is installed
export TVM_LIBRARY_PATH=/path-to-tvm/build
pip install -e /path-to-tvm/python
pip install -e /path-to-tvm

Step 4. Validate Installation
-----------------------------
60 changes: 60 additions & 0 deletions docs/reference/api/python/contrib.rst
@@ -37,24 +37,66 @@ tvm.contrib.cc
:members:


tvm.contrib.coreml_runtime
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.coreml_runtime
:members:


tvm.contrib.cublas
~~~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.cublas
:members:


tvm.contrib.cublaslt
~~~~~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.cublaslt
:members:


tvm.contrib.cudnn
~~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.cudnn
:members:


tvm.contrib.dlpack
~~~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.dlpack
:members:


tvm.contrib.dnnl
~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.dnnl
:members:


tvm.contrib.download
~~~~~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.download
:members:


tvm.contrib.emcc
~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.emcc
:members:


tvm.contrib.hipblas
~~~~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.hipblas
:members:


tvm.contrib.mkl
~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.mkl
:members:


tvm.contrib.ndk
~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.ndk
@@ -79,6 +121,12 @@ tvm.contrib.pickle_memoize
:members:


tvm.contrib.popen_pool
~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.popen_pool
:members:


tvm.contrib.random
~~~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.random
@@ -103,6 +151,18 @@ tvm.contrib.tar
:members:


tvm.contrib.thrust
~~~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.thrust
:members:


tvm.contrib.tvmjs
~~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.tvmjs
:members:


tvm.contrib.utils
~~~~~~~~~~~~~~~~~
.. automodule:: tvm.contrib.utils
1 change: 1 addition & 0 deletions docs/reference/api/python/index.rst
@@ -31,6 +31,7 @@ Python API
driver
testing
exec
support

.. toctree::
:maxdepth: 1
23 changes: 23 additions & 0 deletions docs/reference/api/python/support.rst
@@ -0,0 +1,23 @@
.. Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

.. http://www.apache.org/licenses/LICENSE-2.0

.. Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.

tvm.support
-----------
.. automodule:: tvm.support
:members:
:imported-members:
:autosummary:
1 change: 1 addition & 0 deletions python/tvm/contrib/coreml_runtime.py
@@ -23,6 +23,7 @@

def create(symbol, compiled_model_path, device):
"""Create a runtime executor module given a coreml model and context.

Parameters
----------
symbol : str
52 changes: 11 additions & 41 deletions python/tvm/contrib/cudnn.py
@@ -64,50 +64,20 @@ def algo_to_index(algo_type, algo_name):

Parameters
----------
algo_type : str
["fwd", "bwd_filter", "bwd_data]

algo_name : str
algorithm name in cudnn definition
fwd = [
"CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM",
"CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM",
"CUDNN_CONVOLUTION_FWD_ALGO_GEMM",
"CUDNN_CONVOLUTION_FWD_ALGO_DIRECT",
"CUDNN_CONVOLUTION_FWD_ALGO_FFT",
"CUDNN_CONVOLUTION_FWD_ALGO_FFT_TILING",
"CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD",
"CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD_NONFUSED",
"CUDNN_CONVOLUTION_FWD_ALGO_COUNT",
]
bwd_filter = [
"CUDNN_CONVOLUTION_BWD_FILTER_ALGO_0",
# non-deterministic
"CUDNN_CONVOLUTION_BWD_FILTER_ALGO_1",
"CUDNN_CONVOLUTION_BWD_FILTER_ALGO_FFT",
"CUDNN_CONVOLUTION_BWD_FILTER_ALGO_3",
# non-deterministic, algo0 with workspaceS
"CUDNN_CONVOLUTION_BWD_FILTER_ALGO_WINOGRAD",
# not implemented
"CUDNN_CONVOLUTION_BWD_FILTER_ALGO_WINOGRAD_NONFUSED",
"CUDNN_CONVOLUTION_BWD_FILTER_ALGO_FFT_TILING",
"CUDNN_CONVOLUTION_BWD_FILTER_ALGO_COUNT",
]
bwd_data = [
"CUDNN_CONVOLUTION_BWD_DATA_ALGO_0",
# non-deterministic
"CUDNN_CONVOLUTION_BWD_DATA_ALGO_1",
"CUDNN_CONVOLUTION_BWD_DATA_ALGO_FFT",
"CUDNN_CONVOLUTION_BWD_DATA_ALGO_FFT_TILING",
"CUDNN_CONVOLUTION_BWD_DATA_ALGO_WINOGRAD",
"CUDNN_CONVOLUTION_BWD_DATA_ALGO_WINOGRAD_NONFUSED",
"CUDNN_CONVOLUTION_BWD_DATA_ALGO_COUNT",
]
algo_type : str
One of ``"fwd"``, ``"bwd_filter"``, or ``"bwd_data"``.

algo_name : str
Algorithm name as defined in cuDNN. For example:

* fwd: ``CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM``, etc.
* bwd_filter: ``CUDNN_CONVOLUTION_BWD_FILTER_ALGO_0``, etc.
* bwd_data: ``CUDNN_CONVOLUTION_BWD_DATA_ALGO_0``, etc.

Returns
-------
algo: int
algorithm index
algo : int
Algorithm index

"""
idx = -1
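
The mapping the docstring describes amounts to a position-in-list lookup in cuDNN enum order. A hypothetical sketch of that lookup, using an illustrative subset of the forward algorithms (not the full enum, and not the real function, which covers all three algo types):

```python
# Hypothetical subset in cuDNN enum order; illustrative only.
_FWD_ALGOS = [
    "CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM",
    "CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM",
    "CUDNN_CONVOLUTION_FWD_ALGO_GEMM",
]

def algo_to_index_sketch(algo_name: str) -> int:
    """Sketch: the index is the name's position in enum order."""
    return _FWD_ALGOS.index(algo_name)

print(algo_to_index_sketch("CUDNN_CONVOLUTION_FWD_ALGO_GEMM"))  # 2
```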
2 changes: 1 addition & 1 deletion python/tvm/contrib/popen_pool.py
@@ -213,7 +213,7 @@ def is_alive(self):
return False

def send(self, fn, args=(), kwargs=None, timeout=None):
"""Send a new function task fn(*args, **kwargs) to the subprocess.
"""Send a new function task ``fn(*args, **kwargs)`` to the subprocess.

Parameters
----------
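
The `send` signature above follows the common ship-a-callable pattern: a picklable `fn` plus its arguments crosses the process boundary, and the caller later waits for the result. A stdlib analogue of the same contract (a sketch, not TVM's API):

```python
from multiprocessing.pool import ThreadPool

def call_task(fn, args=(), kwargs=None, timeout=None):
    """Run fn(*args, **kwargs) in a worker, waiting up to `timeout`
    seconds for the result (mirrors the send/recv contract above)."""
    kwargs = {} if kwargs is None else kwargs
    with ThreadPool(1) as pool:
        return pool.apply_async(fn, args, kwargs).get(timeout)

print(call_task(pow, (2, 10)))  # 1024
```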