
[Doc] better documentation #747

Merged
merged 55 commits on Apr 14, 2020
Changes from 49 commits
Commits (55)
26a7569
[skip ci] update doc Global Settings
archibate Apr 11, 2020
2e66e18
update linalg
archibate Apr 11, 2020
9e9265a
[skip ci] update type
archibate Apr 11, 2020
21df5e6
update global settings again
archibate Apr 11, 2020
65afc11
update gui&imgio to util
archibate Apr 11, 2020
5a45140
[skip ci] tmp work save
archibate Apr 11, 2020
0e699ff
add vector.rst
archibate Apr 11, 2020
0fab826
fix typoly
archibate Apr 11, 2020
b11cdf8
add atomic
archibate Apr 11, 2020
4bf9a7b
[skip ci] tmp work on tensor of scalars
archibate Apr 11, 2020
d5a2c3b
[skip ci] applying reviews by @k-ye
archibate Apr 11, 2020
1427d5a
[skip ci] add table
archibate Apr 11, 2020
ac92ead
[skip ci] apply reviews by @yuanming-hu
archibate Apr 11, 2020
342eec7
[skip ci] typo
archibate Apr 11, 2020
bd39eb3
[skip ci] typo2
archibate Apr 11, 2020
34a1e02
[skip ci] update hello
archibate Apr 11, 2020
f7509ee
[skip ci] change tables
archibate Apr 12, 2020
af695ec
[skip ci] update vector.rst
archibate Apr 12, 2020
38904c7
[skip ci] update dev_install.rst
archibate Apr 12, 2020
ef5e54f
[skip ci] update
archibate Apr 12, 2020
99440a4
[skip ci] update again
archibate Apr 12, 2020
5c8fbba
[skip ci] upd
archibate Apr 12, 2020
5a9291d
[skip ci] u
archibate Apr 12, 2020
7762a54
[skip ci] update
archibate Apr 12, 2020
b6ddde0
[skip ci] update break usage
archibate Apr 12, 2020
9e7f737
[skip ci] upd
archibate Apr 12, 2020
7b1bcf3
[skip ci]
archibate Apr 12, 2020
4c66e29
[skip ci] a
archibate Apr 12, 2020
3e65f78
[skip ci] s
archibate Apr 12, 2020
340436f
[skip ci] Merge branch 'master' into doc
archibate Apr 12, 2020
f8c5c2a
[skip ci] !
archibate Apr 12, 2020
cecb3b6
[skip ci] overhaul
archibate Apr 12, 2020
c7b6095
[skip ci] update scalar_tensor
archibate Apr 12, 2020
0082022
[skip ci] misc
archibate Apr 12, 2020
8b72b0f
[skip ci] rename to References
archibate Apr 12, 2020
b7ec4ae
[skip ci] swap
archibate Apr 12, 2020
f19f228
[skip ci] gsm
archibate Apr 12, 2020
4669334
Merge branch 'master' into doc
archibate Apr 13, 2020
ada791a
[skip ci] gsm init recursive
archibate Apr 13, 2020
57fdd42
Merge branch 'doc' of github.com:archibate/taichi into doc
archibate Apr 13, 2020
71c41ae
[skip ci] enforce code format
taichi-gardener Apr 13, 2020
a344fd4
[skip ci] Merge branch 'master' into doc
archibate Apr 14, 2020
5d97289
[skip ci] Merge branch 'master' into doc
archibate Apr 14, 2020
5521c03
[skip ci] Merge branch 'doc' of github.com:archibate/taichi into doc
archibate Apr 14, 2020
a126e0e
update lists.py
archibate Apr 14, 2020
671142e
xxx
archibate Apr 14, 2020
82ea5d7
[skip ci] fix WARNING
archibate Apr 14, 2020
a85ef7e
[skip ci] fix contributor_guide
archibate Apr 14, 2020
61c9704
update atomic.rst
yuanming-hu Apr 14, 2020
94416d3
update syntax and contributor_guide
yuanming-hu Apr 14, 2020
543234a
[skip ci] reviews
archibate Apr 14, 2020
4350e32
[skip ci] Merge branch 'doc' of github.com:archibate/taichi into doc
archibate Apr 14, 2020
b1c4191
[skip ci] fix typo
archibate Apr 14, 2020
1fd7210
update dev-install
archibate Apr 14, 2020
443b6c3
[skip ci] finalize
yuanming-hu Apr 14, 2020
74 changes: 74 additions & 0 deletions docs/atomic.rst
@@ -0,0 +1,74 @@
.. _atomic:

Atomic operations
=================

In Taichi, augmented assignments (e.g., ``x[i] += 1``) are automatically `atomic <https://en.wikipedia.org/wiki/Fetch-and-add>`_.


.. warning::

When accumulating to global variables in parallel, make sure you use atomic operations. For example, to compute the sum of all elements in ``x``,
::

@ti.kernel
def sum():
for i in x:
# Approach 1: OK
total[None] += x[i]

# Approach 2: OK
ti.atomic_add(total[None], x[i])

# Approach 3: Wrong result since the operation is not atomic.
total[None] = total[None] + x[i]


.. note::
When atomic operations are applied to local values, the Taichi compiler will try to demote these operations into their non-atomic counterparts.

Apart from augmented assignments, explicit atomic operations such as ``ti.atomic_add`` also perform read-modify-write atomically.
These operations additionally return the **old value** of the first argument. Below is the full list of explicit atomic operations:

.. function:: ti.atomic_add(x, y)
.. function:: ti.atomic_sub(x, y)

Atomically compute ``x + y``/``x - y`` and store the result to ``x``.

:return: The old value of ``x``.

For example,
::

x = 3
y = 4
z = ti.atomic_add(x, y)
# now x = 7, y = 4, z = 3


.. function:: ti.atomic_and(x, y)
.. function:: ti.atomic_or(x, y)
.. function:: ti.atomic_xor(x, y)

Atomically compute ``x & y`` (bitwise and), ``x | y`` (bitwise or), ``x ^ y`` (bitwise xor) and store the result to ``x``.

:return: The old value of ``x``.
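
For example, a minimal sketch of the bitwise atomics (the tensor name ``flags`` and the mask value below are only illustrative, not part of the API)::

    import taichi as ti
    ti.init()

    flags = ti.var(dt=ti.i32, shape=8)

    @ti.kernel
    def set_mask():
        for i in flags:
            # atomically OR the mask 0b100 into flags[i];
            # `old` receives the value stored before the update
            old = ti.atomic_or(flags[i], 4)

    set_mask()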


.. note::

Supported atomic operations on each backend:

+----------+-----------+-----------+---------+
| type | CPU/CUDA | OpenGL | Metal |
+==========+===========+===========+=========+
| ``i32`` | OK | OK | OK |
+----------+-----------+-----------+---------+
| ``f32`` | OK | OK | OK |
+----------+-----------+-----------+---------+
| ``i64`` | OK | EXT | MISS |
+----------+-----------+-----------+---------+
| ``f64`` | OK | EXT | MISS |
+----------+-----------+-----------+---------+

(OK: supported; EXT: requires extension; MISS: not supported)
2 changes: 1 addition & 1 deletion docs/compilation.rst
@@ -101,7 +101,7 @@ which allows a series of further IR passes to happen, such as
- Atomic operation demotion

The just-in-time (JIT) compilation engine
---------------------------------------
-----------------------------------------

Finally, the optimized SSA IR is fed into the LLVM IR codegen, and LLVM JIT generates high-performance executable CPU/GPU programs.

2 changes: 1 addition & 1 deletion docs/contributor_guide.rst
@@ -77,7 +77,7 @@ Using continuous integration
----------------------------

- Continuous Integration (CI) will **build** and **test** your commits in a PR in multiple environments.
- Currently, Taichi uses `"Travis CI" <https://travis-ci.org>`_(for OS X and Linux) and `"AppVeyor" <https://www.appveyor.com>`_(for Windows).
- Currently, Taichi uses `"Travis CI" <https://travis-ci.org>`_ (for OS X and Linux) and `"AppVeyor" <https://www.appveyor.com>`_ (for Windows).
- CI will be triggered every time you push commits to an open PR.
- You can prepend ``[skip ci]`` to your commit message to avoid triggering CI, e.g. ``[skip ci] This commit will not trigger CI``.
- A tick to the right of a commit hash means CI passed; a cross means CI failed.
4 changes: 4 additions & 0 deletions docs/cpp_style.rst
@@ -28,3 +28,7 @@ Don'ts
- ``NULL``, use ``nullptr`` instead.
- ``using namespace std;`` in global scope.
- ``typedef``. Use ``using`` instead.

Automatic code formatting
--------------------------------------------------------------------------------
- Please run ``ti format`` to automatically format your code.
38 changes: 32 additions & 6 deletions docs/dev_install.rst
@@ -11,6 +11,9 @@ For precise build instructions on Windows, please check out `appveyor.yml <https

Note that on Linux/OS X, ``clang`` is the only supported compiler for building the Taichi compiler. On Windows, only MSVC is supported.

Installing Dependencies
---------------------------------------------

- Make sure you are using Python 3.6/3.7/3.8
- Execute

@@ -19,39 +22,62 @@ Note that on Linux/OS X, ``clang`` is the only supported compiler for compiling
python3 -m pip install --user setuptools astpretty astor pytest opencv-python pybind11
python3 -m pip install --user Pillow numpy scipy GitPython yapf colorama psutil autograd

- (If on Ubuntu) Execute ``sudo apt install libtinfo-dev clang-8``. ``clang-7`` should work as well.
- Make sure you have LLVM 8.0.1 built from scratch (`Download <https://github.com/llvm/llvm-project/releases/download/llvmorg-8.0.1/llvm-8.0.1.src.tar.xz>`_). To do so, download and unzip the llvm source, move to the llvm folder, and execute
* (If on Ubuntu) Execute ``sudo apt install libtinfo-dev clang-8`` (``clang-7`` should work as well).

* (If on Arch Linux) Execute

.. code-block:: bash

wget https://archive.archlinux.org/packages/c/clang/clang-8.0.1-1-x86_64.pkg.tar.xz
sudo pacman -U clang-8.0.1-1-x86_64.pkg.tar.xz

.. warning::
If you have installed ``clang`` (9.0.1) before, this command will override the existing ``clang``.
If you don't want to break dependencies, please build ``clang-8`` from scratch and install it in ``/opt``. Then add ``/opt/clang/bin`` to your ``$PATH``.


- Make sure you have LLVM 8.0.1 built from scratch. To do so:

.. code-block:: bash

wget https://github.com/llvm/llvm-project/releases/download/llvmorg-8.0.1/llvm-8.0.1.src.tar.xz
tar xvJf llvm-8.0.1.src.tar.xz
cd llvm-8.0.1.src
mkdir build
cd build
cmake .. -DLLVM_ENABLE_RTTI:BOOL=ON -DBUILD_SHARED_LIBS:BOOL=OFF -DCMAKE_BUILD_TYPE=Release -DLLVM_TARGETS_TO_BUILD="X86;NVPTX" -DLLVM_ENABLE_ASSERTIONS=ON
# If you are building on NVIDIA Jetson TX2, use -DLLVM_TARGETS_TO_BUILD="ARM;NVPTX"
make -j 8
sudo make install

- Clone the taichi repo, and then
Setting up Taichi for development
---------------------------------------------

- Clone the taichi repo, and build:

.. code-block:: bash

git clone https://github.com/taichi-dev/taichi --depth=1 --branch=master
git submodule update --init --recursive --depth=1
cd taichi
mkdir build
cd build
cmake ..
# if you are building with CUDA, say, 10.0, then please use "cmake .. -DCUDA_VERSION=10.0 -DTI_WITH_CUDA:BOOL=True"
# if you are building with CUDA 10.0, use this:
# cmake .. -DCUDA_VERSION=10.0 -DTI_WITH_CUDA:BOOL=True
make -j 8

- Add the following to your ``~/.bashrc`` (or ``~/.zshrc`` if you use ``zsh``)
- Add the following lines to your ``~/.bashrc``:

.. code-block:: bash

export TAICHI_REPO_DIR=/home/XXX/taichi # Path to your taichi repository
export PYTHONPATH=$TAICHI_REPO_DIR/python/:$PYTHONPATH
export PATH=$TAICHI_REPO_DIR/bin/:$PATH
# export PATH=/opt/llvm/bin:$PATH # Uncomment if your llvm-8 or clang-8 is in /opt

- Execute ``source ~/.bashrc`` to reload shell config
- Execute ``ti test`` to run all the tests. It may take up to 5 minutes to run all tests. (On Windows the ``ti`` command should be replaced by ``python -m taichi``)
- Execute ``ti test`` to run all the tests. It may take up to 5 minutes to run all tests. (On Windows, please execute ``python3 -m taichi test`` instead)
- Check out ``examples`` for runnable examples. Run them with ``python3``.


6 changes: 5 additions & 1 deletion docs/global_settings.rst
@@ -3,4 +3,8 @@ Global Settings

- Restart the Taichi runtime system (clear memory, destroy all variables and kernels): ``ti.reset()``
- Eliminate verbose outputs: ``ti.get_runtime().set_verbose(False)``
- To specify which GPU to use: ``export CUDA_VISIBLE_DEVICES=0``
- Do not trigger GDB on crashes: ``ti.set_gdb_trigger(False)``
- Show more detailed logs (TI_TRACE): ``export TI_LOG_LEVEL=trace``
- To specify which GPU to use for CUDA: ``export CUDA_VISIBLE_DEVICES=0``
- To specify which architecture (arch) to use: ``export TI_ARCH=cuda``
- To print the generated intermediate IR: ``export TI_PRINT_IR=1``
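
For example, a minimal sketch combining a few of the switches above from the Python side (the choice of ``arch`` here is only illustrative)::

    import taichi as ti

    ti.init(arch=ti.cuda)      # roughly the same effect as `export TI_ARCH=cuda`
    ti.set_gdb_trigger(False)  # do not trigger GDB on crashes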
53 changes: 42 additions & 11 deletions docs/hello.rst
@@ -3,13 +3,11 @@ Hello, world!

We introduce the Taichi programming language through a very basic `fractal` example.

If you haven't done so, please install Taichi via ``pip``.
Depending on your hardware and OS, please execute one of the following commands:
If you haven't done so, please install Taichi via ``pip``:

.. code-block:: bash

# Python 3.6+ needed

python3 -m pip install taichi

Now you are ready to run the Taichi code below (``python3 fractal.py``) to compute a
@@ -30,7 +28,7 @@

@ti.func
def complex_sqr(z):
return ti.Vector([z[0] * z[0] - z[1] * z[1], z[1] * z[0] * 2])
return ti.Vector([z[0] ** 2 - z[1] ** 2, z[1] * z[0] * 2])

@ti.kernel
def paint(t: ti.f32):
@@ -65,16 +63,35 @@ You can also reuse the package management system, Python IDEs, and existing Python packages.
Portability
-----------------

Taichi supports both CPUs and NVIDIA GPUs.
Taichi code can run on either CPUs or GPUs, depending on your platform:

.. code-block:: python

# Run on GPU
# Run on NVIDIA GPU, requires CUDA to be installed
ti.init(arch=ti.cuda)
# Run on GPU, with OpenGL backend
ti.init(arch=ti.opengl)
# Run on GPU, with Metal backend, if you're on OS X
ti.init(arch=ti.metal)
# Run on CPU (default)
ti.init(arch=ti.x64)

If the machine does not have CUDA support, Taichi will fall back to CPUs instead.
.. note::
Supported archs on different platforms:

+----------+------+------+--------+-------+
| platform | CPU | CUDA | OpenGL | Metal |
+==========+======+======+========+=======+
| Windows | OK | WIP | WIP | MISS |
+----------+------+------+--------+-------+
| Linux | OK | OK | OK | MISS |
+----------+------+------+--------+-------+
| Mac OS X | OK | MISS | MISS | OK |
+----------+------+------+--------+-------+

(OK=supported, WIP=work in progress, MISS=not supported)

If the machine does not have CUDA support, Taichi will fall back to CPUs instead.

.. note::

@@ -86,7 +103,7 @@
On other platforms Taichi will make use of its on-demand memory allocator to adaptively allocate memory.

(Sparse) Tensors
-------
----------------

Taichi is a data-oriented programming language, where dense or spatially-sparse tensors are first-class citizens.
See :ref:`sparse` for more details on sparse tensors.
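
For instance, a minimal sketch of declaring a dense tensor of scalars (the name ``pixels`` and the shape are only illustrative)::

    import taichi as ti
    ti.init()

    # a dense 320x320 tensor of 32-bit floating-point scalars
    pixels = ti.var(dt=ti.f32, shape=(320, 320))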
@@ -110,7 +127,7 @@ You can also define Taichi **functions** with ``ti.func``, which can be called and reused by kernels and other functions.

.. warning::

Taichi kernels must be called in the Python-scope. I.e., **nested Taichi kernels are not supported**.
Taichi kernels must be called in the Python-scope. I.e., **nested kernels are not supported**.
Nested functions are allowed. **Recursive functions are not supported for now**.

Taichi functions can only be called in Taichi-scope.
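
As a minimal sketch of how the two decorators work together (the names ``x``, ``double`` and ``fill`` are only illustrative)::

    import taichi as ti
    ti.init()

    x = ti.var(dt=ti.f32, shape=16)

    @ti.func
    def double(a):   # a Taichi function: callable only from Taichi-scope
        return a * 2.0

    @ti.kernel
    def fill():      # a Taichi kernel: callable from Python-scope
        for i in x:
            x[i] = double(i)

    fill()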
@@ -171,9 +188,23 @@ In the fractal code above, ``for i, j in pixels`` loops over all the pixel coordinates
for i in x:
...

.. warning::
.. note::
``break`` is not supported in **outermost (parallelized)** loops:

Struct-for's must be at the outer-most scope of kernels.
.. code-block:: python

@ti.kernel
def foo():
for i in x:
...
break # ERROR! You cannot break a parallelized loop!

@ti.kernel
def foo():
for i in x:
for j in y:
...
break # OK


Interacting with Python
43 changes: 25 additions & 18 deletions docs/index.rst
@@ -6,58 +6,65 @@ The Taichi Programming Language
:maxdepth: 3

overview
hello


.. toctree::
:caption: Basic Concepts
:maxdepth: 3

hello

syntax

type
tensor_matrix
atomic
external

linalg

tensor_matrix
.. toctree::
:caption: API References
:maxdepth: 3

global_settings
scalar_tensor
vector
linalg

external

.. toctree::
:caption: Advanced Programming
:maxdepth: 3

meta

data_layout

sparse

differentiable_programming

odop

compilation

syntax_sugars


.. toctree::
:caption: Miscellaneous
:maxdepth: 3
:caption: Contribution
:maxdepth: 1

utilities
dev_install
contributor_guide
cpp_style
internal
faq


.. toctree::
:caption: Miscellaneous
:maxdepth: 3

utilities
global_settings
performance
acknowledgments
faq


.. toctree::
:caption: Legacy
:maxdepth: 3

installation
legacy_installation
1 change: 1 addition & 0 deletions docs/internal.rst
@@ -7,6 +7,7 @@ Vector type system

Intermediate representation
---------------------------------------
Use ``ti.init(print_ir=True)`` to print the intermediate IR of compiled kernels to the console.
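
A minimal sketch (the tensor and kernel here are only placeholders so that something gets compiled)::

    import taichi as ti
    ti.init(print_ir=True)

    x = ti.var(dt=ti.f32, shape=4)

    @ti.kernel
    def fill():
        for i in x:
            x[i] = i * 2.0

    fill()  # the intermediate IR of `fill` is printed when it is compiled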


Code generation
File renamed without changes.