This repository has been archived by the owner on Nov 8, 2022. It is now read-only.

Commit

fix doc format to rst
Ehsan Totoni committed Dec 20, 2015
1 parent 4b8c12e commit 3bb75d9
Showing 3 changed files with 79 additions and 85 deletions.
54 changes: 24 additions & 30 deletions doc/examples.rst
Examples
*********

The ``examples/`` subdirectory has a few example programs demonstrating
how to use ParallelAccelerator. You can run them at the command line.
For instance::

    $ julia ~/.julia/v0.4/ParallelAccelerator/examples/laplace-3d/laplace-3d.jl
    Run laplace-3d with size 300x300x300 for 100 iterations.
    SELFPRIMED 18.663935711
    SELFTIMED 1.527286803
    checksum: 0.49989778

The *SELFTIMED* line in the printed output shows the running time,
while the *SELFPRIMED* line shows the time it takes to compile the
accelerated code and run it with a small "warm-up" input.

Pass the ``--help`` option to see usage information for each example::

    $ julia ~/.julia/v0.4/ParallelAccelerator/examples/laplace-3d/laplace-3d.jl -- --help
    laplace-3d.jl

    Laplace 6-point 3D stencil.

    Usage:
      laplace-3d.jl -h | --help
      laplace-3d.jl [--size=<size>] [--iterations=<iterations>]

    Options:
      -h --help                  Show this screen.
      --size=<size>              Specify a 3d array size (<size> x <size> x <size>); defaults to 300.
      --iterations=<iterations>  Specify a number of iterations; defaults to 100.

You can also run the examples at the *julia>* prompt::

    julia> include("$(homedir())/.julia/v0.4/ParallelAccelerator/examples/laplace-3d/laplace-3d.jl")
    Run laplace-3d with size 300x300x300 for 100 iterations.
    SELFPRIMED 18.612651534
    SELFTIMED 1.355707121
    checksum: 0.49989778

Some of the examples require additional Julia packages. The top-level
``REQUIRE`` file in this repository lists all registered packages that
examples depend on.
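
If an example fails because of a missing dependency, the package can be
installed from the Julia prompt before rerunning it. The following is only a
plausible sketch; ``DocOpt`` is used as an illustration, so check the
``REQUIRE`` file for the actual package names::

    julia> Pkg.add("DocOpt")   # install one of the packages listed in REQUIRE
    julia> Pkg.update()        # optionally bring registered packages up to date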

8 changes: 4 additions & 4 deletions doc/howitworks.rst
elements of input arrays. For the most part, these patterns are already
present in standard Julia, so programmers can use ParallelAccelerator to run
the same Julia program without (significantly) modifying the source code.

The ``@acc`` macro provided by ParallelAccelerator first intercepts Julia
functions at the macro level and substitutes the set of implicitly parallel
operations that we are targeting. ``@acc`` points them to those supplied in the
``ParallelAccelerator.API`` module. It then creates a proxy function that when
called with concrete arguments (and known types) will try to compile the
original function to an optimized form. Therefore, there is some compilation
time the first time an accelerated function is called. The subsequent
calls to the same function will not have compilation time overhead.
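
A minimal sketch of what this looks like in practice (the function body and
array size below are illustrative, not taken from the package's documentation)::

    using ParallelAccelerator

    @acc function scale_and_shift(x)
        # elementwise operations like .* and .+ are among the implicitly
        # parallel patterns that @acc redirects to ParallelAccelerator.API
        return (x .* 2.0) .+ 1.0
    end

    A = rand(1000, 1000)
    scale_and_shift(A)   # first call: the proxy compiles an optimized version
    scale_and_shift(A)   # subsequent calls: no compilation overhead
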
ParallelAccelerator performs aggressive optimizations when the program structure makes them safe.
For example, it will automatically infer size equivalence relations among array
variables and skip array bounds checks whenever it can safely do so. Eventually all
parallel patterns are lowered into explicit parallel *for* loops which are internally
represented at the level of Julia's typed AST. Aggressive loop fusion will
try to combine adjacent loops into one and eliminate temporary array objects
that store intermediate results.
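
For instance (a hypothetical sketch; the function below is made up for
illustration), each of the two elementwise operations in the following code
lowers to a parallel loop, and fusion can combine them so the intermediate
array holding ``x .- y`` need not be materialized::

    using ParallelAccelerator

    @acc function centered_scale(x, y)
        t = x .- y        # first elementwise loop
        return t .* 0.5   # second elementwise loop; fusion can merge the two
    end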
102 changes: 51 additions & 51 deletions doc/limits.rst
Currently, ParallelAccelerator tries to compile Julia to C, which puts some constraints on what
can be successfully compiled and run:

1. We only support a limited subset of Julia language features currently.
   This includes basic numbers and dense array types, a subset of math
   functions, and basic control flow structures. Notably, we do not support
   ``String`` types and custom data types such as records and unions, since their
   translation to C is difficult. There is also no support for exceptions,
   I/O operations (only very limited ``println``), and arbitrary ``ccall``.
   We also do not support keyword arguments.

2. We do not support calling Julia functions from C in the optimized
   function. What this implies is that we transitively convert
   every Julia function in the call chain to C. If any of them is not
   translated properly, the target function with ``@acc`` will fail to compile.

3. We do not support Julia's ``Any`` type in C, mostly to
   defend against erroneous translation. If the AST of a Julia function
   contains a variable with ``Any`` type, our Julia-to-C translator will give up
   compiling the function. This is indeed more limiting than it sounds, because
   Julia does not annotate all expressions in a typed AST with complete type
   information. For example, this happens for some expressions that call Julia's
   own intrinsics. We are working on supporting more of them if we can derive
   the actual type to be not ``Any``, but this is still a work in progress.

At the moment ParallelAccelerator only supports the Julia-to-C back-end. We
are working on alternatives that make use of Julia's upcoming threading
implementation that hopefully can alleviate the above-mentioned restrictions,
without sacrificing much of the speed brought by quality C compilers and
parallel runtimes such as OpenMP.

Apart from the constraints imposed by Julia-to-C translation, our current
implementation of ParallelAccelerator has some other limitations:

1. We currently support a limited subset of Julia functions available in the
   ``Base`` library. However, not all Julia functions in ``Base`` are supported
   yet, and using them may or may not work in ParallelAccelerator. For
   supported functions, we rely on capturing operator names to resolve
   array-related functions and operators to our API module. This prevents them
   from being inlined by Julia, which helps our translation. For unsupported
   functions such as ``mean(x)``, Julia's typed AST for the program that
   contains ``mean(x)`` becomes a lowered call to the low-level sequential
   implementation, which cannot be handled by ParallelAccelerator. Of course,
   adding support for functions like ``mean`` is not a huge effort, and we are
   still in the process of expanding the coverage of supported APIs. A sketch
   of a possible workaround appears after this list.

2. ParallelAccelerator relies heavily on full type information being available
   in Julia's typed AST in order to work properly. Although we do not require
   user functions to be explicitly typed, it is in general a good practice to
   ensure the function that is being accelerated can pass Julia's type inference
   without leaving any parameters or internal variables with an ``Any`` type.
   There is currently no facility to help users understand whether something
   is being optimized or silently rejected. We plan to provide better reporting
   on what is going on under the hood.
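
As a hypothetical sketch of the workaround mentioned above (the function and
variable names are made up, and ``sum``, ``length``, and elementwise ``.-`` are
assumed to be supported operations), an average can be computed from supported
primitives instead of calling ``mean``, while a concretely typed argument helps
the function pass type inference without any ``Any`` variables::

    using ParallelAccelerator

    # Compute the mean from sum() and length() rather than calling mean(x),
    # which may not be supported; the typed argument lets inference assign a
    # concrete type (Float64) to every local instead of Any.
    @acc function centered(x::Array{Float64,1})
        avg = sum(x) / length(x)
        return x .- avg
    end

    centered(rand(10000))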
