update links after moving LLAMA repository into alpaka-group
bernhardmgruber committed Sep 30, 2020
1 parent 9926702 commit 2b1f601
Showing 5 changed files with 9 additions and 9 deletions.
2 changes: 1 addition & 1 deletion documentation/pages/blobs.rst
@@ -94,7 +94,7 @@ Alpaka
The following descriptions are for alpaka users.
Without an understanding of alpaka, they may be hard to understand.

LLAMA features some examples using `alpaka <https://github.com/ComputationalRadiationPhysics/alpaka>`_ for the abstraction of computation parallelization.
LLAMA features some examples using `alpaka <https://github.com/alpaka-group/alpaka>`_ for the abstraction of computation parallelization.
Alpaka has its own memory allocation functions for different memory regions (e.g. host, device and shared memory).
Additionally, there are some CUDA-inherited rules which make e.g. sharing memory regions hard (e.g. there is no possibility to use a :cpp:`std::shared_ptr` on a GPU).

2 changes: 1 addition & 1 deletion documentation/pages/install.rst
@@ -7,7 +7,7 @@ Getting LLAMA
-------------

The most recent version of LLAMA can be found at
`GitHub <https://github.com/ComputationalRadiationPhysics/llama>`_.
`GitHub <https://github.com/alpaka-group/llama>`_.

All examples use CMake and the library itself provides a
:bash:`llama-config.cmake` to be found by CMake. Although LLAMA is a header-only
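As a hedged sketch of the CMake usage described above (the imported target name `llama::llama` is an assumption for illustration, not confirmed by this page):

```cmake
cmake_minimum_required(VERSION 3.15)
project(myapp CXX)

# locate the llama-config.cmake shipped with the library
find_package(llama CONFIG REQUIRED)

add_executable(myapp main.cpp)
# header-only: linking only propagates include paths and compile features
target_link_libraries(myapp PRIVATE llama::llama)
```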
4 changes: 2 additions & 2 deletions documentation/pages/introduction.rst
@@ -10,7 +10,7 @@ This often requires separate code paths depending on the target system.
But even then, sometimes projects last for decades while new architectures rise and fall, making it dangerous to settle for a specific data structure.

Performance-portable parallelism to exhaust multi-core, many-core and GPU hardware is addressed by recent developments like
`alpaka <https://github.com/ComputationalRadiationPhysics/alpaka>`_ or
`alpaka <https://github.com/alpaka-group/alpaka>`_ or
`Kokkos <https://github.com/kokkos/kokkos>`_.

However, efficient use of a system's memory and cache hierarchies is crucial as well and equally heterogeneous.
@@ -76,7 +76,7 @@ computation devices, the image sensor data format and the problem size may vary
and a fast and easy adaption of the code is needed.

The shipped
`examples <https://github.com/ComputationalRadiationPhysics/llama/tree/master/examples>`_
`examples <https://github.com/alpaka-group/llama/tree/master/examples>`_
of LLAMA try to showcase the implemented features in their intended usage.

Challenges
8 changes: 4 additions & 4 deletions documentation/pages/views.rst
@@ -92,7 +92,7 @@ but as the elements of this datum may not be contiguous in memory, it is call

Nevertheless, it can be used like a real local object.
A virtual datum can be passed as an argument to a function (as seen in the
`nbody example <https://github.com/ComputationalRadiationPhysics/llama/blob/master/examples/nbody/nbody.cpp>`_
`nbody example <https://github.com/alpaka-group/llama/blob/master/examples/nbody/nbody.cpp>`_
).
Furthermore, several arithmetic and logical operators are overloaded:

@@ -230,7 +230,7 @@ This enables e.g. easily adding a velocity to a position like this:
datum(pos{}) += datum(vel{});

This is e.g. used in the
`nbody example <https://github.com/ComputationalRadiationPhysics/llama/blob/master/examples/nbody/nbody.cpp>`_
`nbody example <https://github.com/alpaka-group/llama/blob/master/examples/nbody/nbody.cpp>`_
to update the particle velocities based on the distances between particles and to
update the positions by moving with the velocity for one time step.

@@ -358,7 +358,7 @@ the coordinate of the leaf in the datum domain tree, the functor is called on.
});

A more detailed example can be found in the
`simpletest example <https://github.com/ComputationalRadiationPhysics/llama/blob/master/examples/simpletest/simpletest.cpp>`_.
`simpletest example <https://github.com/alpaka-group/llama/blob/master/examples/simpletest/simpletest.cpp>`_.

Thoughts on copies between views
--------------------------------
@@ -399,7 +399,7 @@ same mapping but possibly different than in :math:`A` **and** :math:`B` the copy
problem can be split into smaller chunks of memory. It also makes sense to combine
this approach with an asynchronous workflow where reindexing, copying and
computation are overlapped, as e.g. seen in the
`async copy example <https://github.com/ComputationalRadiationPhysics/llama/blob/master/examples/asynccopy/asynccopy.cpp>`_.
`async copy example <https://github.com/alpaka-group/llama/blob/master/examples/asynccopy/asynccopy.cpp>`_.

Another benefit is that the creation and copying of the intermediate view can
be analyzed and optimized by the compiler (e.g. with vector operations).
2 changes: 1 addition & 1 deletion include/llama/macros.hpp
@@ -54,7 +54,7 @@
/// "resides" on the host, the accelerator (the offloading device) or both.
/// LLAMA supports this with marking every function needed on an accelerator
/// with `LLAMA_FN_HOST_ACC_INLINE`. When using such a language (or e.g. <a
/// href="https://github.com/ComputationalRadiationPhysics/alpaka">alpaka</a>)
/// href="https://github.com/alpaka-group/alpaka">alpaka</a>)
/// this macro should be defined on the compiler's command line. E.g. for
/// alpaka: -D'LLAMA_FN_HOST_ACC_INLINE=ALPAKA_FN_HOST_ACC'
# define LLAMA_FN_HOST_ACC_INLINE inline
