---
title: Introduction to Performance Libraries
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Introduction to Performance Libraries

The C++ Standard Library provides a collection of classes and functions that are essential for everyday programming tasks, such as data structures, algorithms, and input/output operations. It is designed to be versatile and easy to use, ensuring compatibility and portability across different platforms. However, this portability comes with limitations: performance-sensitive applications may wish to take maximum advantage of the hardware's capabilities, and this is where performance libraries come in.

Performance libraries are specialized for high-performance computing tasks and are often tailored to the microarchitecture of a specific processor. They are optimized for speed and efficiency, often leveraging hardware-specific features such as vector units to achieve maximum performance. Performance libraries are crafted through extensive benchmarking and optimization, and can be domain-specific, such as genomics libraries, or produced by Arm for general-purpose computing. For example, OpenRNG focuses on generating random numbers quickly and efficiently, which is crucial for simulations and scientific computations, whereas the C++ Standard Library offers a more general-purpose approach with engines like `std::mt19937` for random number generation.

Performance libraries for Arm CPUs, such as the Arm Performance Libraries (APL), provide highly optimized mathematical functions for scientific computing. An analogous library for accelerating routines on NVIDIA GPUs is cuBLAS. These libraries can be linked dynamically at runtime or statically during compilation, offering flexibility in deployment. They are designed to support multiple versions of the Arm architecture, including those with NEON and SVE extensions. Generally, minimal source code changes are required to use these libraries, making porting and optimisation straightforward.

### Choosing the right version of a library

Performance libraries are often distributed with the following formats to support various use cases.

- **ILP64** uses 64-bit integers, which are often needed for indexing large arrays in scientific computing. In C++ source code, the `long long` type specifies a 64-bit integer.

- **LP64** uses 32-bit integers, which are more common in general-purpose applications.

- **Open Multi-Processing** (OpenMP) is a programming interface for parallelising workloads across many CPU cores on shared-memory systems, supported on multiple platforms (x86, AArch64, and so on). Programmers interact with it primarily through compiler directives, such as `#pragma omp parallel`, indicating which sections of source code can run in parallel and which require synchronisation. This Learning Path does not teach OpenMP; it assumes the reader is already familiar with it.

Arm Performance Libraries, like the x86 equivalent Intel Math Kernel Library (MKL), provide optimised functions for both ILP64 and LP64, in OpenMP and single-threaded implementations. Furthermore, the libraries are available as shared objects for dynamic linking (`*.so`) or as static archives (`*.a`).

### Why do multiple performance Libraries exist?

A natural source of confusion stems from the plethora of similar-seeming performance libraries, for example OpenBLAS and the NVIDIA Performance Libraries (NVPL), each with its own implementation of specific functions such as the basic linear algebra subprograms (BLAS). This raises the question: which one should a developer use?

Multiple performance libraries coexist to cater to the diverse needs of different hardware architectures and applications. For instance, the Arm Performance Libraries are optimized for Arm CPUs, leveraging their unique instruction sets and power efficiency. On the other hand, the NVIDIA Performance Libraries for the Grace CPU are tailored to hardware features specific to NVIDIA's own Neoverse implementation.

- **Cross-Platform Support**: Some libraries are designed to be cross-platform, supporting multiple hardware architectures to provide flexibility and broader usability. For example, the OpenBLAS library supports both Arm and x86 architectures, allowing developers to use the same library across different systems.

- **Domain-Specific Libraries**: Libraries are often created to handle specific domains or types of computations more efficiently. For instance, libraries like cuDNN are optimized for deep learning tasks, providing specialized functions that significantly speed up neural network training and inference.

- **Commercial Libraries**: Alternatively, some highly performant libraries require a license to use. This is more common for domain-specific libraries, such as those for computational chemistry or fluid dynamics.

These factors contribute to the existence of multiple performance libraries, each tailored to meet the specific demands of various hardware and applications.

Invariably, there will be performance differences between libraries, and the best way to observe them is to use each library within your own program. For more information on performance benchmarking, please read [this blog](https://community.arm.com/arm-community-blogs/b/servers-and-cloud-computing-blog/posts/arm-performance-libraries-24-10).

### What performance libraries are available on Arm?

For a directory of community-produced libraries, we recommend looking at the Arm Ecosystem Dashboard. Not every library is available as a prebuilt binary; some may need to be compiled from source. The table below gives an example of such libraries available on Arm, with a link to the full dashboard at the bottom.


| Package / Library | Domain |
| -------- | ------- |
| Minimap2 | Long-read sequence alignment in genomics |
| HMMER | Bioinformatics library for homologous sequence search |
| FFTW | Open-source fast Fourier transform library |
|[Please see the Arm Ecosystem Dashboard](https://www.arm.com/developer-hub/ecosystem-dashboard) for the most comprehensive and up-to-date list.||
---
title: Setting Up Your Environment
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Setting Up Your Environment

In this initial example we will use an Arm-based AWS `t4g.2xlarge` instance running Ubuntu 22.04 LTS along with the Arm Performance Libraries. For instructions to connect to an AWS instance, please see our [getting started guide](https://learn.arm.com/learning-paths/servers-and-cloud-computing/intro/).

Once connected via `ssh`, install the required packages with the following commands.

```bash
sudo apt update
sudo apt install gcc make
```
Next, install the Arm Performance Libraries using the following [installation guide](https://learn.arm.com/install-guides/armpl/). Alternatively, use the commands below.

```bash
wget https://developer.arm.com/-/cdn-downloads/permalink/Arm-Performance-Libraries/Version_24.10/arm-performance-libraries_24.10_deb_gcc.tar
tar xvf arm-performance-libraries_24.10_deb_gcc.tar
cd arm-performance-libraries_24.10_deb/
```

Now we need to install environment modules to set the required environment variables, allowing us to quickly build the example applications.

```bash
sudo add-apt-repository universe
sudo apt install environment-modules
source /usr/share/modules/init/bash
export MODULEPATH=$MODULEPATH:/opt/arm/modulefiles
module avail
```

You should see `armpl/24.10.0_gcc` listed as available.
```output
------------------------------------------------------------------------------------------------------- /opt/arm/modulefiles -------------------------------------------------------------------------------------------------------
armpl/24.10.0_gcc
```

Load the module with the following command.

```bash
module load armpl/24.10.0_gcc
```

Navigate to the `lp64` C source code examples and compile.

```bash
cd $ARMPL_DIR/examples_lp64/
sudo -E make c_examples # -E preserves environment variables
```

Your terminal output should show the examples being compiled, ending with:

```output
...
Test passed OK
```

For more information on all the available functions, please refer to the [Arm Performance Libraries Reference Guide](https://developer.arm.com/documentation/101004/latest/).


---
title: Using Optimised Math Library
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Example using Optimised Math library

The `libamath` library from Arm is an optimized subset of the standard library math functions for Arm-based CPUs, providing both scalar and vector functions at different levels of precision. It includes vectorized versions (Neon and SVE) of common math functions found in the standard library, such as those in the `<cmath>` header.

The trivial snippet below uses the standard `<cmath>` header to calculate the exponential of a scalar value. Copy and paste the code sample into a file named `basic_math.cpp`.

```c++
#include <iostream>
#include <cstdlib> // std::srand, std::rand
#include <ctime>
#include <cmath>   // Include the standard library math header

int main() {
    std::srand(std::time(0));
    double random_number = std::rand() / static_cast<double>(RAND_MAX);
    double result = exp(random_number); // Use the standard exponential function
    std::cout << "Exponential of " << random_number << " is " << result << std::endl;
    return 0;
}
```

Compile it using the following `g++` command. We can use the `ldd` command to print the shared objects required for dynamic linking. Here we observe that the standard math library `libm.so` is linked.

```bash
g++ basic_math.cpp -o basic_math
ldd basic_math
```
You should see the following output.

```output
linux-vdso.so.1 (0x0000f55218587000)
libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000f55218200000)
libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000f55218490000)
libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000f55218050000)
/lib/ld-linux-aarch64.so.1 (0x0000f5521854e000)
libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000f55218460000)
```

## Updating to use Optimised Library

Using the optimised math library `libamath` requires minimal source code changes for our scalar example: modify the include statement to point to the correct header file and add the required compiler flags.

Libamath routines have maximum errors below 4 ULPs, where a ULP (Unit in the Last Place) is the difference between two consecutive representable floating-point numbers at a given precision. These routines support only the default rounding mode (round-to-nearest, ties-to-even). Therefore, switching from libm to libamath results in a small accuracy loss across a range of routines, similar to other vectorised implementations of these functions.

Copy and paste the following C++ snippet into a file named `optimised_math.cpp`.

```c++
#include <iostream>
#include <cstdlib> // std::srand, std::rand
#include <ctime>
#include <amath.h> // Include the Arm Performance Libraries math header

int main() {
    std::srand(std::time(0));
    double random_number = std::rand() / static_cast<double>(RAND_MAX);
    double result = exp(random_number); // Use the optimized exp function from libamath
    std::cout << "Exponential of " << random_number << " is " << result << std::endl;
    return 0;
}
```

Compile it using the following `g++` command. Again, we can use the `ldd` command to print the shared objects required for dynamic linking.

```bash
g++ optimised_math.cpp -o optimised_math -lamath -lm
ldd optimised_math
```
Now we can observe that the `libamath.so` shared object is linked.

```output
linux-vdso.so.1 (0x0000eb1eb379b000)
libamath.so => /opt/arm/armpl_24.10_gcc/lib/libamath.so (0x0000eb1eb35c0000)
libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000eb1eb3200000)
libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000eb1eb3050000)
libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000eb1eb3520000)
/lib/ld-linux-aarch64.so.1 (0x0000eb1eb3762000)
libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000eb1eb34f0000)
```

### What about vector operations?

The naming convention of the Arm Performance Libraries for scalar operations follows that of `libm`; hence, we were able to simply update the header file and recompile. For vector operations, we can either rely on compiler autovectorisation, whereby the compiler generates the vector code for us (the approach used by the Arm Compiler for Linux (ACfL)), or we can call the vector routines directly, which requires name mangling. Mangling modifies function names to ensure uniqueness and avoid conflicts, which is particularly important in compiled languages like C++ and in environments where multiple libraries or modules are used together.

In the context of Arm's AArch64 architecture, vector name mangling follows the specific convention below to differentiate between scalar and vector versions of functions.

```output
'_ZGV' <isa> <mask> <vlen> <signature> '_' <original_name>
```

- **isa**: `n` for Neon, `s` for SVE.
- **mask**: `M` for the masked/predicated version, `N` for unmasked. Only masked routines are defined for SVE, and only unmasked routines for Neon.
- **vlen**: the vector length expressed as a number of lanes. For Neon, `vlen` is `2` in double precision and `4` in single precision; for SVE, `vlen` is `x`.
- **signature**: `v` for one floating-point or integer input argument, `vv` for two. More details can be found in the AArch64 vector function ABI.
- **original_name**: the name of the scalar libm function.

Please refer to the [Arm Performance Library reference guide](https://developer.arm.com/documentation/101004/latest/) for more information.