
## Host machine requirements

This Learning Path demonstrates how to improve the performance of camera pipelines using KleidiAI and KleidiCV on Arm. You’ll need an Arm64 machine, preferably running an Ubuntu-based distribution. The instructions have been tested on Ubuntu 24.04.

## Install required software

Make sure the following tools are installed:
- **Git** – version control, for cloning the AI camera pipelines codebase
- **Git LFS** – extension to Git for managing large files using lightweight pointers
- **Docker** – an open-source container platform for running applications in isolated environments
- **OpenMP runtime (`libomp`)** – LLVM's OpenMP runtime library, required for enabling parallel execution during application performance optimization
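
Git, Git LFS, and Docker are covered in the sections that follow. For the OpenMP runtime, a typical installation looks like this (the package and formula names below are assumptions, not taken from this Learning Path):

{{< tabpane code=true >}}
{{< tab header="Linux/Ubuntu" language="bash">}}
# assumed Ubuntu package name for LLVM's OpenMP runtime
sudo apt install -y libomp-dev
{{< /tab >}}
{{< tab header="macOS" language="bash">}}
# assumed Homebrew formula name
brew install libomp
{{< /tab >}}
{{< /tabpane >}}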

### Git and Git LFS

Install with the commands for your OS:

{{< tabpane code=true >}}
{{< tab header="Linux/Ubuntu" language="bash">}}
sudo apt update
sudo apt install -y git git-lfs
# one-time LFS setup on this machine:
git lfs install
{{< /tab >}}
{{< tab header="macOS" language="bash">}}
brew install git git-lfs
# one-time LFS setup on this machine:
git lfs install
{{< /tab >}}
{{< /tabpane >}}
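
To confirm both tools are available on your PATH, print their versions (the version numbers on your machine will differ):

```bash
git --version       # prints the installed Git version
git lfs version     # prints the installed Git LFS version
```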

### Docker

Check that Docker is installed:

```bash { output_lines="2" }
docker --version
Docker version 27.3.1, build ce12230
```

If you see "`docker: command not found`," follow the [Docker Install Guide](https://learn.arm.com/install-guides/docker/).
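
If Docker is installed, you can also verify that the daemon is running and that your user can access it by running the standard `hello-world` image:

```bash
docker run --rm hello-world
```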

{{% notice Note %}}
You might need to log in again or restart your machine for the changes to take effect.
---

title: Overview
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall

---

## KleidiAI

[KleidiAI](https://gitlab.arm.com/kleidi/kleidiai) is an open-source library of optimized, performance-critical routines (micro-kernels) for AI workloads on Arm CPUs. These routines are tuned for specific Arm microarchitectures to maximize performance and are designed for straightforward integration into C/C++ ML and AI frameworks.


Several popular AI frameworks already take advantage of KleidiAI to improve performance on Arm platforms.

## KleidiCV

[KleidiCV](https://gitlab.arm.com/kleidi/kleidicv) is an open-source library that provides high-performance image-processing functions for AArch64. It is lightweight and simple to integrate, and computer-vision frameworks such as OpenCV can leverage KleidiCV to accelerate image processing on Arm devices.

## AI camera pipelines

This Learning Path provides three example applications that combine AI and computer vision (CV) techniques:

- Background blur
- Low-light enhancement (LLE)
- Neural denoising

## Background blur and low-light enhancement

The applications:

- Use input and output images in **PNG** format with three **RGB** channels (8-bit per channel, often written as **RGB8**)
- Convert images to **YUV 4:2:0** for processing
- Apply the relevant effect (background blur or low-light enhancement)
- Convert the processed images back to **RGB8** and save as **.png**
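
As a point of reference, an RGB8 image stores 3 bytes per pixel, while YUV 4:2:0 stores 1.5 bytes per pixel because the two chroma planes are subsampled to half resolution in each dimension; for a 1920x1080 frame that is roughly 6.2 MB versus 3.1 MB.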

## Background blur

The background blur pipeline is implemented as follows:

![Background blur pipeline diagram showing RGB8 input, conversion to YUV 4:2:0, blur applied to background mask, and reconversion to RGB8 alt-text#center](blur_pipeline.png "Background blur pipeline")

## Low-light enhancement

The low-light enhancement pipeline is adapted from the LiveHDR+ method proposed by Google Research (2017):

![Low-light enhancement pipeline diagram with burst capture, alignment/merge, coefficient prediction network (LiteRT), tone mapping, and RGB output alt-text#center](lle_pipeline.png "Low-light enhancement pipeline")

The low-resolution coefficient-prediction network (implemented with LiteRT) performs operations such as:

- Strided convolutions
- Local feature extraction using convolutional layers
- Global feature extraction using convolutional and fully connected layers
- Add, convolve, and reshape ops

## Neural denoising

Every smartphone photographer has experienced it: images that look sharp in daylight but degrade in dim lighting. This is because **signal-to-noise ratio (SNR)** drops sharply when sensors capture fewer photons. At 1000 lux, the signal dominates and images look clean; at 1 lux, readout noise becomes visible as grain, color speckling, and loss of fine detail.
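
In the shot-noise-limited case, SNR grows roughly with the square root of the number of photons collected: a pixel that gathers 10,000 photons reaches an SNR of about 100:1, while one that gathers only 100 photons drops to about 10:1, before read noise is even accounted for.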

That’s why **neural camera denoising** is a critical, computationally demanding stage in modern camera pipelines. Done well, it can transform noisy frames into sharp, vibrant captures; done poorly, it leaves smudges and artifacts.

As shown below, the neural-denoising pipeline uses two algorithms:

- **Temporal** denoising, `ultralite` in the repository (uses a history of previous frames)
- **Spatial** denoising, `collapsenet` in the repository
- Or a combination of both

![Neural denoising pipeline diagram showing temporal path (with frame history) and spatial path, followed by fusion and output alt-text#center](denoising_pipeline.png "Neural denoising pipeline")

The neural denoising application works on frames as emitted by a camera sensor in Bayer format:
- The input frames are in RGGB 1080x1920x4 format
- The output frames are in YGGV 4x1080x1920 format
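
In both layouts, the 4 is the number of color channels per frame: the input uses a channels-last layout (1080x1920x4), while the output uses a channels-first layout (4x1080x1920).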

## Download the AI camera pipelines project

Clone the project repository:
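
The exact clone command is not shown in this excerpt. A typical invocation, assuming the repository URL referenced in the "Dive deeper" section below and the `ai-camera-pipelines.git` directory name used in later steps, is:

```bash
# Assumption: URL and target directory are inferred from later steps, not quoted from the original
git clone https://git.gitlab.arm.com/kleidi/kleidi-examples/ai-camera-pipelines ai-camera-pipelines.git
```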

Build the Docker image that provides the build environment:

```bash
docker build -t ai-camera-pipelines -f docker/Dockerfile \
    docker/
```

## Build the AI camera pipelines

Start a shell in the container you just built:

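The exact `docker run` invocation is not shown here. A minimal sketch, assuming the image tag from the build step and a bind mount so that files created inside the container remain visible on the host, might be:

```bash
# Assumptions: the mount point (/work) and working directory are illustrative only
docker run -it --rm -v "$PWD:/work" -w /work ai-camera-pipelines
```
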
Inside the container, run the following commands:

```bash
ENABLE_SME2=0
TENSORFLOW_GIT_TAG="v2.19.0"

# Build flatbuffers
git clone https://github.com/google/flatbuffers.git
cd flatbuffers
# ... remaining build steps omitted in this excerpt ...
tar cfz example/install.tar.gz install
```

Leave the container by pressing `Ctrl+D`.

## Notes on the CMake configuration options

The `cmake` command-line options relevant to this learning path are:

| Command-line option | Description |
|-------------------------------------|----------------------------------------------------------------------------------------------|
| `ENABLE_SME2=$ENABLE_SME2` | SME2 (Scalable Matrix Extension 2) is disabled in this build with `ENABLE_SME2=0`. |
| `ARMNN_TFLITE_PARSER=0` | Configures the `ai-camera-pipelines` repository to use LiteRT with XNNPack instead of ArmNN. |
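
As an illustration of how these options are passed when configuring the project (a sketch only; the project's actual `cmake` invocation contains additional options and paths):

```bash
# Illustrative sketch: only the two options documented above are shown
cmake -S . -B build \
  -DENABLE_SME2=$ENABLE_SME2 \
  -DARMNN_TFLITE_PARSER=0
cmake --build build
```
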
Back on the host, extract the installation archive:

```bash
tar xfz ai-camera-pipelines.git/install.tar.gz
mv install ai-camera-pipelines
```

## Dive deeper into the AI camera pipelines

The AI camera pipelines
[repository](https://git.gitlab.arm.com/kleidi/kleidi-examples/ai-camera-pipelines)
contains the full source code for the pipelines used in this Learning Path.

Set up a Python environment and install the required packages:

```bash
python3 -m venv venv
pip install -r ai-camera-pipelines.git/docker/python-requirements.txt
```

## Background blur

Run the background blur pipeline (`bin/cinematic_mode`) with `resources/test_input.png` as the input image; the transformed image is written to `test_output.png`.

![example image alt-text#center](test_input2.png "Input image")
![example image alt-text#center](test_output2.png "Image with blur applied")

## Low-light enhancement

Run the low-light enhancement pipeline with `resources/test_input.png` as the input image; the transformed image is written to `test_output2_lime.png`.

## Neural denoising

The pipeline processes the frames in three steps:

- The input `.png` files in the `resources/test-lab-sequence/` directory are first converted to the sensor format (RGGB Bayer) and written to `neural_denoiser_io/input_noisy*`
- Those frames are then processed by the neural denoiser, which writes its output to `neural_denoiser_io/output_denoised*`
- Finally, the denoised frames are converted back to `.png` in the `test-lab-sequence-out` directory for easy visualization

![example image alt-text#center](denoising_input_0010.png "Original frame")
![example image alt-text#center](denoising_output_0010.png "Frame with temporal denoising applied")

These benchmarks demonstrate the performance improvements enabled by KleidiCV and KleidiAI:
- KleidiCV enhances OpenCV performance with computation kernels optimized for Arm processors.
- KleidiAI accelerates LiteRT+XNNPack inference using AI-optimized micro-kernels tailored for Arm CPUs.

## Performance with KleidiCV and KleidiAI

By default, the OpenCV library is built with KleidiCV support, and LiteRT+XNNPack is built with KleidiAI support.

You can run the benchmarks using the applications you built earlier.

For example, run the 4K temporal neural-denoiser benchmark with 20 iterations:
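
```bash
# 20 is the number of benchmark iterations to run
bin/neural_denoiser_temporal_benchmark_4K 20
```
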
The output is similar to:

```output
Total run time over 20 iterations: 37.6839 ms
```

From these results, you can see that:
- `cinematic_mode_benchmark` performed 20 iterations in 2028.745 ms
- `low_light_image_enhancement_benchmark` performed 20 iterations in 58.2126 ms
- `neural_denoiser_temporal_benchmark_4K` performed 20 iterations in 37.6839 ms
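
Dividing the total by the iteration count gives the average per-iteration time; for example, 2028.745 ms / 20 is roughly 101.4 ms per frame for the background blur pipeline.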

## Benchmark results without KleidiCV and KleidiAI

Re-run the background blur benchmark (`cinematic_mode_benchmark`). The new output is similar to:

```output
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Total run time over 20 iterations: 2030.5525 ms
```

Re-run the low-light enhancement benchmark:

```bash
bin/low_light_image_enhancement_benchmark 20 resources/HDRNetLIME_lr_coeffs_v1_1_0_mixed_low_light_perceptual_l1_loss_float32.tflite
```

Re-run the neural denoising benchmark. The new output is similar to:

```output
Total run time over 20 iterations: 38.0813 ms
```

## Comparison table and future performance uplift with SME2

| Benchmark | Without KleidiCV+KleidiAI | With KleidiCV+KleidiAI |
|-------------------------------------------|---------------------------|------------------------|
| `neural_denoiser_temporal_benchmark_4K` | 38.0813 ms | 37.6839 ms (-1.04%) |

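The percentages are relative to the build without KleidiCV and KleidiAI; for the neural denoiser row, (37.6839 - 38.0813) / 38.0813 is approximately -1.04%.
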
As shown, the background blur (`cinematic_mode_benchmark`) and neural denoising pipelines gain only a minor improvement, while the low-light enhancement pipeline sees a minor performance degradation (0.26%) when KleidiCV and KleidiAI are enabled.


minutes_to_complete: 30

who_is_this_for: This introductory topic is for mobile and computer-vision developers, camera pipeline engineers, and performance-minded practitioners who want to optimize real-time camera effects on Arm using KleidiAI and KleidiCV.

learning_objectives:
- Build and run AI-powered camera pipeline applications
- Use KleidiCV and KleidiAI to improve the performance of real-time camera pipelines

prerequisites:
- A computer running Arm Linux or macOS with Docker installed

author: Arnaud de Grandmaison

armips:
- Cortex-A
tools_software_languages:
- C++
- Docker
operatingsystems:
- Linux
- macOS