diff --git a/docs/overview.md b/docs/overview.md
index 2959328..5a668cb 100644
--- a/docs/overview.md
+++ b/docs/overview.md
@@ -30,7 +30,7 @@ SPDX-License-Identifier: MIT
│ User Mode Driver │ │ NPU compiler │
│ │ │ │
│ intel-level-zero-npu ◀═══▶ intel-driver-compiler-npu │
- │ (libze_intel_vpu.so) │ │ (libnpu_driver_compiler.so) │
+ │ (libze_intel_npu.so) │ │ (libnpu_driver_compiler.so) │
│ │ │ │
└──────────────────▲──────────────────┘ └────────────────────────────────────────┘
╚════════════════════╗
@@ -51,6 +51,15 @@ SPDX-License-Identifier: MIT
## Changelog
+
+Driver library name change from libze_intel_vpu.so to libze_intel_npu.so (from v1.16.0)
+
+Starting from the v1.16.0 release, the driver library has been renamed from `libze_intel_vpu.so` to
+`libze_intel_npu.so`. The old library name is still supported by the Level Zero loader for backward
+compatibility, but using the new name is recommended. Level Zero versions older than v1.17.17 still
+require the old library name to be kept.
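+
+A quick way to check which driver library names are visible to the dynamic linker (a sketch; it
+assumes the library is registered in the `ldconfig` cache):
+
+```bash
+# List the NPU driver libraries known to the dynamic linker
+ldconfig -p | grep -E "libze_intel_(npu|vpu)"
+```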
+
+
zeMutableCommandList extension implementation (from v1.6.0)
@@ -111,7 +120,7 @@ set, the `ZE_INTEL_NPU_LOGMASK` allows to target specific log group. The log
group are listed in
[umd/vpu_driver/source/utilities/log.hpp](../umd/vpu_driver/source/utilities/log.hpp#L19)
-```
+```bash
# Set log level to INFO
export ZE_INTEL_NPU_LOGLEVEL=INFO
@@ -147,7 +156,7 @@ and set "modularize" for `Intel NPU (Neural Processing Unit)`.
Finding the intel_vpu kernel module in the system
-```
+```bash
# check if the intel_vpu exists is in the system
modinfo intel_vpu
@@ -165,16 +174,24 @@ ls /dev/accel/accel0
```
+## Driver package installation
+
+The driver binary packages and the installation process can be found on the [release
+page](https://github.com/intel/linux-npu-driver/releases). The distributed packages (an installation
+sketch follows the list):
+* intel-fw-npu: firmware binaries
+* intel-level-zero-npu: user space driver library (libze_intel_npu.so)
+* intel-driver-compiler-npu: NPU compiler library (libnpu_driver_compiler.so)
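+
+A minimal installation sketch for Ubuntu, assuming the `.deb` packages for the target distribution
+were downloaded from the release page into the current directory (exact file names and runtime
+dependencies vary per release, so follow the release notes):
+
+```bash
+# Install the downloaded firmware, driver and compiler packages
+sudo dpkg -i *.deb
+
+# Resolve any missing runtime dependencies reported by dpkg
+sudo apt-get install -f
+```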
+
## Building a standalone driver
Install the required dependencies in Ubuntu:
-```
+```bash
sudo apt update
sudo apt install -y build-essential git git-lfs cmake python3
```
Commands to build the driver:
-```
+```bash
cd linux-npu-driver
git submodule update --init --recursive
@@ -197,8 +214,8 @@ Compiler-in-Driver component from [NPU compiler repository](https://github.com/o
OpenVINO runtime is required by compiler. About the dependencies for building OpenVINO,
please check the [OpenVINO build document](https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/build.md).
-To build a compiler from the driver repository the `ENABLE_NPU_COMPILER_BUILD` flag has to be set:
-```
+To build the compiler, the `ENABLE_NPU_COMPILER_BUILD` flag has to be set:
+```bash
cd linux-npu-driver
git submodule update --init --recursive
@@ -226,7 +243,7 @@ The binary `npu-umd-test` is located in the build folder, ex. `build/bin/`.
Command line to run functional tests (after driver installation):
-```
+```bash
npu-umd-test
```
@@ -236,7 +253,7 @@ control the inference test content. Those tests require compiler in system.
Config file requires to download any OpenVINO model. Command line to setup a
`basic.yaml`:
-```
+```bash
# Prepare the add_abc model in path pointed by basic.yaml
mkdir -p models/add_abc
curl -o models/add_abc/add_abc.xml https://raw.githubusercontent.com/openvinotoolkit/openvino/master/src/core/tests/models/ir/add_abc.xml
@@ -250,12 +267,54 @@ More information about config can be found in [validation/umd-test/configs](/val
## Troubleshooting
+
+Device is not detectable
+
+To check if the device is available, the user can use `npu-umd-test` or `hello_query_device` from the OpenVINO sample applications.
+To debug a missing NPU device, `strace` can be used to trace the system calls that initialize the device. Run a test command with `strace`:
+
+```bash
+# Record system calls using strace and npu-umd-test
+strace -o strace.log --trace=file ./build/bin/npu-umd-test
+...
+# Or using OpenVINO python API
+strace -o strace.log --trace=file python -c "from openvino import Core; print(Core().available_devices)"
+...
+```
+> [!WARNING]
+> Since the v1.16.0 release the driver library is named libze_intel_npu.so.1. If
+> libze_intel_vpu.so.1 is still installed by mistake, please remove it from the system.
+
+Analyze the `strace.log` file for system calls that open NPU libraries and device:
+
+```bash
+grep -E "(accel|libnpu_|libze_)" strace.log
+# Example output from the command above:
+...
+# Check if the Level Zero loader is found in system
+openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libze_loader.so.1", O_RDONLY|O_CLOEXEC) = 3
+....
+# libze_intel_vpu.so.1 should not be used after the v1.16.0 release, consider removing it if it is present in the system
+openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/libze_intel_vpu.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
+...
+# Check if driver library is found
+openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libze_intel_npu.so.1", O_RDONLY|O_CLOEXEC) = 3
+...
+# Check if the driver successfully opened accel0. If not, check the next section
+openat(AT_FDCWD, "/dev/accel/accel0", O_RDWR|O_NOFOLLOW|O_CLOEXEC) = 3
+...
+# Check if compiler was found
+openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libnpu_driver_compiler.so", O_RDONLY|O_CLOEXEC) = 3
+...
+```
+
+
Non-root access to the NPU device
To access the NPU device, the user must be in the "render" or "video" group.
A group depends on system configuration:
-```
+```bash
# check user groups
groups
@@ -271,7 +330,7 @@ but might not be available in your Linux distribution. See
If setting the "render" group does not resolve the non-root access issue,
this must be done by an administrator manually:
-```
+```bash
# check device permissions
ls -l /dev/accel/
@@ -286,23 +345,16 @@ $ ls -lah /dev/accel/accel0
crw-rw---- 1 root render 261, 0 Jan 31 15:58 /dev/accel/accel0
```
+
Compilation problem due to lack of memory
-The compilation may fail due to memory shortage. The recommendation is to
-use the Ninja generator instead of Unix Makefiles. If it does not help, please
+The compilation may fail due to a memory shortage. The recommendation is to use the Ninja generator
+instead of Unix Makefiles and to extend the swap memory. If that does not help, please
[file a new issue](https://github.com/intel/linux-npu-driver/issues/new).
-```
-# install Ninja
-sudo apt update
-sudo apt install -y ninja-build
-
-# remove the old build and create a new one
-rm build -rf
-cmake -B build -S . -G Ninja
-```
+
Enable driver log using an environment variable
@@ -310,12 +362,12 @@ Valid logging levels are `ERROR`, `WARNING`, `INFO` (and `VERBOSE` for driver
older than v1.5.0 release)
Seting the logging level using the `ZE_INTEL_NPU_LOGLEVEL` environment variable:
-```
+```bash
export ZE_INTEL_NPU_LOGLEVEL=
```
Command to clear an exported value:
-```
+```bash
unset ZE_INTEL_NPU_LOGLEVEL
```
@@ -323,13 +375,14 @@ Setting `ZE_INTEL_NPU_LOGMASK` allows to print specific log groups in driver.
The log group are listed in
[umd/vpu_driver/source/utilities/log.hpp](../umd/vpu_driver/source/utilities/log.hpp#L19)
-```
+```bash
# Set log level to INFO to enable LOGMASK
export ZE_INTEL_NPU_LOGLEVEL=INFO
# Set log mask to only print from DEVICE, DRIVER and CACHE groups
export ZE_INTEL_NPU_LOGMASK=$((1<<4|1<<3|1<<17))
```
+
@@ -338,7 +391,7 @@ export ZE_INTEL_NPU_LOGMASK=$((1<<4|1<<3|1<<17))
The user can use different kernel and firmware combination for NPU device. The
user might receive the following error message:
-```
+```bash
ERROR! MAPPED_INFERENCE_VERSION is NOT compatible with the ELF Expected: 6.1.0 vs received: 7.0.0
```
@@ -346,9 +399,8 @@ It means that NPU compiler mismatches the NPU firmware. To fix this issue the
user needs to upgrade the firmware. Firmware update should be done from
driver repository using release tag that matches the NPU compiler:
-```
+```bash
cmake -B build -S .
cmake --install build/ --component fw-npu --prefix /
```
-
-
+
\ No newline at end of file
diff --git a/docs/umd-testing.md b/docs/umd-testing.md
new file mode 100644
index 0000000..63c84a6
--- /dev/null
+++ b/docs/umd-testing.md
@@ -0,0 +1,309 @@
+
+
+## Overview
+
+This document describes how to test the NPU driver using the npu-umd-test tool.
+
+## Prerequisites
+
+Install the driver and compile the npu-umd-test tool as described in the [Installation
+Guide](overview.md#driver-package-installation).
+
+Check if the driver is loaded and the device is visible in the system:
+
+```bash
+# The user should be in the 'render' (or 'video') group to access the device
+groups
+...
+
+# Check if the driver is loaded
+ls /dev/accel/
+```
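+
+If `/dev/accel/` is empty, it is worth checking whether the `intel_vpu` kernel module is present and
+loaded (a sketch; reading the kernel log may require root):
+
+```bash
+# Check that the intel_vpu module is available and loaded
+modinfo intel_vpu
+lsmod | grep intel_vpu
+
+# Look for probe errors reported by the module
+sudo dmesg | grep -i intel_vpu
+```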
+
+## Prepare npu-umd-test
+
+The npu-umd-test binary is not provided in the release packages and has to be built separately. The
+build process is described in the [Installation Guide](overview.md#building-a-standalone-driver). If
+the driver is already installed, you can build only the npu-umd-test target:
+
+```bash
+# Clone the repository
+git clone https://github.com/intel/linux-npu-driver.git
+cd linux-npu-driver/
+git submodule update --init --recursive
+
+# Build the npu-umd-test
+cmake -B build -S .
+cmake --build build/ -j11 --target npu-umd-test
+
+# Run the test to verify setup
+./build/bin/npu-umd-test --gtest_filter=Device.GetProperties*
+
+# Make a symbolic link for easy access
+cd ../
+ln -rsf ./linux-npu-driver/build/bin/npu-umd-test npu-umd-test
+```
+
+## Prepare OpenVINO IR model set
+
+The NPU driver supports models in the OpenVINO IR format. OpenVINO can convert models from
+different frameworks, see the [Model
+Preparation](https://docs.openvino.ai/2025/openvino-workflow/model-preparation.html) docs.
+
+The npu-umd-test can run tests against a set of models. The models have to be converted to the
+OpenVINO IR format.
+
+As a first model, add `add_abc.xml` from [overview.md](overview.md):
+```bash
+# Prepare the add_abc model in the path indicated by basic.yaml
+mkdir -p models/add_abc
+curl -o models/add_abc/add_abc.xml https://raw.githubusercontent.com/openvinotoolkit/openvino/master/src/core/tests/models/ir/add_abc.xml
+touch models/add_abc/add_abc.bin
+```
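+
+As a quick sanity check (optional, and assuming the `openvino` Python package is installed, e.g.
+`pip install openvino`), the model can be compiled for the NPU directly through the OpenVINO API:
+
+```bash
+# Compile add_abc for the NPU device to confirm that driver and compiler work
+python3 -c "import openvino; openvino.compile_model('models/add_abc/add_abc.xml', 'NPU'); print('add_abc compiled for NPU')"
+```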
+
+### Imagenet Classification Model - ResNet-50
+
+The npu-umd-test can validate the output of an image classification model trained on the ImageNet
+dataset. Below is an example of how to download a ResNet-50 model from PyTorch and convert it to the
+OpenVINO IR format.
+
+```bash
+python3 -m venv openvino-venv
+source openvino-venv/bin/activate
+pip install --upgrade pip
+# Pytorch CPU index is added to reduce download time, feel free to remove it
+pip install --extra-index-url=https://download.pytorch.org/whl/cpu openvino torch torchvision opencv-python
+```
+
+Take ResNet-50 from PyTorch and add image pre- and post-processing using OpenVINO. This can be done with the following Python code:
+
+```python
+# Download and convert ResNet-50 to OpenVINO IR format using Python
+import openvino
+import os
+import torch
+import torchvision
+
+model = torchvision.models.resnet50(weights='DEFAULT')
+ov_model = openvino.convert_model(model, example_input=torch.rand(1, 3, 224, 224))
+
+ppp = openvino.preprocess.PrePostProcessor(ov_model)
+ppp.input().tensor() \
+ .set_shape([1,224,224,3]) \
+ .set_element_type(openvino.Type.u8) \
+ .set_layout(openvino.Layout('NHWC'))
+ppp.input().preprocess() \
+ .convert_element_type(openvino.Type.f32) \
+ .mean([103.94, 116.78, 123.68]) \
+ .scale([57.21, 57.45, 57.73])
+ppp.input().model().set_layout(openvino.Layout('NCHW'))
+ov_model = ppp.build()
+openvino.save_model(ov_model, "models/resnet50.xml")
+```
+
+Prepare an input image: download it and resize it to 224x224 using OpenCV:
+
+```python
+# Prepare an image using OpenCV
+import cv2
+import os
+import subprocess
+
+os.makedirs("images", exist_ok=True)
+subprocess.run("wget -O images/dog.jpg https://github.com/pytorch/hub/raw/master/images/dog.jpg", shell=True)
+image = cv2.imread("images/dog.jpg")
+resized_image = cv2.resize(image, (224,224))
+cv2.imwrite("images/dog.bmp", resized_image)
+```
+
+Check the model prediction:
+
+```python
+# Check the model prediction using OpenVINO
+import openvino
+import numpy as np
+import cv2
+
+image = cv2.imread("images/dog.bmp")
+tensor = np.expand_dims(image, 0)
+
+compiled_model = openvino.compile_model("models/resnet50.xml", "NPU")
+request = compiled_model.create_infer_request()
+request.infer(tensor)
+predictions = request.get_output_tensor().data
+probs = predictions.reshape(-1)
+top_10 = np.argsort(probs)[-10:][::-1]
+
+header = 'class_id probability'
+
+print('Top 10 results: ')
+print(header)
+print('-' * len(header))
+
+for class_id in top_10:
+ probability_indent = ' ' * (len('class_id') - len(str(class_id)) + 1)
+ print(f'{class_id}{probability_indent}{probs[class_id]:.7f}')
+```
+
+Add the model and the image to the new npu-umd-test config file:
+
+```bash
+cat <<EOF > resnet_config.yaml
+
+model_dir: models/
+image_dir: images/
+
+image_classification_imagenet:
+ - path: resnet50.xml
+ name: resnet50
+ in: [ dog.bmp ]
+ class_index: [ 258 ]
+EOF
+```
+
+Test the image classification model:
+
+```bash
+./npu-umd-test --config=resnet_config.yaml --verbose */resnet50
+```
+
+### Object Detection Model - Yolo
+
+The npu-umd-test does not support output validation for object detection models. The npu-umd-test
+framework can still be used to run inference on these models, but without accuracy checks. Let's
+download a YOLOv8 object detection model and convert it to the OpenVINO IR format. First, install
+the Ultralytics package:
+
+```bash
+# Install the Ultralytics package in the same virtual environment
+source openvino-venv/bin/activate
+pip install ultralytics
+```
+
+Download the YOLOv8s model and convert it to the OpenVINO IR format:
+
+```python
+# Convert YOLOv8s to OpenVINO IR format using Python
+from ultralytics import YOLO
+import os
+
+model = YOLO('yolov8s.pt')
+model.export(format="openvino")
+
+os.rename("yolov8s_openvino_model/yolov8s.xml", "models/yolov8s.xml")
+os.rename("yolov8s_openvino_model/yolov8s.bin", "models/yolov8s.bin")
+```
+
+Add the object detection model to config file:
+
+```bash
+cat <<EOF > yolo_config.yaml
+
+model_dir: models/
+image_dir: images/
+
+graph_execution:
+ - path: yolov8s.xml
+ name: yolov8s
+    # The GraphQueryNetwork* tests require passing any flag accepted by the compiler.
+ # TODO: Fix in the next release after v1.23.0
+ flags: "--config"
+EOF
+```
+
+Test the object detection model:
+
+```bash
+./npu-umd-test --config=yolo_config.yaml --verbose */yolov8s
+```
+
+For more information on Ultralytics and OpenVINO integration, please visit
+https://docs.ultralytics.com/integrations/openvino/
+
+### Run tests
+
+There are many sections in the npu-umd-test configuration file. All sections are described in the
+[validation/umd-test/configs documentation](../validation/umd-test/configs/README.md). Let's set up
+a config file with the models downloaded in the previous section, [Prepare OpenVINO IR model
+set](#prepare-openvino-ir-model-set). The new configuration file covers all available test sections:
+
+```yaml
+# filename: extend.yaml
+model_dir: models/
+image_dir: images/
+
+graph_execution:
+ - path: add_abc/add_abc.xml
+    # The GraphQueryNetwork* tests require passing any flag accepted by the compiler.
+ # TODO: Fix in the next release after v1.23.0
+ flags: "--config"
+ - path: resnet50.xml
+ name: resnet50
+ flags: "--config"
+ in: [ dog.bmp ]
+ class_index: [ 258 ]
+ - path: yolov8s.xml
+ name: yolov8s
+ flags: "--config"
+
+image_classification_imagenet:
+ - path: resnet50.xml
+ name: resnet50
+ in: [ dog.bmp ]
+ class_index: [ 258 ]
+
+driver_cache:
+ - path: resnet50.xml
+ - path: yolov8s.xml
+
+multi_inference:
+ - name: "ObjectRecognitionPipeline"
+ pipeline:
+ - path: resnet50.xml
+ target_fps: 60
+ exec_time_in_secs: 10
+ - path: yolov8s.xml
+ target_fps: 30
+ exec_time_in_secs: 10
+```
+
+Run tests with the new config file:
+
+```bash
+./npu-umd-test --config=extend.yaml
+```
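+
+Since npu-umd-test is based on GoogleTest, the standard GoogleTest flags can be used to list or
+narrow the generated test set (a sketch; the exact test names depend on the config file and driver
+version):
+
+```bash
+# List all tests generated from the config
+./npu-umd-test --config=extend.yaml --gtest_list_tests
+
+# Run only the resnet50 related tests
+./npu-umd-test --config=extend.yaml --gtest_filter='*resnet50*'
+```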
+
+### Run additional tests using options
+
+The npu-umd-test comes with extra test cases:
+* Driver initialization tests that need to run in a new process:
+```bash
+./npu-umd-test --ze-init-tests
+```
+* GPU and NPU tests using the Level Zero API. These require
+  [compute-runtime](https://github.com/intel/compute-runtime/releases) to be installed:
+```bash
+./npu-umd-test --gpu
+```
+* External memory tests using the System DMA Heap. These require access to /dev/dma_heap/system,
+  which is limited to root in Ubuntu:
+```bash
+sudo ./npu-umd-test --dma-heap
+```
+
+### Reference test results
+
+The table below contains test results collected using the [v1.23.0 release](https://github.com/intel/linux-npu-driver/releases/tag/v1.23.0).
+
+|Platform|System|Command|Tests Passed|Tests Skipped|
+|:---:|:---:|:---:|:---:|:---:|
+|Intel(R) Core(TM) Ultra 5 125H|Ubuntu 24.04.3 LTS with HWE Kernel 6.14.0-29-generic|`npu-umd-test --config=extend.yaml`|265/291 passed|26 skipped|
+|||`npu-umd-test --ze-init-tests`|6/6 passed|0 skipped|
+|||`npu-umd-test --gpu`|1/2 passed|1 skipped|
+|||`sudo npu-umd-test --dma-heap`|3/3 passed|0 skipped|
\ No newline at end of file