diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/Streamline.png b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/Streamline.png new file mode 100644 index 0000000000..e02ea645ce Binary files /dev/null and b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/Streamline.png differ diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_index.md b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_index.md new file mode 100644 index 0000000000..1271520b0d --- /dev/null +++ b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_index.md @@ -0,0 +1,38 @@ +--- +title: Profile the performance of ML models on Arm + +minutes_to_complete: 60 + +who_is_this_for: This is an introductory topic for software developers who want to learn how to profile the performance of their ML models running on Arm devices. + +learning_objectives: + - Profile the execution times of ML models on Arm devices. + - Profile ML application performance on Arm devices. + +prerequisites: + - An Arm-powered Android smartphone, and USB cable to connect with it. + +author_primary: Ben Clark + +### Tags +skilllevels: Introductory +subjects: ML +armips: + - Cortex-X + - Cortex-A + - Mali + - Immortalis +tools_software_languages: + - Android Studio + - tflite +operatingsystems: + - Android + - Linux + + +### FIXED, DO NOT MODIFY +# ================================================================================ +weight: 1 # _index.md always has weight of 1 to order correctly +layout: "learningpathall" # All files under learning paths have this same wrapper +learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content. +--- diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_next-steps.md b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_next-steps.md new file mode 100644 index 0000000000..f468cb1b80 --- /dev/null +++ b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_next-steps.md @@ -0,0 +1,20 @@ +--- +next_step_guidance: You might be interested in learning how to profile your Unity apps on Android. 
+ +recommended_path: /learning-paths/smartphones-and-mobile/profiling-unity-apps-on-android/ + +further_reading: + - resource: + title: Arm Streamline User Guide + link: https://developer.arm.com/documentation/101816/latest/ + type: documentation + + + +# ================================================================================ +# FIXED, DO NOT MODIFY +# ================================================================================ +weight: 21 # set to always be larger than the content in this path, and one more than 'review' +title: "Next Steps" # Always the same +layout: "learningpathall" # All files under learning paths have this same wrapper +--- diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_review.md b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_review.md new file mode 100644 index 0000000000..7eae5a8b1b --- /dev/null +++ b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_review.md @@ -0,0 +1,45 @@ +--- +review: + - questions: + question: > + Streamline Profiling lets you profile: + answers: + - Arm CPU activity + - Arm GPU activity + - when your Neural Network is running + - All of the above + correct_answer: 4 + explanation: > + Streamline will show you CPU and GPU activity (and a lot more counters!), and if Custom Activity Maps are used, you can see when your Neural Network and other parts of your application are running. + + - questions: + question: > + Does Android Studio have a profiler? + answers: + - "Yes" + - "No" + correct_answer: 1 + explanation: > + Yes, Android Studio has a built-in profiler that can be used to monitor the memory usage of your app among other things + + - questions: + question: > + Is there a way to profile what is happening inside your Neural Network? + answers: + - Yes, Streamline just shows you out of the box + - No. + - Yes, ArmNN's ExecuteNetwork can do this + - Yes, Android Studio Profiler can do this + correct_answer: 3 + explanation: > + Standard profilers don't have an easy way to see what is happening inside an ML framework to see a model running inside it. ArmNN's ExecuteNetwork can do this for TensorFlow Lite models, and ExecuTorch has tools that can do this for PyTorch models. 
+
+
+# ================================================================================
+# FIXED, DO NOT MODIFY
+# ================================================================================
+title: "Review"                   # Always the same title
+weight: 20                        # Set to always be larger than the content in this path
+layout: "learningpathall"         # All files under learning paths have this same wrapper
+---
diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/android-profiling-version.png b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/android-profiling-version.png
new file mode 100644
index 0000000000..7e058f009f
Binary files /dev/null and b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/android-profiling-version.png differ
diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/app-profiling-android-studio.md b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/app-profiling-android-studio.md
new file mode 100644
index 0000000000..9f8508f3a8
--- /dev/null
+++ b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/app-profiling-android-studio.md
@@ -0,0 +1,45 @@
+---
+title: Memory Profiling with Android Studio
+weight: 4
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Android Memory Profiling
+Memory is often a constraint in ML, as models and data keep growing. For profiling an Android app's memory, Android Studio has a built-in profiler that you can use to monitor your app's memory usage and to find memory leaks.
+
+To find the Profiler, open your project in Android Studio and click on the *View* menu, then *Tool Windows*, and then *Profiler*. This opens the Profiler window. Attach your device in Developer Mode with a USB cable, and then you should be able to select your app's process. Here there are a number of different profiling tasks available.
+
+Most likely with an Android ML app you'll need to look at memory from both the Java/Kotlin side and the native side. The Java/Kotlin side is where the app runs, and may be where buffers are allocated for input and output if, for example, you're using LiteRT (formerly known as TensorFlow Lite). The native side is where the ML framework will run. Examining memory consumption for Java/Kotlin and for native code is done with two separate tasks in the Profiler: *Track Memory Consumption (Java/Kotlin Allocations)* and *Track Memory Consumption (Native Allocations)*.
+
+Before you start either task, you have to build your app for profiling. The instructions for this, and for general profiling setup, are in the [Android Studio profiling documentation](https://developer.android.com/studio/profile). You will want to start the correct profiling version of the app depending on the task.
+
+![Android Studio profiling run types alt-text#center](android-profiling-version.png "Figure 1. Profiling run versions")
+
+For the Java/Kotlin side, you want the **debuggable** "Profile 'app' with complete data", which is based on the debug variant. For the native side, you want the **profileable** "Profile 'app' with low overhead", which is based on the release variant.
+
+### Java/Kotlin
+
+To look at the [Java/Kotlin side](https://developer.android.com/studio/profile/record-java-kotlin-allocations), choose *Profiler: Run 'app' as debuggable*, and then select the *Track Memory Consumption (Java/Kotlin Allocations)* task. Navigate to the part of the app you wish to profile, and then you can start profiling. The bottom of the Profiling window should look like Figure 2 below.
Click *Start Profiler Task*.
+
+![Android Studio Start Profile alt-text#center](start-profile-dropdown.png "Figure 2. Start Profile")
+
+When you're ready, *Stop* the profiling again. You will now see a timeline graph of memory usage. Android Studio has a more polished interface for the Java/Kotlin side than for the native side, but the key for the timeline graph may be missing. The key is shown in Figure 3 below, so you can refer to its colors.
+![Android Studio memory key alt-text#center](profiler-jk-allocations-legend.png "Figure 3. Memory key for the Java/Kotlin Memory Timeline")
+
+The default heights of the Profiling view, and of the timeline graph within it, are usually too small, so adjust them to get a readable graph. You can click at different points of the graph to see the memory allocations at that time. Using the key, you can see how much memory is allocated by Java, Native, Graphics, Code, and so on.
+
+Looking further down, you can see the *Table* of Java/Kotlin allocations for your selected time on the timeline. With ML, a lot of your allocations are likely to be `byte[]` for byte buffers, or possibly `int[]` for image data. Clicking on the data type opens up the individual allocations, showing their size and when they were allocated. This helps you quickly narrow down what they are used for, and whether they are all needed.
+
+### Native
+
+For the [native side](https://developer.android.com/studio/profile/record-native-allocations), the process is similar but with different options. Choose *Profiler: Run 'app' as profileable*, and then select the *Track Memory Consumption (Native Allocations)* task. Here you have to *Start profiler task from: Process Start*. Choose *Stop* once you've captured enough data.
+
+The native view doesn't have the same timeline graph as the Java/Kotlin side, but it does have the *Table* and *Visualization* tabs. The *Table* tab no longer has a list of allocations, but instead offers options to *Arrange by allocation method* or *callstack*. Choose *Arrange by callstack*, and then you can trace which functions allocate significant memory. Potentially more useful, you can also see the Remaining Size.
+
+In the *Visualization* tab you can see the callstack as a graph, and once again you can look at the total Allocations Size or the Remaining Size. If you look at Remaining Size, you can see what is still allocated at the end of the profiling, and by looking a few steps up the stack you can usually tell which allocations are related to the ML model, because the function names relate to the framework you are using. A lot of the memory may be allocated by that framework rather than in your code, and you may not have much control over it, but it is useful to know where the memory is going.
+
+## Other platforms
+
+On other platforms, you will need a different memory profiler, but the objective is the same: work out where the memory is being used, and whether there are leaks or simply too much memory in use. There are often trade-offs between memory and speed, and you can weigh them more sensibly once you know the numbers involved.
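+
+As an illustration of the Java/Kotlin allocations discussed above, the sketch below shows one common way to reduce `byte[]`/`ByteBuffer` churn when feeding an ML model: allocate one direct buffer up front and reuse it for every inference. This is a minimal, hypothetical example: the 224x224 RGB float input shape and the class and method names are assumptions, not taken from the example app, but the pattern applies to most interpreters that accept a `ByteBuffer` input.
+
+```kotlin
+import java.nio.ByteBuffer
+import java.nio.ByteOrder
+
+// Hypothetical holder that pre-allocates a single direct buffer for a
+// 1 x 224 x 224 x 3 float input and reuses it, instead of allocating a new
+// byte[] or ByteBuffer for every frame (which shows up as churn in the
+// Java/Kotlin allocations table).
+class InputBufferHolder {
+    val inputBuffer: ByteBuffer = ByteBuffer
+        .allocateDirect(1 * 224 * 224 * 3 * 4) // batch * height * width * channels * bytes per float
+        .order(ByteOrder.nativeOrder())
+
+    // Copy the pre-processed pixel values into the reused buffer.
+    fun fill(pixels: FloatArray): ByteBuffer {
+        inputBuffer.rewind()
+        for (value in pixels) {
+            inputBuffer.putFloat(value)
+        }
+        inputBuffer.rewind()
+        return inputBuffer
+    }
+}
+```
+
+If you profile before and after a change like this, the Java/Kotlin allocations table gives you a direct way to confirm that the per-inference allocations have gone.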
diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/app-profiling-streamline.md b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/app-profiling-streamline.md
new file mode 100644
index 0000000000..e55e4e172d
--- /dev/null
+++ b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/app-profiling-streamline.md
@@ -0,0 +1,249 @@
+---
+title: Profile your application with Streamline
+weight: 3
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Application Profiling
+Application profiling can be split into two main types: *instrumentation* and *sampling*. [Streamline](https://developer.arm.com/Tools%20and%20Software/Streamline%20Performance%20Analyzer), for example, is a sampling profiler that takes regular samples of various counters and registers in the system to provide a detailed view of the system's performance. Sampling only provides a statistical view, but it is less intrusive and has less processing overhead than instrumentation.
+
+The profiler can look at memory, CPU activity and cycles, cache misses, and many parts of the GPU, as well as other performance metrics. It can also provide a timeline view of these counters to show the application's performance over time. This will reveal bottlenecks and help you understand where to focus your optimization efforts.
+
+![Streamline image alt-text#center](Streamline.png "Figure 1. Streamline timeline view")
+
+## Example Android Application
+
+In this Learning Path, you will profile [an example Android application](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference) using Streamline.
+Start by cloning the repository containing this example on your machine and open it in a recent version of Android Studio. It is generally safest not to update the Gradle version when prompted.
+
+## Streamline
+You will install Streamline and Arm Performance Studio on your host machine and connect to your target Arm device to capture the data. In this example, the target device is an Arm-powered Android phone. The data is captured over a USB connection and then analyzed on your host machine.
+
+For more details on Streamline usage, you can refer to these [tutorials and training videos](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio). The example you are running here is an Android app; if you are profiling a Linux application instead, you can use [the setup and capture instructions for Linux](https://developer.arm.com/documentation/101816/0903/Getting-started-with-Streamline/Profile-your-Linux-application).
+
+First, follow these [setup instructions](https://developer.arm.com/documentation/102477/0900/Setup-tasks?lang=en) to make sure you have `adb` (Android Debug Bridge) installed. If you have installed [Android Studio](https://developer.android.com/studio), you will already have `adb`. Otherwise, you can get it as part of the [Android SDK platform tools](https://developer.android.com/studio/releases/platform-tools.html).
+
+Make sure `adb` is in your path. You can check this by running `adb` in a terminal. If it is not in your path, you can add the [Android SDK `platform-tools`](https://developer.android.com/tools/releases/platform-tools#downloads) directory to your path.
+
+Next, install [Arm Performance Studio](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio#Downloads), which includes Streamline.
+
+Connect your Android phone to your host machine through USB.
Ensure that your Android phone is set to [Developer mode](https://developer.android.com/studio/debug/dev-options).
+
+On your phone, go to `Settings > Developer Options` and enable USB Debugging. If your phone asks you to authorize the connection to your host machine, confirm this. Test the connection by running `adb devices` in a terminal. You should see your device ID listed.
+
+Next, you need a debuggable build of the application you want to profile.
+- In Android Studio, ensure your *Build Variant* is set to `debug`. You can then build the application and install it on your device.
+- For a Unity app, select *Development Build* under *File > Build Settings* when building your application.
+- In Unreal Engine, open *Project Settings > Project > Packaging > Project*, and ensure that the *For Distribution* checkbox is not set.
+- In the general case, you can set `android:debuggable=true` in the application manifest file.
+
+For the example application that you cloned earlier, the Build Variant is `debug` by default, but you can verify this by going to `Build > Select Build Variant` in Android Studio. Build and install this application on your device.
+
+You can now run Streamline and [capture a profile](https://developer.arm.com/documentation/102477/0900/Capture-a-profile?lang=en) of your application. But before you do, let's add some useful annotations to your code that allow more specific performance analysis of your application.
+
+## Custom Annotations
+
+In Streamline, it is possible to add custom annotations to the timeline view. This can be useful to mark the start and end of specific parts of your application, or to mark when a specific event occurs, so that you can relate the performance data to these events. At the bottom of *Figure 1* above there are custom annotations that show when inference, pre-processing, and post-processing are happening.
+
+To add annotations, you will need to add some files to your project from the **gator** daemon that Streamline uses. These files are named `streamline_annotate.c`, `streamline_annotate.h` and `streamline_annotate_logging.h`, and they are available [here](https://github.com/ARM-software/gator/tree/main/annotate). Using these annotations, you will be able to show log strings, markers, counters and Custom Activity Maps. Within your example project, create a `cpp` folder under the `app/src/main` folder, and add these three files there.
+
+These files are written in C, so if your Android Studio project is in Java or Kotlin, you will need to add a C library to your project. This is slightly trickier than just adding a Java or Kotlin file, but it is not difficult. You can find instructions on how to do this [here](https://developer.android.com/studio/projects/add-native-code).
+
+Create a file in the `app/src/main/cpp/` folder under your project and name it `annotate_jni_wrapper.c`. This will be a wrapper around the gator daemon's functions, and will be called from your Kotlin code. Copy the code below into this file. You can also create very similar wrapper functions for other gator daemon functions.
+
+```c
+#include <jni.h>
+#include "streamline_annotate.h"
+
+JNIEXPORT void JNICALL Java_AnnotateStreamline_AnnotateSetup(JNIEnv* env, jobject obj) {
+    gator_annotate_setup();
+}
+
+JNIEXPORT jlong JNICALL Java_AnnotateStreamline_GetTime(JNIEnv* env, jobject obj) {
+    return gator_get_time();
+}
+```
+
+Note that JNI function names must encode the package of the Kotlin class that declares the `external` functions: the names above are correct for a class in the root (default) package, while a class declared in, for example, a hypothetical `com.example` package would need `Java_com_example_AnnotateStreamline_AnnotateSetup`.
+
+Some functions have `unsigned int` arguments, which need to be `jint` in the wrapper, with some casting required in your Kotlin code to enforce type correctness at that end. Some functions have strings as arguments, and for those you will need to do a small conversion, as shown below:
+
+```c
+JNIEXPORT void JNICALL Java_AnnotateStreamline_AnnotateMarkerColorStr(JNIEnv* env, jobject obj, jint color, jstring str) {
+    const char* nativeStr = (*env)->GetStringUTFChars(env, str, 0);
+    gator_annotate_marker_color(color, nativeStr);
+    (*env)->ReleaseStringUTFChars(env, str, nativeStr);
+}
+```
+
+In Android Studio, CMake is used to build your C library, so you will need a `CMakeLists.txt` file in the same directory as the C files (`app/src/main/cpp/` in the example). Copy the contents shown below into `CMakeLists.txt`:
+
+```cmake
+# Sets the minimum CMake version required for this project.
+cmake_minimum_required(VERSION 3.22.1)
+
+# Declare the project name.
+project("StreamlineAnnotationJNI")
+
+# Create and name the library.
+add_library(${CMAKE_PROJECT_NAME} SHARED
+    annotate_jni_wrapper.c
+    streamline_annotate.c)
+
+# Specifies libraries CMake should link to your target library.
+# Adding in the Android system log library pulls in the NDK path.
+find_library( # Sets the path to the NDK library.
+        log-lib
+        log )
+
+target_link_libraries( # Specifies the target library.
+        ${CMAKE_PROJECT_NAME}
+        ${log-lib} )
+```
+
+Now add the code below to the `build.gradle` file of the module you wish to profile (`:app` in the example), so that the library is built and you can call its functions from your Kotlin code:
+
+```gradle
+    externalNativeBuild {
+        cmake {
+            path file('src/main/cpp/CMakeLists.txt')
+            version '3.22.1'
+        }
+    }
+```
+
+This will create a `libStreamlineAnnotationJNI.so` library that you can load in your Kotlin code, and then you can call its functions. Here you will create a singleton `AnnotateStreamline.kt`. For the example, place the file alongside `MainActivity.kt` in `app/src/main/java/com/arm/armpytorchmnistinference`. Add the following code to `AnnotateStreamline.kt` to enable Kotlin calls to the gator daemon from the rest of your code:
+
+```kotlin
+// Kotlin wrapper class for integration into Android project
+class AnnotateStreamline {
+    init {
+        // Load the native library
+        System.loadLibrary("StreamlineAnnotationJNI")
+    }
+
+    companion object {
+        // #defines for colors from the Streamline Annotation C code
+        const val ANNOTATE_RED: UInt = 0x0000ff1bu
+        const val ANNOTATE_BLUE: UInt = 0xff00001bu
+        const val ANNOTATE_GREEN: UInt = 0x00ff001bu
+        const val ANNOTATE_PURPLE: UInt = 0xff00ff1bu
+        const val ANNOTATE_YELLOW: UInt = 0x00ffff1bu
+        // any other constants you want from the included gator files
+
+        // Create an instance of the AnnotateStreamline class
+        private val annotations = AnnotateStreamline()
+
+        // Function to set up the Streamline Annotations - call this first
+        @JvmStatic
+        fun setup() {
+            annotations.AnnotateSetup()
+        }
+
+        // Function to get the current time from gator
+        @JvmStatic
+        fun getTime(): Long {
+            return annotations.GetTime()
+        }
+
+        // more functions that you want, e.g. (note the UInt conversion)
+        @JvmStatic
+        fun annotateMarkerColorStr(color: UInt, str: String) {
+            annotations.AnnotateMarkerColorStr(color.toInt(), str)
+        }
+        // ...
+    }
+
+    // externals match the last part of the function names in annotate_jni_wrapper.c
+    external fun AnnotateSetup()
+    external fun GetTime(): Long
+    external fun AnnotateMarkerColorStr(color: Int, str: String)
+    // ...
+}
+```
+
+Fill in all the function calls to match the functions you added to `annotate_jni_wrapper.c`.
+
+The `AnnotateStreamline` class can now be used in your Kotlin code to add annotations to the Streamline timeline view. The first thing is to make sure `AnnotateStreamline.setup()` is called before any other gator functions. For the example project, add it to the `onCreate()` function of `MainActivity.kt`. Then you can add annotations like this:
+
+```kotlin
+    AnnotateStreamline.annotateMarkerColorStr(AnnotateStreamline.ANNOTATE_BLUE, "Model Load")
+```
+
+In the example app, you could add this in the `onCreate()` function of `MainActivity.kt` after the `Module.load()` call that loads `model.pth`.
+
+This 'colored marker with a string' annotation adds the string and time to Streamline's log view, and appears in Streamline's timeline as shown in the image below (in the example app ArmNN isn't used, so there are no white ArmNN markers):
+
+![Streamline image alt-text#center](streamline_marker.png "Figure 2. Streamline timeline markers")
+
+## Custom Activity Maps (CAMs)
+
+In addition to adding strings to the log and colored markers to the timeline, a particularly useful set of annotations is the Custom Activity Maps. These are the named colored bands you can see at the bottom of the Streamline timeline view shown in *Figure 1*. They can be used to show when specific parts of your application are running, such as the pre-processing or inference, and they can be layered, for example for functions within functions.
+
+To add these, you will need to import the functions that start with `gator_cam_` from `streamline_annotate.h` through your wrapper files, in the same way as the functions above. Then you can use CAMs, but first you will need to set up the tracks the annotations will appear on, and an id system for each annotation. The `baseId` code below ensures that if you add annotations in multiple places in your code, the ids stay unique.
+
+Here is an example setup in a class's companion object:
+
+```kotlin
+    companion object {
+        const val camViewId = 1u
+        const val trackRoot = 1u
+        const val trackChild = 2u
+        var baseId = (0u..UInt.MAX_VALUE/2u - 5000u).random()
+        var currentId = baseId
+
+        init {
+            AnnotateStreamline.camViewName(camViewId, "Inference")
+            AnnotateStreamline.camTrack(camViewId, trackRoot, 0xffffffffu, "Root") // the root track wants -1 (0xffffffffu) as its parent id
+            AnnotateStreamline.camTrack(camViewId, trackChild, trackRoot, "Children")
+        }
+    }
+```
+
+For the example app, add this to the `MainActivity` class.
+
+Then it can be used like this:
+
+```kotlin
+    val preprocess = currentId++
+    AnnotateStreamline.camJobStart(camViewId, preprocess, "Preprocess", trackRoot, AnnotateStreamline.getTime(), AnnotateStreamline.ANNOTATE_YELLOW)
+    val childjob = currentId++
+    AnnotateStreamline.camJobStart(camViewId, childjob, "child job", trackChild, AnnotateStreamline.getTime(), AnnotateStreamline.ANNOTATE_CYAN)
+    // child job code...
+    AnnotateStreamline.camJobEnd(camViewId, childjob, AnnotateStreamline.getTime())
+    // rest of the preprocessing code...
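+    // (Illustrative addition, not from the original example:) you can nest more child
+    // jobs on the child track to break the preprocessing down further, reusing the
+    // same camJobStart/camJobEnd wrappers and one of the color constants defined above.
+    val normalizeJob = currentId++
+    AnnotateStreamline.camJobStart(camViewId, normalizeJob, "Normalize", trackChild, AnnotateStreamline.getTime(), AnnotateStreamline.ANNOTATE_GREEN)
+    // ... normalization code ...
+    AnnotateStreamline.camJobEnd(camViewId, normalizeJob, AnnotateStreamline.getTime())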
+ AnnotateStreamline.camJobEnd(camViewId, preprocess, AnnotateStreamline.getTime()) +``` + +In the example app, the CAM annotations are added to the `runInference()` function, which should look like this: + +```kotlin + private fun runInference(bitmap: Bitmap) { + val preprocess = currentId++ + AnnotateStreamline.camJobStart(camViewId, preprocess, "Preprocess", trackRoot, AnnotateStreamline.getTime(), AnnotateStreamline.ANNOTATE_YELLOW) + // Convert bitmap to a float array and create a tensor with shape [1, 1, 28, 28] + val inputTensor = createTensorFromBitmap(bitmap) // could add a child CAM job inside function call, but probably too simple + AnnotateStreamline.camJobEnd(camViewId, preprocess, AnnotateStreamline.getTime()) + + // Run inference and measure time + val inferenceTimeMicros = measureTimeMicros { + // Forward pass through the model + val inference = currentId++ + AnnotateStreamline.camJobStart(camViewId, inference, "Inference", trackRoot, AnnotateStreamline.getTime(), AnnotateStreamline.ANNOTATE_RED) + val outputTensor = model.forward(IValue.from(inputTensor)).toTensor() + AnnotateStreamline.camJobEnd(camViewId, inference, AnnotateStreamline.getTime()) + // and then post-processing is simplistic in this case, so not worth a CAM job + val scores = outputTensor.dataAsFloatArray + + // Get the index of the class with the highest score + val maxIndex = scores.indices.maxByOrNull { scores[it] } ?: -1 + predictedLabel.text = "Predicted Label: $maxIndex" + } + + // Update inference time TextView in microseconds + inferenceTime.text = "Inference Time: $inferenceTimeMicros µs" + } +``` + +The example application is very fast and simple, so the CAMs will not show much information. In a more complex application you could add more CAMs, including child-level ones, to give more detailed annotations to show where time is spent in your application. For this example app with its very fast inference, it's best to change the Streamline timeline view scale to 10µs in order to see the CAM annotations better. + +Once you've added in useful CAM annotations, you can build and deploy a debug version of your application. You can run Streamline and see the annotations and CAMs in the timeline view. See the [Streamline documentation](https://developer.arm.com/documentation/101816/latest/) for how to make a capture for profiling. After the capture is made and analyzed, you will be able to see when your application is running the inference, ML pre-processing, ML post-processing, or other parts of your application. From there you can see where the most time is spent, and how hard the CPU or GPU is working during different parts of the application. From this you can then decide if work is needed to improve performance and where that work needs doing. diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-executenetwork.md b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-executenetwork.md new file mode 100644 index 0000000000..f4ca26994d --- /dev/null +++ b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-executenetwork.md @@ -0,0 +1,85 @@ +--- +title: ML profiling of a tflite model with ExecuteNetwork +weight: 6 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## ArmNN's Network Profiler +One way of running tflite models is with ArmNN. This is available as a delegate to the standard tflite interpreter. But to profile the model, ArmNN comes with a command-line utility called `ExecuteNetwork`. 
This program runs the model on its own, without the rest of the app, and it can output layer timings and other useful information to let you know where there might be bottlenecks within your model.
+
+If you are using tflite without ArmNN, then the output from `ExecuteNetwork` will be more of an indication than a definitive answer, but it can still be useful for spotting obvious problems.
+
+To try this out, you can download a tflite model from the [Arm Model Zoo](https://github.com/ARM-software/ML-zoo). In this Learning Path, you will download [mobilenet tflite](https://github.com/ARM-software/ML-zoo/blob/master/models/image_classification/mobilenet_v2_1.0_224/tflite_int8/mobilenet_v2_1.0_224_INT8.tflite).
+
+You can download `ExecuteNetwork` from the [ArmNN GitHub releases](https://github.com/ARM-software/armnn/releases). Download the version appropriate for the Android phone you wish to test on, matching its Android version and architecture. If you are unsure of the architecture, you can use a lower one, but you may miss out on some optimizations. `ExecuteNetwork` is included inside the `tar.gz` archive that you download. Note that among the other release downloads on the ArmNN GitHub page is a separate `aar` file for the delegate, which is the easiest way to include the ArmNN delegate in your app.
+
+To run `ExecuteNetwork` you'll need to use `adb` to push the model and the executable to your phone, and then run it from the adb shell. `adb` is included with Android Studio, but you may need to add it to your path. On Windows, Android Studio normally installs it to a location like `%USERPROFILE%\AppData\Local\Android\Sdk\platform-tools`. `adb` can also be downloaded separately from the [Android Developer site](https://developer.android.com/studio/releases/platform-tools).
+
+Extract the `tar.gz` archive you downloaded. From a command prompt, you can then adapt and run the following commands to push the files to your phone. The `/data/local/tmp` folder of your Android device is a place with relaxed permissions that you can use to run this profiling.
+
+```bash
+adb push mobilenet_v2_1.0_224_INT8.tflite /data/local/tmp/
+adb push ExecuteNetwork /data/local/tmp/
+adb push libarm_compute.so /data/local/tmp/
+adb push libarmnn.so /data/local/tmp/
+adb push libarmnn_support_library.so /data/local/tmp/
+# more ArmNN .so library files
+```
+Push all the `.so` library files that are in the base folder of the `tar.gz` archive you downloaded, alongside `ExecuteNetwork`, and all the `.so` files in the `delegate` sub-folder. If you are using a recent version of Android Studio, this copying can be done much more easily with drag and drop in *Device Explorer > Files*.
+
+Then you need to set the permissions on the files:
+
+```bash
+adb shell
+cd /data/local/tmp
+chmod 777 ExecuteNetwork
+chmod 777 *.so
+```
+
+Now you can run `ExecuteNetwork` to profile the model. With the example tflite, you can use the following command:
+
+```bash
+LD_LIBRARY_PATH=. ./ExecuteNetwork -m mobilenet_v2_1.0_224_INT8.tflite -c CpuAcc -T delegate --iterations 2 --do-not-print-output --enable-fast-math --fp16-turbo-mode -e --output-network-details > modelout.txt
+```
+
+If you are using your own tflite, replace `mobilenet_v2_1.0_224_INT8.tflite` with the name of your tflite file.
+
+This will run the model twice, outputting the layer timings to `modelout.txt`.
The `--iterations 2` flag is what makes it run twice: the first run includes a lot of startup costs and one-off optimizations, so the second run is more indicative of the real performance.
+
+The other flags to note are `-e` and `--output-network-details`, which output a lot of timeline information about the model, including the layer timings. The `--do-not-print-output` flag suppresses printing of the model's output, which can be very large and, without sensible input, is meaningless. The `--enable-fast-math` and `--fp16-turbo-mode` flags enable some math optimizations. `CpuAcc` selects the accelerated CPU backend; it can be replaced with `GpuAcc` for the accelerated GPU backend.
+
+After running the model, you can pull the output file back to your host machine with the following commands:
+
+```bash
+exit
+adb pull /data/local/tmp/modelout.txt
+```
+Once again, this can be done with drag and drop in Android Studio's *Device Explorer > Files*.
+
+Depending on the size of your model, the output will probably be quite large. You can use a text editor to view the file. The output is in JSON format, so you can use a JSON viewer to make it more readable. Usually some scripting can be used to extract the information you need more easily from the very raw data in the file.
+
+At the top is the summary, with the setup time and inference time of your two runs, which will look something like this:
+
+```text
+Info: ArmNN v33.2.0
+Info: Initialization time: 7.20 ms.
+Info: ArmnnSubgraph creation
+Info: Parse nodes to ArmNN time: 50.99 ms
+Info: Optimize ArmnnSubgraph time: 85.94 ms
+Info: Load ArmnnSubgraph time: 91.11 ms
+Info: Overall ArmnnSubgraph creation time: 228.47 ms
+
+Info: Execution time: 721.91 ms.
+Info: Inference time: 722.02 ms
+
+Info: Execution time: 468.42 ms.
+Info: Inference time: 468.58 ms
+```
+
+After the summary comes the graph of the model, then the layers and their timings from the second run. At the start of the layers, there are a few optimizations and their timings recorded before the network itself. You can skip past the graph and the optimization timings to get to the part that needs analyzing.
+
+In the mobilenet example output, the graph runs from line 18 to line 1629. After this come the optimization timings, which are part of the runtime but not the network; these go until line 1989. Next there are a few wall clock recordings for the loading of the network, before the first layer, "Convolution2dLayer_CreateWorkload_#18", at line 2036. This is where the layer information that needs analyzing starts.
+
+The layers' "Wall clock time" in microseconds shows how long each took to run. These layers and their timings can then be analyzed to see which layers, and which operators, took the most time.
diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-general.md b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-general.md
new file mode 100644
index 0000000000..91a35381f1
--- /dev/null
+++ b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-general.md
@@ -0,0 +1,16 @@
+---
+title: Profiling the Neural Network
+weight: 5
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Profiling your model
+App profilers give you a good overall view of performance, but often you will want to look inside the model and find the bottlenecks within the network. The network is often where most of the time goes, in which case it warrants closer analysis.
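+
+Before diving in, it is worth confirming that the network really is where the time goes. A minimal sketch of one way to do this from Kotlin is shown below: wrap only the inference call in a timer and average over a few runs. The `timeInference` helper and the idea of passing the inference call as a lambda are illustrative assumptions, not part of any framework API.
+
+```kotlin
+import kotlin.system.measureNanoTime
+
+// Illustrative helper: average the time of just the inference call over a few
+// iterations, so you can compare it against the whole frame or request time.
+fun timeInference(iterations: Int = 10, inference: () -> Unit): Double {
+    var totalNanos = 0L
+    repeat(iterations) {
+        totalNanos += measureNanoTime { inference() }
+    }
+    return totalNanos / iterations / 1e6 // average milliseconds per inference
+}
+```
+
+If the inference call dominates the total time, the next step is to look inside the network itself.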
+
+With general profilers this is hard to do, as annotations are needed inside the ML framework code to get the information. It is a large task to write profiling annotations throughout a framework, so it is easier to use tools from a framework or inference engine that already has the required instrumentation.
+
+Depending on your model, your choice of tools will differ. For example, if you are using LiteRT (formerly TensorFlow Lite), Arm provides the ArmNN delegate, which you can use to run the model on Linux or Android, on the CPU or GPU. ArmNN in turn provides a tool called `ExecuteNetwork` that can run the model and give you layer timings, among other useful information.
+
+If you are using PyTorch, you will probably use ExecuTorch, the on-device inference runtime, for your Android phone. ExecuTorch has a profiler available alongside it.
diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/plan.txt b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/plan.txt
new file mode 100644
index 0000000000..70e7667178
--- /dev/null
+++ b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/plan.txt
@@ -0,0 +1,20 @@
+
+want the performance of your ML app
+memory and compute
+
+how can you find that out
+
+different steps:
+- ML network
+- app around the ML network, especially pre and post processing, and the network as a whole
+
+for around the ML network - streamline profiler
+here's how to do that...
+Also Android Profiler, memory example
+
+Ml network, it will depend on the inference engine you are using
+- here's an example for if you are using ArmNN with TFLite
+- if you're not using it, it may still have some useful information, but different operators will be used and their performance will be different
+can see structure with netron or google model explorer to compare operators or different versions of networks
+may need to use a conversion tool to convert to TFLite (or whatever your inference engine wants)
+
diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/profiler-jk-allocations-legend.png b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/profiler-jk-allocations-legend.png
new file mode 100644
index 0000000000..a9dfadfe0d
Binary files /dev/null and b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/profiler-jk-allocations-legend.png differ
diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/start-profile-dropdown.png b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/start-profile-dropdown.png
new file mode 100644
index 0000000000..e7d16270f8
Binary files /dev/null and b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/start-profile-dropdown.png differ
diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/streamline_marker.png b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/streamline_marker.png
new file mode 100644
index 0000000000..e7ec90f36e
Binary files /dev/null and b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/streamline_marker.png differ
diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/why-profile.md b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/why-profile.md
new file mode 100644
index 0000000000..7d688a4ad6
--- /dev/null
+++ b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/why-profile.md
@@ -0,0 +1,23 @@
+---
+title: Why do you need to profile your ML application?
+weight: 2
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Performance
+Working out where your application spends its time and memory is the first step to getting the performance you want. Profiling can help you identify the bottlenecks in your application and understand how to optimize it.
+
+With Machine Learning (ML) applications, the inference of the Neural Network (NN) itself is often the heaviest part of the application in terms of computation and memory usage. This is not guaranteed, however, so it is important to profile the application as a whole to check whether pre-processing, post-processing, or other code is an issue.
+
+In this Learning Path, you will profile an example Android application and a TFLite model, but most of the steps shown also work on Linux and apply to a wide range of Arm devices. The principles of profiling are the same for other inference engines and platforms, but the tools differ.
+
+## Tools
+
+You will use different tools to profile the ML inference itself and the performance of the application as a whole on your Arm device.
+
+For profiling the ML inference, you will use [ArmNN](https://github.com/ARM-software/armnn/releases)'s `ExecuteNetwork`.
+
+For profiling the application as a whole, you will use Streamline from [Arm Performance Studio](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio), and the Android Studio Profiler.
+