@@ -3,25 +3,24 @@ title: Run OpenAI Whisper Audio Model efficiently on Arm with Hugging Face Trans

minutes_to_complete: 15

who_is_this_for: This Learning Path is for software developers, ML engineers, and those looking to run Whisper ASR Model on Arm Neoverse based CPUs efficiently and build speech transcription based applications around it.
who_is_this_for: This Learning Path is for software developers looking to run the Whisper automatic speech recognition (ASR) model efficiently. You will use an Arm-based cloud instance to run the model and build speech transcription-based applications.

learning_objectives:
- Install the dependencies to run the Whisper Model
- Run the OpenAI Whisper model using Hugging Face Transformers framework.
- Run the whisper-large-v3-turbo model on Arm CPU efficiently.
- Perform the audio to text transcription with Whisper.
- Observe the total time taken to generate transcript with Whisper.
- Run the OpenAI Whisper model using Hugging Face Transformers.
- Enable performance-enhancing features for running the model on Arm CPUs.
- Compare the total time taken to generate a transcript with Whisper.


prerequisites:
- Amazon Graviton4 (or other Arm) compute instance with 32 cores, 8GB of RAM, and 32GB disk space.
- An [Arm-based compute instance](/learning-paths/servers-and-cloud-computing/intro/) with 32 cores, 8GB of RAM, and 32GB disk space running Ubuntu.
- Basic understanding of Python and ML concepts.
- Understanding of Whisper ASR Model fundamentals.

author: Nobel Chowdary Mandepudi

### Tags
skilllevels: Intermediate
skilllevels: Introductory
armips:
- Neoverse
subjects: ML
@@ -30,7 +29,15 @@ operatingsystems:
tools_software_languages:
- Python
- Whisper
- AWS Graviton
cloud_service_providers: AWS


further_reading:
- resource:
title: Hugging Face Transformers documentation
link: https://huggingface.co/transformers/v4.11.3/index.html
type: documentation


### FIXED, DO NOT MODIFY
# ================================================================================
@@ -10,19 +10,23 @@ layout: "learningpathall"

## Before you begin

This Learning Path demonstrates how to run the whisper-large-v3-turbo model as an application that takes the audio input and computes out the text transcript of it. The instructions in this Learning Path have been designed for Arm servers running Ubuntu 24.04 LTS. You need an Arm server instance with 32 cores, atleast 8GB of RAM and 32GB disk to run this example. The instructions have been tested on a AWS c8g.8xlarge instance.
This Learning Path demonstrates how to run the [whisper-large-v3-turbo model](https://huggingface.co/openai/whisper-large-v3-turbo) as an application that takes an audio input and produces a text transcript of it. The instructions in this Learning Path have been designed for Arm servers running Ubuntu 24.04 LTS. You need an Arm server instance with 32 cores, at least 8GB of RAM, and 32GB of disk space to run this example. The instructions have been tested on an AWS Graviton4 `c8g.8xlarge` instance.

## Overview

OpenAI Whisper is an open-source Automatic Speech Recognition (ASR) model trained on the multilingual and multitask data, which enables the transcript generation in multiple languages and translations from different languages to English. We will explore the foundational aspects of speech-to-text transcription applications, specifically focusing on running OpenAI’s Whisper on an Arm CPU. We will discuss the implementation and performance considerations required to efficiently deploy Whisper using Hugging Face Transformers framework.
OpenAI Whisper is an open-source Automatic Speech Recognition (ASR) model trained on multilingual and multitask data, which enables transcript generation in multiple languages and translation from other languages into English. You will learn about the foundational aspects of speech-to-text transcription applications, specifically focusing on running OpenAI’s Whisper on an Arm CPU. Lastly, you will explore the implementation and performance considerations required to efficiently deploy Whisper using the Hugging Face Transformers framework.

### Speech-to-text ML applications

Speech-to-text (STT) transcription applications transform spoken language into written text, enabling voice-driven interfaces, accessibility tools, and real-time communication services. Audio is first cleaned and converted into a format suitable for processing, then passed through a deep learning model trained to recognize speech patterns. Advanced language models help refine the output, improving accuracy by predicting likely word sequences based on context. Wherever they run, STT applications must balance accuracy, latency, and computational efficiency to meet the needs of diverse use cases.

## Install dependencies

Install the following packages on your Arm based server instance:

```bash
sudo apt update
sudo apt install python3-pip python3-venv ffmpeg -y
sudo apt install python3-pip python3-venv ffmpeg wget -y
```

## Install Python Dependencies
@@ -47,21 +51,18 @@ pip install torch transformers accelerate
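The body of this section is collapsed in the diff, but the packages installed above (`python3-venv`, `python3-pip`) suggest the usual pattern of creating a virtual environment before installing the Python packages. A minimal sketch, assuming a virtual environment named `whisper-env` (the exact names and commands are not visible here):

```bash
# Create and activate a virtual environment for the project (assumed name)
python3 -m venv whisper-env
source whisper-env/bin/activate

# Install the Python packages the transcription script relies on
pip install torch transformers accelerate
```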

## Download the sample audio file

Download a sample audio file, which is about 33sec audio in .wav format or use your own audio file:
Download a sample audio file, which is about 33 seconds of audio in .wav format. You can use any .wav sound file if you'd like to try other examples.
```bash
wget https://www.voiptroubleshooter.com/open_speech/american/OSR_us_000_0010_8k.wav
```
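You can optionally confirm the download and inspect the audio before transcribing it. This is an extra check, not part of the original steps; it assumes `ffprobe`, which is installed alongside the `ffmpeg` package from earlier:

```bash
# Print the container, codec, sample rate, and duration of the sample file
ffprobe -hide_banner OSR_us_000_0010_8k.wav
```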

## Create a Python script for audio-to-text transcription

Create a python file:
You will use the Hugging Face `transformers` framework to help process the audio. It contains classes that configure the model and prepare it for inference. `pipeline` is an end-to-end function for inference tasks. In the code below, it is configured to handle pre- and post-processing of the audio sample as well as running the actual inference.

```bash
vim whisper-application.py
```
Using a file editor of your choice, create a Python file named `whisper-application.py` with the content shown below:

Write the following code in the `whisper-application.py` file:
```python
```python { file_name="whisper-application.py" }
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
import time
@@ -115,20 +116,18 @@ seconds = (duration - ((hours * 3600) + (minutes * 60)))
msg = f'\nInferencing elapsed time: {seconds:4.2f} seconds\n'

print(msg)

```
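The middle of `whisper-application.py` is collapsed in this diff, so the pipeline construction itself is not visible above. The following is a minimal sketch of how such a script typically wires the imported classes together; treat the argument choices as assumptions rather than the exact file contents:

```python
# A minimal sketch, not the exact contents of whisper-application.py.
import time

from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

model_id = "openai/whisper-large-v3-turbo"

# Load the model weights and the processor (feature extractor + tokenizer)
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, low_cpu_mem_usage=True)
processor = AutoProcessor.from_pretrained(model_id)

# Build an end-to-end ASR pipeline that handles pre- and post-processing
asr = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)

# Transcribe the sample and time the inference
start = time.time()
result = asr("OSR_us_000_0010_8k.wav")
duration = time.time() - start

print(result["text"])
print(f"\nInferencing elapsed time: {duration:4.2f} seconds\n")
```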

## Use the Arm specific flags:

Use the following flags to enable fast math GEMM kernels, Linux Transparent Huge Page (THP) allocations, logs to confirm kernel and set LRU cache capacity and OMP_NUM_THREADS to run the Whisper efficiently on Arm machines.
Enable verbose mode for the output and run the script:

```bash
export DNNL_DEFAULT_FPMATH_MODE=BF16
export THP_MEM_ALLOC_ENABLE=1
export LRU_CACHE_CAPACITY=1024
export OMP_NUM_THREADS=32
export DNNL_VERBOSE=1
python3 whisper-application.py
```
{{% notice Note %}}
BF16 support is merged into PyTorch versions greater than 2.3.0.
{{% /notice %}}

You should see output similar to the image below, with log output, a transcript of the audio, and the `Inferencing elapsed time`.

![frontend](whisper_output_no_flags.png)


You've now run the Whisper model successfully on your Arm-based CPU. Continue to the next section to configure flags that can increase the performance of your running model.
@@ -5,24 +5,40 @@ weight: 4
layout: learningpathall
---

## Setting environment variables that impact performance

Speech-to-text applications often process large amounts of audio data in real time, requiring efficient computation to balance accuracy and speed. Low-level implementations of the kernels in the neural network enhance performance by reducing processing overhead. When tailored for specific hardware architectures, such as Arm CPUs, these kernels accelerate key tasks like feature extraction and neural network inference. Optimized kernels ensure that speech models like OpenAI’s Whisper can run efficiently, making high-quality transcription more accessible across various server applications.

The other settings below allow the application to use memory more efficiently. Allocating additional memory and threads to a given task can increase performance. By enabling these hardware-aware options, applications achieve lower latency, reduced power consumption, and smoother real-time transcription.

Set the following environment variables to enable fast math BFloat16 (BF16) GEMM kernels and Linux Transparent Huge Page (THP) allocations, and to configure the LRU cache capacity and the number of OpenMP threads, so that Whisper runs efficiently on Arm machines.

```bash
export DNNL_DEFAULT_FPMATH_MODE=BF16
export THP_MEM_ALLOC_ENABLE=1
export LRU_CACHE_CAPACITY=1024
export OMP_NUM_THREADS=32
```

{{% notice Note %}}
BF16 support is merged into PyTorch versions greater than 2.3.0.
{{% /notice %}}
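Optionally, before relying on `THP_MEM_ALLOC_ENABLE`, you can check which Transparent Huge Page policy the kernel is using. This is an extra check, not part of the original steps:

```bash
# [always] or [madvise] indicates the kernel can back large allocations with THP
cat /sys/kernel/mm/transparent_hugepage/enabled
```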

## Run Whisper File
After installing the dependencies and enabling the Arm specific flags in the previous step, now lets run the Whisper model and analyze it.
After setting the environment variables in the previous step, run the Whisper model again and analyze the performance impact.

Run the `whisper-application.py` file:

```bash
python3 whisper-application.py
```

## Output
## Analyze output

You should see output similar to the image below with the log since we enabled verbose, transcript of the audio and the audio transcription time:
![frontend](whisper_output.png)
You should now observe that the processing time has gone down compared to the last run:

## Analyze
![frontend](whisper_output.png)

The log in the image above contains `attr-fpmath:bf16`, which confirms that fast math BF16 kernels were used in the compute process to improve performance.

It also shows the text transcript of the audio and the `Inferencing elapsed time`.
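If you prefer to confirm the BF16 kernels from the terminal rather than by reading the full log, you can filter the oneDNN verbose output. This is a suggested check, not part of the original instructions; the log file name is arbitrary:

```bash
# Re-run with oneDNN verbose logging, capture the output, and look for the
# BF16 fast-math attribute in the kernel log
export DNNL_VERBOSE=1
python3 whisper-application.py > whisper_run.log 2>&1
grep -m1 "attr-fpmath:bf16" whisper_run.log
```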

By enabling the Arm specific flags as described in the learning path you can see the performance upliftment with the Whisper using Hugging Face Transformers framework on Arm.
By setting the environment variables as described in this Learning Path, you can see the performance uplift when running Whisper with the Hugging Face Transformers framework on Arm.