PyHARP is a companion package for HARP, an application which enables the seamless integration of machine learning models for audio into Digital Audio Workstations (DAWs). This repository provides a lightweight wrapper to embed arbitrary Python code for audio processing into Gradio endpoints which are accessible through HARP. In this way, HARP supports offline remote processing with algorithms or models that may be too resource-hungry to run on common hardware. HARP can be run as a standalone or from within DAWs that support external sample editing, such as REAPER, Logic Pro X, or Ableton Live. Please see our main repository for more information and instructions on how to install and run HARP.
If you plan on running or debugging a PyHARP app locally, you will need to install `pyharp`:

```bash
git clone https://github.com/TEAMuP-dev/pyharp
pip install -e pyharp
cd pyharp
```
We provide several examples of how to create a PyHARP app under the `examples/` directory, along with a template for new apps. You can also find a list of models already deployed as PyHARP apps here.
In order to run an app, you will need to install its corresponding dependencies. For example, to install the dependencies for our pitch shifter example:

```bash
pip install -r examples/pitch_shifter/requirements.txt
```
The app can then be run with `app.py`:

```bash
python examples/pitch_shifter/app.py
```

This will create a local Gradio endpoint at the URL `http://localhost:<PORT>`, as well as a forwarded public Gradio endpoint at the URL `https://<RANDOM_ID>.gradio.live/`.
Below, you can see an example command line output after running `app.py`. It shows both the local endpoint (local URL) and the forwarded endpoint (public URL).

You can see your Gradio app in HARP by loading either the local URL or public URL as a custom path in HARP, as is shown below.
Automatically generated Gradio endpoints are only available for 72 hours. If you'd like to keep the endpoint active and share it with other users, you can leverage HuggingFace Spaces (similar hosting services are also available) to host your PyHARP app indefinitely:
- Create a new HuggingFace Space
- Clone the initialized repository locally:
  ```bash
  git clone https://huggingface.co/spaces/<USERNAME>/<SPACE_NAME>
  ```
- Add your files to the repository, commit, then push to the `main` branch:
  ```bash
  git add .
  git commit -m "initial commit"
  git push -u origin main
  ```
Your PyHARP app will then begin running at `https://huggingface.co/spaces/<USERNAME>/<SPACE_NAME>`. The shorthand `<USERNAME>/<SPACE_NAME>` can also be used within HARP to reference the endpoint.
Here are a few tips and best practices when dealing with HuggingFace Spaces:

- Spaces operate based off of the files in the `main` branch
- An access token may be required to push commits to HuggingFace Spaces
- A `README.md` file with metadata will be created automatically when a Space is initialized
  - This file also controls the Gradio version used for the Space
  - HARP does not currently support the latest version of Gradio
  - We recommend using version 4.7.1 at this time
- A `requirements.txt` file specifying all dependencies must be included for a Space to work properly
- A `.gitignore` file should be added to maintain repository orderliness (e.g. to ignore `src/_outputs`)
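As an illustration, the metadata block at the top of a Space's `README.md` might look like the sketch below (the field values are hypothetical for a pitch shifter Space; `sdk_version` is the field that pins the Gradio version):

```yaml
---
title: Pitch Shifter
sdk: gradio
sdk_version: 4.7.1
app_file: app.py
---
```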
PyHARP apps can be accessed from within HARP through the local or forwarded URL corresponding to their active Gradio endpoints, or, if applicable, through the URL corresponding to their dedicated hosting service (e.g. a HuggingFace Space).
A model card defines various attributes of a PyHARP app and helps users understand its intended use. This information is also parsed within HARP and displayed when the model is selected.
The following model card corresponds to our pitch shifter example:
```python
from pyharp import ModelCard

model_card = ModelCard(
    name="Pitch Shifter",
    description="A pitch shifting example for HARP.",
    author="Hugo Flores Garcia",
    tags=["example", "pitch shift"],
    midi_in=False,
    midi_out=False
)
```
In PyHARP, arbitrary audio processing code is wrapped within a single function, `process_fn`, for use with Gradio. The function should accept as an argument a path for input audio and should return a single path for output audio. Additional input arguments associated with the values of any Gradio Components used as GUI controls can also be given.
The following processing code corresponds to our pitch shifter example:
```python
from pyharp import load_audio, save_audio, OutputLabel, LabelList

import torchaudio
import torch


@torch.inference_mode()
def process_fn(input_audio_path, pitch_shift_amount):
    sig = load_audio(input_audio_path)

    ps = torchaudio.transforms.PitchShift(
        sig.sample_rate,
        n_steps=pitch_shift_amount,
        bins_per_octave=12,
        n_fft=512
    )

    sig.audio_data = ps(sig.audio_data)

    output_audio_path = save_audio(sig)

    output_labels = LabelList()
    output_labels.append(OutputLabel(label='short label', t=0.0, description='longer description'))

    return output_audio_path, output_labels
```
The function takes two arguments:

- `input_audio_path`: the filepath of the audio to process
- `pitch_shift_amount`: the amount to pitch shift by (in semitones)

and returns:

- `output_audio_path`: the filepath of the processed audio
- `output_labels`: any labels to display
Note that this code uses the audiotools library from Descript (installation instructions can be found here).
The main Gradio code block for a PyHARP app consists of defining the input and output Gradio Components, which should include an Audio component for both the input and output, and launching the endpoint. The `build_endpoint` function connects the components to the I/O of `process_fn` and extracts HARP-readable metadata from the model card and components to include in the endpoint. Currently, HARP supports the Slider and Textbox components as GUI controls.
The following endpoint code corresponds to our pitch shifter example:
```python
from pyharp import build_endpoint

import gradio as gr


with gr.Blocks() as demo:
    # Define the Gradio interface
    components = [
        gr.Slider(
            minimum=-24,
            maximum=24,
            step=1,
            value=7,
            label="Pitch Shift (semitones)"
        ),
    ]

    # Build a HARP-compatible endpoint
    app = build_endpoint(model_card=model_card,
                         components=components,
                         process_fn=process_fn)

demo.queue()  # see the NOTE below
demo.launch(share=True)
```
NOTE: Make sure the order of the inputs matches the order of the arguments in `process_fn`.

NOTE: All of the `gr.Audio` components MUST have `type="filepath"` in order to work with HARP.

NOTE: In order to be able to cancel an ongoing processing job within HARP, queueing in Gradio needs to be enabled by calling `demo.queue()`.
If you want to build an endpoint that utilizes a pre-trained model, we recommend the following:

- Load the model outside of `process_fn`, so that it is only initialized once
- Store model weights within your app repository using Git Large File Storage
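The first point can be sketched as follows. This is a minimal illustration, not pitch-shifter code: `DummyModel` is a hypothetical stand-in for a real pre-trained model, and `process_fn` is simplified to operate on a list of samples rather than file paths.

```python
# Hypothetical stand-in for a real pre-trained model; in practice this line
# would be something like torch.load(...) or a from_pretrained(...) call.
class DummyModel:
    def __call__(self, samples):
        return [s * 0.5 for s in samples]

# Load the model ONCE at module level, not inside process_fn, so that it is
# initialized a single time when the app starts rather than on every request.
model = DummyModel()

def process_fn(samples, gain):
    # Each call reuses the already-loaded model
    return [s * gain for s in model(samples)]
```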