We present SONIQUE, a model for generating background music tailored to video content. Unlike traditional video-to-music generation approaches, which rely heavily on paired audio-visual datasets, SONIQUE leverages unpaired data, combining royalty-free music and independent video sources. By utilizing large language models (LLMs) for video understanding and converting visual descriptions into musical tags, alongside a U-Net-based conditional diffusion model, SONIQUE enables customizable music generation. Users can control specific aspects of the music, such as instruments, genres, tempo, and melodies, ensuring the generated output fits their creative vision. SONIQUE is open-source, with a demo available online.
Performance: On an NVIDIA RTX 4090, the full pipeline completes in under a minute and requires less than 14 GB of GPU memory. On an NVIDIA RTX 3070 Laptop GPU with 8 GB of memory, generation takes about 6 minutes.
- Install
- Model Checkpoint
- Data Collection & Preprocessing
- Video-to-Music Generation
- Output Tuning
- Subjective Evaluation
- Citation
- Clone this repo
- Create a conda environment:

  ```bash
  conda env create -f environment.yml
  ```
- Activate the environment, navigate to the project root, and run:

  ```bash
  pip install .
  ```
- After installation, you can run the demo with a web UI:

  ```bash
  python run_gradio.py --model-config best_model.json --ckpt-path ./ckpts/stable_ep=220.ckpt
  ```
- To run the demo without the UI:

  ```bash
  python inference.py --model-config best_model.json --ckpt-path ./ckpts/stable_ep=220.ckpt
  ```
`inference.py` accepts the following options:

| Option | Description | Default |
| :- | :- | :- |
| `--use-video` | Use the input video as a condition | `False` |
| `--input-video` | Path to the input video | `None` |
| `--use-init` | Use a melody condition | `False` |
| `--init-audio` | Path to the melody condition audio | `None` |
| `--llms` | Large language model used to convert the video description into tags | `Mistral 7B` |
| `--low-resource` | If `True`, the models in the video-to-tags stage run in 4-bit; only set this to `False` if you have enough GPU memory | `True` |
| `--instruments` | Instrument condition | `None` |
| `--genres` | Genre condition | `None` |
| `--tempo-rate` | Tempo condition | `None` |
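For example, an invocation that conditions on a video plus extra musical tags might look like the following (the video path and tag values here are illustrative, not files shipped with this repo):

```bash
python inference.py --model-config best_model.json --ckpt-path ./ckpts/stable_ep=220.ckpt \
  --use-video --input-video ./my_clip.mp4 \
  --instruments "piano, strings" --genres "cinematic" --tempo-rate "90bpm"
```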
The pretrained model can be downloaded here. Please download, unzip, and place it in the root of this project:
```
sonique/
├── ckpts/
│   ├── .../
├── sonique/
├── run_gradio.py
...
```
In SONIQUE, tag generation for training starts by feeding raw musical data into LP-MusicCaps to generate initial captions. These captions are processed by Qwen 14B in two steps: first it converts the captions into tags, then it cleans the data by removing any incorrect or misleading tags (e.g., "Low Quality"). This yields a clean set of tags for training; a sketch of the two steps is shown below.
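The following is a minimal sketch of that two-step caption-to-tag pipeline, not SONIQUE's exact training code: the checkpoint name, prompt wording, and `ask` helper are all illustrative assumptions.

```python
# Sketch: convert LP-MusicCaps captions into clean tags with an LLM.
# Model name and prompts are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen-14B-Chat"  # assumed checkpoint; any instruction-tuned LLM works
tok = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
llm = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto", trust_remote_code=True)

def ask(prompt: str) -> str:
    """Single-turn generation helper (hypothetical)."""
    ids = tok(prompt, return_tensors="pt").to(llm.device)
    out = llm.generate(**ids, max_new_tokens=128)
    return tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True)

def caption_to_tags(caption: str) -> list[str]:
    # Step 1: convert an LP-MusicCaps caption into comma-separated musical tags.
    raw = ask("Convert this music caption into short comma-separated tags "
              f"(instruments, genre, mood, tempo):\n{caption}")
    # Step 2: drop incorrect or misleading tags such as "low quality".
    cleaned = ask("Remove tags that describe audio quality rather than musical "
                  f"attributes; return the remaining tags unchanged:\n{raw}")
    return [t.strip() for t in cleaned.split(",") if t.strip()]
```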
SONIQUE is a multi-model tool built on stable_audio_tools, Video_LLaMA, and popular LLMs from Hugging Face.
First, Video_LLaMA extracts a description of the input video. That description is then passed to an LLM, which converts it into tags describing suitable background music (a hedged sketch of this stage follows the list below). The currently supported LLMs are:
- Mistral 7B (default)
- Qwen 14B
- Gemma 7B (requires authentication with Google)
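Below is a minimal sketch of the description-to-tags step, assuming the default Mistral 7B loaded in 4-bit as the `--low-resource` option implies; the checkpoint name, prompt wording, and helper are assumptions rather than SONIQUE's exact code.

```python
# Sketch: convert a Video_LLaMA description into music tags with a 4-bit LLM.
# Checkpoint name and prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

name = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed Mistral 7B checkpoint
quant = BitsAndBytesConfig(load_in_4bit=True,
                           bnb_4bit_compute_dtype=torch.float16)
tok = AutoTokenizer.from_pretrained(name)
llm = AutoModelForCausalLM.from_pretrained(name, quantization_config=quant,
                                           device_map="auto")

def description_to_tags(description: str) -> str:
    # Ask for tags describing background music that fits the described scene.
    prompt = ("[INST] Suggest background-music tags (genre, instruments, mood, "
              f"tempo) for a video described as: {description} [/INST]")
    ids = tok(prompt, return_tensors="pt").to(llm.device)
    out = llm.generate(**ids, max_new_tokens=64)
    return tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True)
```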
Users can then fine-tune the music generation by providing additional prompts or specifying negative prompts. The final output is background music that matches both the video and user preferences.
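As a rough illustration of how such prompt and negative-prompt conditioning can be wired up with stable_audio_tools: the conditioning keys below follow Stable Audio conventions, and the prompts and sampler settings are assumptions, not SONIQUE's exact inference code.

```python
# Hedged sketch: text-conditioned generation with a negative prompt via
# stable_audio_tools. Prompts and sampler settings are assumptions.
import json
import torch
from stable_audio_tools.models.factory import create_model_from_config
from stable_audio_tools.models.utils import load_ckpt_state_dict
from stable_audio_tools.inference.generation import generate_diffusion_cond

device = "cuda" if torch.cuda.is_available() else "cpu"
with open("best_model.json") as f:
    model_config = json.load(f)
model = create_model_from_config(model_config)
model.load_state_dict(load_ckpt_state_dict("./ckpts/stable_ep=220.ckpt"))
model = model.to(device)

seconds = 30
conditioning = [{"prompt": "calm piano, soft strings, 90bpm",
                 "seconds_start": 0, "seconds_total": seconds}]
negative = [{"prompt": "drums, distorted guitar",
             "seconds_start": 0, "seconds_total": seconds}]

audio = generate_diffusion_cond(
    model,
    steps=100,
    cfg_scale=7.0,
    conditioning=conditioning,
    negative_conditioning=negative,  # steers generation away from these tags
    sample_size=seconds * model_config["sample_rate"],
    device=device,
)
```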
We generated a demo with seven examples using SONIQUE. These generated videos were evaluated by a group of 38 individuals, including artists with video-editing backgrounds and music technology students.
Overall, 75% of users rated the generated audio as somewhat, very, or perfectly related to the video, with "perfectly related" being the most common rating. This positive feedback highlights SONIQUE's effectiveness in producing audio that aligns well with video content. However, 25% of users found the audio to have little or no relation to the video, indicating that the model sometimes struggles to capture the mood or to sync the music with specific video events.
Please consider citing the project if it helps your research:
```bibtex
@misc{zhang2024soniquevideobackgroundmusic,
  title={SONIQUE: Video Background Music Generation Using Unpaired Audio-Visual Data},
  author={Liqian Zhang and Magdalena Fuentes},
  year={2024},
  eprint={2410.03879},
  archivePrefix={arXiv},
  primaryClass={cs.SD},
  url={https://arxiv.org/abs/2410.03879},
}
```