Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task [AAAI 2023 Workshop]

Model description

This language-music model fine-tunes BART-base on 282,870 English text-music pairs, where all scores are represented in ABC notation. It was introduced in the paper Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task by Wu et al. and released on Hugging Face.

It is capable of generating complete and semantically consistent sheet music directly from natural language descriptions. To the best of our knowledge, this is the first model to achieve text-conditional symbolic music generation trained on real text-music pairs, with the music generated entirely by the model and without any hand-crafted rules.

This language-music model can be tried online at Textune: Generating Tune from Text. On this platform, you can simply input your desired text description and receive generated sheet music from the model.

Due to copyright reasons, we are unable to publicly release the training dataset of this model. Instead, we have made available the WikiMusicText (WikiMT) dataset, which includes 1,010 pairs of text-music data and can be used to evaluate the performance of language-music models.

Intended uses & limitations

You can use this model for text-conditional music generation. All scores generated by this model can be written on one stave (for vocal solo or instrumental solo) in standard classical notation, and are in a variety of styles, e.g., blues, classical, folk, jazz, pop, and world music. The generated tunes are in ABC notation, and can be converted to sheet music or audio using this website, or this software.

Its creativity is limited: it cannot perform well on tasks requiring a high degree of creativity (e.g., melody style transfer), and it is sensitive to the exact wording of the input. For more information, please check our paper.

How to use

1. Install dependencies:

torch                        1.9.1+cu111
Unidecode                    1.3.4
samplings                    0.1.7
transformers                 4.18.0
2. Set the input text in input_text.txt for conditional music generation:
This is a traditional Irish dance music.
Note Length-1/8
Meter-6/8
Key-D

You can use the following types of text as prompts: 1) musical analysis (e.g., tonal analysis and harmonic analysis), 2) meta-information (e.g., key and meter), 3) the context in which the piece was composed (e.g., history and story), and 4) subjective perceptions (e.g., sentiment and preference).

Of course, you can also mix several types of descriptions together. In the folder output_tunes, we provide some samples of inputs and outputs.
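As an illustration, a prompt like the example above can be assembled programmatically before writing it to input_text.txt. Note that build_prompt is a hypothetical helper for this README, not part of the repository; the field names (Note Length, Meter, Key) follow the example above.

```python
def build_prompt(description, meta=None):
    """Compose a prompt string: free-text description followed by
    optional meta-information lines in the 'Field-Value' form shown above.
    (Illustrative helper, not part of this repository.)"""
    lines = [description]
    for key, value in (meta or {}).items():
        lines.append(f"{key}-{value}")
    return "\n".join(lines)


prompt = build_prompt(
    "This is a traditional Irish dance music.",
    {"Note Length": "1/8", "Meter": "6/8", "Key": "D"},
)
# Write the assembled prompt to the input file read by run_inference.py.
with open("input_text.txt", "w") as f:
    f.write(prompt)
```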

3. Run the script run_inference.py. When running the script for the first time, the downloaded files will be cached for future reuse.
python run_inference.py -num_tunes 3 -max_length 1024 -top_p 0.9 -temperature 1.0 -seed 0
4. Enjoy the tunes in the folder output_tunes! If you want to convert these ABC tunes to MIDI or audio, please refer to Intended uses & limitations.
X:1
L:1/8
M:6/8
K:D
 A | BEE BEE | Bdf edB | BAF FEF | DFA BAF | BEE BEE | Bdf edB | BAF DAF | FED E2 :: A |
 Bef gfe | faf edB | BAF FEF | DFA BAF | Bef gfe | faf edB | BAF DAF | FED E2 :|

X:2
L:1/8
M:6/8
K:D
 A |: DED F2 A | d2 f ecA | G2 B F2 A | E2 F GFE | DED F2 A | d2 f ecA | Bgf edc |1 d3 d2 A :|2
 d3 d2 a || a2 f d2 e | f2 g agf | g2 e c2 d | e2 f gfe | fed gfe | agf bag | fed cde | d3 d2 a |
 agf fed | Adf agf | gfe ecA | Ace gfe | fed gfe | agf bag | fed cde | d3 d2 ||

X:3
L:1/8
M:6/8
K:D
 BEE BEE | Bdf edB | BAF FEF | DFA dBA | BEE BEE | Bdf edB | BAF FEF |1 DED DFA :|2 DED D2 e |:
 faf edB | BAF DFA | BAF FEF | DFA dBA | faf edB | BAF DFA | BdB AFA |1 DED D2 e :|2 DED DFA ||
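For downstream processing, the header fields of a generated tune (X, L, M, K, and so on) can be read with a few lines of Python. This is a simplified reading of the ABC header syntax for illustration, not a full ABC parser:

```python
def parse_abc_headers(tune):
    """Extract ABC header fields (e.g. X, L, M, K) from a tune string.
    Stops at the first non-header line, which starts the tune body.
    (Simplified sketch; a full ABC parser handles more cases.)"""
    headers = {}
    for line in tune.splitlines():
        line = line.strip()
        # Header lines look like 'K:D': a single letter, a colon, a value.
        if len(line) > 1 and line[1] == ":" and line[0].isalpha():
            headers[line[0]] = line[2:].strip()
        elif headers:
            break
    return headers
```

For the first tune above, this yields {"X": "1", "L": "1/8", "M": "6/8", "K": "D"}.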

Usage

optional arguments:
  -h, --help            show this help message and exit
  -num_tunes NUM_TUNES  the number of independently computed returned tunes
  -max_length MAX_LENGTH
                        integer to define the maximum length in tokens of each
                        tune
  -top_p TOP_P          float to define the tokens that are within the sample
                        operation of text generation
  -temperature TEMPERATURE
                        the temperature of the sampling operation
  -seed SEED            seed for random state
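The -top_p and -temperature flags control how the next token is sampled during generation. A minimal sketch of temperature scaling followed by nucleus (top-p) filtering, written against raw logits; this illustrates the technique and is not the repository's implementation:

```python
import math
import random


def top_p_sample(logits, top_p=0.9, temperature=1.0, rng=None):
    """Sample a token index: scale logits by temperature, keep the smallest
    set of tokens whose cumulative probability reaches top_p, then sample
    from that renormalized nucleus. (Illustrative sketch only.)"""
    rng = rng or random.Random(0)
    # Temperature scaling: values below 1.0 sharpen the distribution.
    scaled = [x / temperature for x in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the highest-probability tokens until their mass reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Sample proportionally within the nucleus.
    r = rng.random() * mass
    acc = 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]
```

With a strongly peaked distribution and a small top_p, only the most likely token survives the filter, which is why low top_p values make the output more deterministic.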

BibTeX entry and citation info

@inproceedings{wu2023exploring,
  title={Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task},
  author={Shangda Wu and Maosong Sun},
  booktitle={The AAAI-23 Workshop on Creative AI Across Modalities},
  year={2023},
  url={https://openreview.net/forum?id=QmWXskBhesn}
}
