SongComposer

This repository is the official implementation of SongComposer.

SongComposer: A Large Language Model for Lyric and Melody Composition in Song Generation

Shuangrui Ding*1, Zihan Liu*2,3, Xiaoyi Dong3, Pan Zhang3, Rui Qian1, Conghui He3, Dahua Lin3, Jiaqi Wang†3

1The Chinese University of Hong Kong, 2Beihang University, 3Shanghai AI Laboratory

* Equal contribution. † Corresponding author.

📜 News

🚀 [2024/3/21] The finetuning code of SongComposer is publicly available, and the weights of SongComposer_pretrain and SongComposer_sft are available on Hugging Face🤗.

🚀 [2024/2/28] The paper and demo page are released!

💡 Highlights

  • 🔥 SongComposer composes melodies and lyrics with symbolic song representations, offering better token efficiency, precise representation, a flexible format, and human-readable output.
  • 🔥 SongCompose-PT, a comprehensive pretraining dataset that includes lyrics, melodies, and paired lyrics and melodies in either Chinese or English, will be released.
  • 🔥 SongComposer outperforms advanced LLMs like GPT-4 in tasks such as lyric-to-melody generation, melody-to-lyric generation, song continuation, and text-to-song creation.

👨‍💻 Todo

  • Release of SongCompose-PT dataset
  • Online Demo of SongComposer
  • Code of SongComposer
  • Demo of SongComposer

🛠️ Usage

Requirements

  • python 3.9 and above
  • pytorch 2.0 and above
  • CUDA 12.0 and above recommended (for GPU users)
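The version minimums above can be sketched as a quick self-check (a minimal sketch using only the standard library; the helper name is illustrative, not part of this repository):

```python
import sys

def meets_requirements(py_version, torch_version):
    """Return True when the (major, minor) pairs satisfy the README's
    minimums: Python >= 3.9 and PyTorch >= 2.0."""
    return py_version >= (3, 9) and torch_version >= (2, 0)

# Check the running interpreter; for PyTorch, parse torch.__version__
# into a (major, minor) tuple the same way.
print(meets_requirements(tuple(sys.version_info[:2]), (2, 0)))
```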

Installation

Before running the code, make sure you have set up the environment and installed the required packages, and that you meet the requirements above. Please refer to the installation section of the finetune scripts.

Quickstart

We provide a simple example to show how to use SongComposer-SFT with 🤗 Transformers.

🤗 Transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

ckpt_path = "Mar2Ding/songcomposer_sft"
tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)
# The checkpoint ships a custom model class (hence trust_remote_code)
# that provides the `inference` helper used below.
model = AutoModelForCausalLM.from_pretrained(ckpt_path, trust_remote_code=True).cuda().half()
prompt = "Create a song on brave and sacrificing with a rapid pace."
model.inference(prompt, tokenizer)
```
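The snippet above assumes a CUDA GPU (`.cuda().half()`). A device-agnostic variant (a sketch; the helper name is ours, not part of the repository) can pick the device and dtype at run time:

```python
import torch

def pick_device_and_dtype():
    """Use CUDA with half precision when a GPU is present,
    otherwise fall back to CPU with full precision."""
    if torch.cuda.is_available():
        return "cuda", torch.float16
    return "cpu", torch.float32

device, dtype = pick_device_and_dtype()
# model = AutoModelForCausalLM.from_pretrained(
#     ckpt_path, trust_remote_code=True, torch_dtype=dtype
# ).to(device)
```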

Finetune

Please refer to our finetune scripts.

Inference

We provide a notebook (inference.ipynb) for the inference stage.

⭐ Samples

Audio samples, including our SongComposer and other baselines, are available on our Demo Page. The samples span four tasks related to song generation, covering both English and Chinese.

✒️ Citation

If you find our work helpful for your research, please consider giving a star ⭐ and a citation 📝:

@misc{ding2024songcomposer,
      title={SongComposer: A Large Language Model for Lyric and Melody Composition in Song Generation}, 
      author={Shuangrui Ding and Zihan Liu and Xiaoyi Dong and Pan Zhang and Rui Qian and Conghui He and Dahua Lin and Jiaqi Wang},
      year={2024},
      eprint={2402.17645},
      archivePrefix={arXiv},
      primaryClass={cs.SD}
}